Hi All,
64-bit Spur can usefully provide an immediate float: a 61-bit subset of the IEEE double-precision floats. The scheme steals bits from the exponent field to make room for the immediate's 3-bit tag pattern. So values have the same precision as IEEE doubles, but can only represent the subset with magnitudes between roughly 10^-38 and 10^38, i.e. the single-precision exponent range. The issue here is how to organize the class hierarchy.
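To make the bit-stealing concrete, here is a sketch in Python of one possible encoding. The tag pattern, the rotate-left-by-one trick, and the 896 exponent re-bias are assumptions for illustration, not necessarily Spur's exact layout:

```python
import struct

TAG_BITS = 3           # low-order bits reserved for the immediate's tag (assumed)
TAG_SMALL_FLOAT = 4    # hypothetical 3-bit tag pattern for immediate floats
EXPONENT_OFFSET = 896  # re-bias the 11-bit exponent into an 8-bit window (1023 - 127)

def encode_small_float(value):
    """Encode an IEEE double as a tagged 61-bit immediate, or return None
    if its exponent is outside the single-precision-like range (needs boxing)."""
    bits = struct.unpack('<Q', struct.pack('<d', value))[0]
    exponent = (bits >> 52) & 0x7FF
    if bits != 0 and not (EXPONENT_OFFSET <= exponent < EXPONENT_OFFSET + 256):
        return None  # too large, too small, denormal, NaN/inf, or -0.0: box it
    # rotate left by 1 so the sign bit becomes the payload's low bit
    rot = ((bits << 1) | (bits >> 63)) & 0xFFFFFFFFFFFFFFFF
    payload = rot - (EXPONENT_OFFSET << 53) if bits != 0 else 0
    return (payload << TAG_BITS) | TAG_SMALL_FLOAT

def decode_small_float(tagged):
    """Invert encode_small_float: add the bias back and rotate right by 1."""
    payload = tagged >> TAG_BITS
    if payload != 0:
        payload += EXPONENT_OFFSET << 53
    bits = ((payload >> 1) | ((payload & 1) << 63)) & 0xFFFFFFFFFFFFFFFF
    return struct.unpack('<d', struct.pack('<Q', bits))[0]
```

The round trip is lossless for in-range values, e.g. decode_small_float(encode_small_float(2.5)) answers 2.5, while encode_small_float(1e308) answers None, i.e. such a value must stay boxed.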
The approach that looks best to me is to modify class Float to be an abstract class, and add two subclasses, BoxedFloat and SmallFloat, such that existing boxed instances of Float outside the SmallFloat range will become instances of BoxedFloat and instances within that range will be replaced by references to the relevant SmallFloat.
With this approach ...
- Float pi etc. can still be used, even though they will answer instances of SmallFloat. But tests such as "self assert: result class == Float." will need to be rewritten to e.g. "self assert: result isFloat".
- BoxedFloat and SmallFloat will not be mentioned much at all since floats print themselves literally, and so the fact that the classes have changed won't be obvious.
- the boxed Float primitives (where the receiver is a boxed float) live in BoxedFloat and the immediate ones live in SmallFloat. Making SmallFloat a subclass of Float poses problems for all the primitives that do a super send to retry, since the boxed Float primitives will be above the unboxed ones, and so the boxed ones would have to test for an immediate receiver.
An alternative, which VisualWorks took (because it has both Float and Double), is to add a superclass, e.g. LimitedPrecisionReal, move most of the methods into it, keep Float as Float, and add SmallFloat as a subclass of LimitedPrecisionReal. Then, while class-side methods such as pi would likely be implemented in LimitedPrecisionReal class, sends to Float to access them find them via inheritance. An automatic reorganization which moves only the primitives out of LimitedPrecisionReal is easy to write.
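The VW-style alternative can be sketched as follows, with Python standing in for Smalltalk (the pi implementation below is a stand-in; only the class names come from the proposal):

```python
class LimitedPrecisionReal:
    """Abstract superclass holding most of the (non-primitive) behavior."""
    def __init__(self, value):
        self.value = value

    @classmethod
    def pi(cls):
        # class-side method implemented once, high up in the hierarchy;
        # a send of pi to Float finds it here via inheritance
        return cls(3.141592653589793)

class Float(LimitedPrecisionReal):       # keeps its name; boxed doubles
    pass

class SmallFloat(LimitedPrecisionReal):  # the new immediates
    pass

p = Float.pi()  # resolved in LimitedPrecisionReal class, answers a Float
```

Python's classmethod dispatch mirrors Smalltalk's class-side inheritance here: the lookup starts at the receiver class and climbs to the superclass, so "Float pi" keeps working without Float implementing it.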
Thoughts?
On Thu, Nov 20, 2014 at 05:51:42PM -0800, Eliot Miranda wrote:
[...]
I have always felt that the mapping of Float to 64-bit double and FloatArray to 32-bit float is awkward. It may be that 32-bit floats are becoming less relevant nowadays, but if short float values are still important, then it would be nice to be able to represent them directly. I like the idea of having a Float class and a Double class to represent the two most common representations. A class hierarchy that could potentially support this sounds like a good idea to me.
I have no experience with VW, but a LimitedPrecisionReal hierarchy sounds like a reasonable approach.
Dave
On 21.11.2014, at 04:19, David T. Lewis lewis@mail.msen.com wrote:
[...]
I'd suggest BoxedDouble and ImmediateDouble as names for the concrete subclasses (*). Names do mean something. (**)
You're right about the FloatArray confusion. However, note that the IEEE standard calls them single and double. It's only C that uses "float" to mean "single precision".
I'd name the abstract superclass Float, for readability, the isFloat test, etc. Also: "Float pi" reads a lot nicer than anything else. I don't see the need for a deep LimitedPrecisionReal - Float - BoxedDouble/ImmediateDouble hierarchy now.
If we ever add single-precision floats, we should name them BoxedSingle and ImmediateSingle. At that point we might want a Single superclass and a LimitedPrecisionReal supersuperclass, but we can cross that bridge when we get there.
- Bert -
(*) Since we're not going to see the class names often, we could even spell it out as BoxedDoublePrecisionFloat and ImmediateDoublePrecisionFloat. Only half joking. It would make the relation to the abstract Float very clear.
(**) We could also try to make the names googleable. I was surprised to not get a good hit for "boxed immediate". Only "boxed unboxed" finds it. Maybe there are two better words?
Quoting Bert Freudenberg bert@freudenbergs.de:
[...]
I very much agree with Bert. But I'd suggest SmallDouble instead of ImmediateDouble for consistency with SmallInteger.
Cheers, Juan Vuletich
On 21.11.2014, at 13:29, J. Vuletich (mail lists) juanlists@jvuletich.org wrote:
[...]
I very much agree with Bert. But I'd suggest SmallDouble instead of ImmediateDouble for consistency with SmallInteger.
Then it would have to be LargeDouble for consistency with LargeInteger, too. Which I don't find compelling.
Also, with the 64-bit format we get many more immediate objects. There already are immediate integers and characters, floats will be the third, and there could be more, like immediate points. For those, the small/large distinction does not make sense.
Maybe Eliot's idea of keeping "Float" in the name was best, but instead of "small" use "immediate":
Float - BoxedFloat - ImmediateFloat
A Float is either a BoxedFloat or an ImmediateFloat, depending on the magnitude of its exponent.
- Bert -
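That dichotomy can be expressed as a simple predicate. A Python sketch follows; the 8-bit exponent window and its 896 re-bias are assumptions about the tag scheme, not something stated in the thread:

```python
import struct

def fits_immediate(value):
    """True if this IEEE double could be an ImmediateFloat: its 11-bit
    exponent must fit an 8-bit window (offset 896 assumed), or be exactly +0.0."""
    bits = struct.unpack('<Q', struct.pack('<d', value))[0]
    exponent = (bits >> 52) & 0x7FF
    return bits == 0 or 896 <= exponent < 896 + 256

# e.g. fits_immediate(3.14) is True, while fits_immediate(1e308) is False,
# so 1e308 would have to be a BoxedFloat
```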
Hi,
On 21.11.2014, at 13:44, Bert Freudenberg bert@freudenbergs.de wrote:
[...]
I don't like the idea of putting a VM/storage detail into the class name. The running system itself does not care whether Floats or Integers are boxed or immediate. For example, in RSqueakVM (aka SPy) there is no immediate Integer whatsoever. Yes, tagged ints are read during image startup, but they aren't represented as immediates or tagged ints after that.
Just as input: in the Racket language and other Schemes, the equivalent to our SmallInteger/LargeInteger is fixnum/bignum, and for floats they have flonums and "extflonums" (80-bit).
Best -Tobias
Hi Tobias,
On Fri, Nov 21, 2014 at 4:51 AM, Tobias Pape Das.Linux@gmx.de wrote:
[...]
I don't like the idea of putting a VM/Storage detail into the Class name. The running system itself does not care about whether Floats or Integers are boxed or immediate.
I disagree. I think Smalltalk-80 at least has a philosophy of lifting as much as possible out of the VM into the system, and hiding it from clients via encapsulation. So, unlike many other VMs, the compiler is in the system, the system explicitly separates SmallInteger, LargePositiveInteger and LargeNegativeInteger, and it implements large integer arithmetic with Smalltalk code that uses SmallIntegers. Note that the primitives are an optional extra optimization that the VM does not need to implement. So for me it is in keeping with the current system to use BoxedFloat and SmallFloat, or BoxedDouble and SmallDouble.
This lifting things up provides us with an extremely malleable system. Pushing things down into the VM does the opposite.
For example in RSqueakVM (aka SPy) there is no immediate Integer whatsoever. Yes, tagged ints are read during image startup but they aren't subsequently represented as immediates or tagged ints after that.
Well, because it's implemented above RPython I guess it is using Python's bignum code directly. That's fine, but it's a bit of a cheat.
thanks
Hi Eliot
On 21.11.2014, at 19:06, Eliot Miranda eliot.miranda@gmail.com wrote:
[...]
This lifting things up provides us with an extremely malleable system. Pushing things down into the VM does the opposite.
I can understand. It is, however, a tradeoff between abstraction and malleability. I'd rather see everything done in Smalltalk, yet have primitives not expose their innards. I think I just can't have my cake and eat it, too.
Well because it's implemented above RPython I guess it is using Python's bignum code directly. That's fine but its a bit of a cheat.
I was talking of SmallInts here. There is no bignum code involved. On the RPython side, SmallIntegers are just objects that have a field that is not accessible from Smalltalk but keeps a machine-word integer.
Best -Tobias
Hi Tobias,
On Fri, Nov 21, 2014 at 10:52 AM, Tobias Pape Das.Linux@gmx.de wrote:
[...]
I can understand. It is however a tradeoff between abstraction and malleability. I'd rather see everything done in Smalltalk. Yet primitives not exposing its innards.
I think I just can't have the cake and have it, too.
Yes, alas, it is all about tradeoffs :-). But that's great. The more varieties, the merrier.
I was talking of SmallInts here. There is no bignum code involved. On the RPython side, SmallIntegers are just objects that have a field that is not accessible from Smalltalk but keeps a machine-word integer.
Hmmm, somewhere there's got to be a discrimination between SmallIntegers and other objects, right?
Hi Eliot
On 21.11.2014, at 19:56, Eliot Miranda eliot.miranda@gmail.com wrote:
Hi Tobias,
On Fri, Nov 21, 2014 at 10:52 AM, Tobias Pape Das.Linux@gmx.de wrote: Hi Eliot
On 21.11.2014, at 19:06, Eliot Miranda eliot.miranda@gmail.com wrote:
Hi Tobias,
On Fri, Nov 21, 2014 at 4:51 AM, Tobias Pape Das.Linux@gmx.de wrote:
Hi,
On 21.11.2014, at 13:44, Bert Freudenberg bert@freudenbergs.de wrote:
On 21.11.2014, at 13:29, J. Vuletich (mail lists) <
juanlists@jvuletich.org> wrote:
Quoting Bert Freudenberg bert@freudenbergs.de:
I'd suggest BoxedDouble and ImmediateDouble as names for the concrete
subclasses (*). Names do mean something. (**)
You're right about the FloatArray confusion. However, note that the
IEEE standard calls it single and double. It's only C using "float" to mean "single precision".
I'd name the abstract superclass Float, for readability, and the
isFloat test etc. Also: "Float pi" reads a lot nicer than anything else. I don't see the need for having a deep LimitedPrecisionReal - Float - BoxedDouble/ImmediateDouble deep hierarchy now.
If we ever add single-precision floats, we should name them
BoxedSingle and ImmediateSingle. At that point we might want a Single superclass and a LimitedPrecisionReal supersuperclass, but we can cross that bridge when we get there.
- Bert -
(*) Since we're not going to see the class names often, we could even
spell it out as BoxedDoublePrecisionFloat and ImmediateDoublePrecisionFloat. Only half joking. It would make the relation to the abstract Float very clear.
(**) We could also try to make the names googleable. I was surprised
to not get a good hit for "boxed immediate". Only "boxed unboxed" finds it. Maybe there are two better words?
I very much agree with Bert. But I'd suggest SmallDouble instead of
ImmediateDouble for consistency with SmallInteger.
Then it would have to be LargeDouble for consistency with LargeInteger,
too. Which I don't find compelling.
Also, with the 64 bit format we get many more immediate objects. There
already are immediate integers and characters, floats will be the third, there could be more, like immediate points. For those, the small/large distinction does not make sense.
Maybe Eliot's idea of keeping "Float" in the name was best, but instead
of "small" use "immediate":
Float - BoxedFloat - ImmediateFloat A Float is either a BoxedFloat or an ImmediateFloat, depending on
the magnitude of its exponent.
I don't like the idea of putting a VM/Storage detail into the Class name. The running system itself does not care about whether Floats or Integers are boxed or immediate.
I disagree. I think at least Smalltalk-80 has a philosophy of lifting as much as possible out of the VM into the system, and hiding it from clients via encapsulation. So unlike many other VMs the compiler is in the system, the system explicitly separates SmallInteger, LargePositiveInteger and LargeNegativeInteger and implements large integer arithmetic with Smalltalk code that uses SmallIntegers. Note that the primitives are an optional extra optimization that the VM does not need to implement. So for me it is in keeping with the current system to use BoxedFloat and SmallFloat or BoxedDouble and SmallDouble.
This lifting things up provides us with an extremely malleable system. Pushing things down into the VM does the opposite.
I can understand. It is however a tradeoff between abstraction and malleability. I'd rather see everything done in Smalltalk, yet with primitives not exposing their innards.
I think I just can't have the cake and have it, too.
yes, alas it is all about tradeoffs :-). But that's great. The more varieties the merrier.
For example in RSqueakVM (aka SPy) there is no immediate
Integer whatsoever. Yes, tagged ints are read during image startup but they aren't subsequently represented as immediates or tagged ints after that.
Well because it's implemented above RPython I guess it is using Python's bignum code directly. That's fine but it's a bit of a cheat.
I was talking of SmallInts here. There is no bignum code involved. On the RPython side, SmallIntegers are just objects that have a field that is not accessible from Smalltalk but keeps a machine-word integer.
Hmmm, somewhere there's got to be a discrimination between SmallIntegers and other objects, right?
It is done. They are just objects but object identity is delegated to value identity [1]. All primitives know about this and they unwrap and rewrap the boxed machine int. It's just a straightforward boxed implementation of integers. Where have I confused you?
Just as input, in the Racket language and other Schemes, the equivalent to our SmallInteger/LargeInteger is fixnum/bignum and for floats they have flonums and "extflonums" (80bit).
Best -Tobias
[1]: https://bitbucket.org/pypy/lang-smalltalk/src/2a99a4827c84352bbd8344a327937c...
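Tobias's description of RSqueakVM's purely boxed SmallIntegers can be mimicked in a few lines of Python (the class and method names here are made up for illustration; they are not RSqueakVM's actual API):

```python
class BoxedSmallInteger:
    """A boxed machine-word integer: just an object with a hidden value field."""
    def __init__(self, value):
        self._value = value          # not accessible from the guest language

    def is_identical_to(self, other):
        # plays the role of Smalltalk #==: object identity is delegated
        # to value identity, so two boxes holding 42 are indistinguishable
        return (isinstance(other, BoxedSmallInteger)
                and self._value == other._value)

    def plus(self, other):
        # primitives unwrap, operate on the machine int, and rewrap
        return BoxedSmallInteger(self._value + other._value)

a = BoxedSmallInteger(42)
b = BoxedSmallInteger(42)
print(a is b)                 # False: two distinct boxes on the host side
print(a.is_identical_to(b))   # True: the guest language cannot tell them apart
```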
On 21.11.2014, at 13:53, Tobias Pape Das.Linux@gmx.de wrote:
I don't like the idea of putting a VM/Storage detail into the Class name. The running system itself does not care about whether Floats or Integers are boxed or immediate.
Good point. Do you have a suggestion for names reflecting that?
- Bert -
On 21.11.2014, at 15:30, Bert Freudenberg bert@freudenbergs.de wrote:
Good point. Do you have a suggestion for names reflecting that?
First: I think it is possible to have both SmallInteger/Large*Integer as well as all Float stuff combined such that we only have
- Integer
- Float
and the VM has to deal with internal stuff, i.e. representing small enough numbers tagged and larger ones as boxed (which could, for example, mean not being able to access the boxed values from the image side…). However, this is "Zukunftsmusik" or "ungelegte Eier" (things to come, or not even considered yet).
Second: I think the small/large stuff is semantically correct, because that is what it is, whether immediate or not:
- Integer: SmallInteger, LargeInteger
- Float: SmallFloat, LargeFloat
I don't think there's confusion about the single=float thing when you don't have the name double somewhere.
Rationale against immediate in the name: Immediate/Non-Immediate is a means to an end, which is, speed for small or "few" things: ints, floats, chars. When you make something different immediate — just for fun: very short ascii strings like "hello" stored as 0x000068656C6C6F04 and 04 being the tag — you shouldn't name it ImmediateString but TinyString, because that's why it is there, an optimization for very tiny things.
HTH
Best -Tobias
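Tobias's throwaway TinyString example ("hello" stored as 0x000068656C6C6F04, with 04 as the tag) would encode and decode roughly like this Python sketch (recovering the length by stripping zero bytes is my assumption, not part of his proposal):

```python
TINY_STRING_TAG = 0x04  # the tag byte from Tobias's example

def encode_tiny_string(s):
    """Pack up to 7 non-NUL ASCII characters plus a tag byte into a 64-bit word."""
    assert len(s) <= 7 and all(0 < ord(c) < 128 for c in s)
    word = 0
    for ch in s:                      # first character ends up most significant
        word = (word << 8) | ord(ch)
    return (word << 8) | TINY_STRING_TAG

def decode_tiny_string(word):
    word >>= 8                        # drop the tag byte
    chars = []
    while word:                       # unused high bytes are zero
        chars.append(chr(word & 0xFF))
        word >>= 8
    return ''.join(reversed(chars))

print(hex(encode_tiny_string("hello")))         # 0x68656c6c6f04
print(decode_tiny_string(0x000068656C6C6F04))   # hello
```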
Hi Tobias,
On Fri, Nov 21, 2014 at 8:01 AM, Tobias Pape Das.Linux@gmx.de wrote:
First: I think it is possible to have both SmallInteger/Large*Integer as well as all Float stuff combined such that we only have - Integer - Float and the VM has to deal with internal stuff, ie representing small enough numbers tagged and larger ones as boxed (which could, for example, mean to not be able to access the boxed values from the image side…). However, this is “Zukunftsmusik” or “ungelegte Eier” (Things to come or not even considered).
I don't find this compelling for reasons I've expressed earlier in the thread. Personally I think the VM shouldn't be in the business of hiding much. There are advantages to it hiding the machinery that connects contexts to stack frames and methods to machine code because that allows us to use the same system with very different VMs and that's hugely advantageous (see the Stack VM and SqueakJS for examples). But that doesn't for example hide contexts, it just optimizes their use.
Second: I think the small/large stuff is semantically correct, because that is what it is, whether immediate or not:
- Integer: SmallInteger, LargeInteger
- Float: SmallFloat, LargeFloat
I don't think there's confusion about the single=float thing when you don't have the name double somewhere.
Agreed.
Rationale against immediate in the name: Immediate/Non-Immediate is a means to an end, which is, speed for small or "few" things: ints, floats, chars. When you make something different immediate — just for fun: very short ascii strings like "hello" stored as 0x000068656C6C6F04 and 04 being the tag — you shouldn't name it ImmediateString but TinyString, because that's why it is there, an optimization for very tiny things.
Agreed. But note that I will /not/ be pursuing things like immediate strings. IMO this is a bad idea. Whereas there are really compelling arguments for immediate integers, characters and floats, there aren't for strings or symbols. Most strings and most symbols are longer than 7 bytes
(ByteSymbol allInstances collect: [:ea| ea size]) sum asFloat / ByteSymbol allInstances size
    17.905990063082676
(ByteString allInstances collect: [:ea| ea size]) sum asFloat / ByteString allInstances size
    192.12565808504485
So choosing this representation doesn't save much space and loses time because the more complex mixed representation is involved in many operations (e.g. replaceFrom:to:with:startingAt: is now way more complex).
In fact, I'm thinking that a 2 bit tag is probably better. AFAIA, since I implemented 64-bit VisualWorks with a 3 bit tag no one has added any new immediate types. Points don't have the necessary dynamic frequency, and indeed points with floats may be very common in newer UI architectures. Making nil, true and false immediates doesn't have much benefit either; they're unique values, and unique addresses work just as well as immediates. Essentially expanding the number of tagged types, and especially making the tagged type organization non-uniform (see e.g. Eliot Moss's VMs where nil, true, false have one organization, character has another and SmallInteger another one still) makes the decode bloat, which slows down message send. So I think for the moment I'll go with a 2 bit tag, giving us an even larger range for SmallDouble and SmallInteger, and keep the simple representation:
immediates:
    [62 bit value][2 bit tag]
non-immediates:
    [64 bit pointer (least 3 bits 0)] ->
    [8 bit slot count][2 gc bits][22 bit hash][3 gc bits][5 bit format][2 flag bits][22 bit class index]
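The 2-bit-tag scheme can be illustrated with a small Python sketch. The particular tag assignments below are made up for the example; the thread only fixes that aligned pointers have their low bits clear:

```python
TAG_MASK = 0b11
TAG_SMALL_INTEGER = 1   # illustrative assignment, not necessarily Spur's

def is_immediate(oop):
    # 8-byte-aligned pointers have their low bits zero
    return (oop & TAG_MASK) != 0

def small_integer_oop(value):
    # [62 bit value][2 bit tag], wrapped to an unsigned 64-bit word
    return ((value << 2) | TAG_SMALL_INTEGER) & ((1 << 64) - 1)

def small_integer_value(oop):
    assert (oop & TAG_MASK) == TAG_SMALL_INTEGER
    if oop >= 1 << 63:          # reinterpret the word as signed 64-bit
        oop -= 1 << 64
    return oop >> 2             # arithmetic shift recovers the signed value

print(small_integer_oop(3))                        # 13
print(small_integer_value(small_integer_oop(-5)))  # -5
print(is_immediate(0x1000))                        # False: an aligned pointer
```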
On 21.11.2014, at 19:25, Eliot Miranda eliot.miranda@gmail.com wrote:
So I think for the moment I'll go with a 2 bit tag, giving us an even larger range for SmallDouble and SmallInteger, and keep the simple representation: [...]
I don't think that one additional bit will be helpful to either SmallInts or SmallDoubles. But having it can make for nice VM experiments. I'd reserve it.
- Bert -
On Fri, Nov 21, 2014 at 10:36 AM, Bert Freudenberg bert@freudenbergs.de wrote:
I don't think that one additional bit will be helpful to either SmallInts or SmallDoubles. But having it can make for nice VM experiments. I'd reserve it.
OK, less work too ;-)
Hi Eliot,
On 21.11.2014, at 19:25, Eliot Miranda eliot.miranda@gmail.com wrote:
I don't find this compelling for reasons I've expressed earlier in the thread. Personally I think the VM shouldn't be in the business of hiding much. There are advantages to it hiding the machinery that connects contexts to stack frames and methods to machine code because that allows us to use the same system with very different VMs and that's hugely advantageous (see the Stack VM and SqueakJS for examples). But that doesn't for example hide contexts, it just optimizes their use.
Probably it is just a matter of viewpoint whether this would be (not) hiding information or (not) leaking information. At this point in time, I start to question both my proposal and the current state…
Rationale against immediate in the name: Immediate/Non-Immediate is a means to an end, which is, speed for small or "few" things: ints, floats, chars. When you make something different immediate — just for fun: very short ascii strings like "hello" stored as 0x000068656C6C6F04 and 04 being the tag — you shouldn't name it ImmediateString but TinyString, because that's why it is there, an optimization for very tiny things.
Agreed. But note that I will /not/ be pursuing things like immediate strings. IMO this is a bad idea. Whereas there are really compelling arguments for immediate integers, characters and floats, there aren't for strings or symbols.
I did not intend to propose immediate string but merely used them as an example, nothing more.
Best -Tobias
Quoting Bert Freudenberg bert@freudenbergs.de:
Then it would have to be LargeDouble for consistency with LargeInteger, too. Which I don't find compelling.
Please no. 'Large' in LargeInteger means unlimited or at least extended range. These won't be 'extended' doubles (like, for example, C 'long double'). They would be plain standard ieee Double. A LargeDouble could perhaps be an arbitrary precision Double or such, some day.
Also, with the 64 bit format we get many more immediate objects. There already are immediate integers and characters, floats will be the third, there could be more, like immediate points. For those, the small/large distinction does not make sense.
That's a point, sure. But the parallels between SmallInteger and SmallDouble should be explicit.
Maybe Eliot's idea of keeping "Float" in the name was best, but instead of "small" use "immediate":
Float - BoxedFloat - ImmediateFloat
A Float is either a BoxedFloat or an ImmediateFloat, depending on the magnitude of its exponent.
- Bert -
Again, please no. Float means 32 bit single precision for too many people out there. It means that in our own FloatArrays. Doubles are Doubles.
To me the best option is SmallDouble and BoxedDouble or simply Double.
Cheers, Juan Vuletich
To be abstract, or to be concrete, that is the question.
Coming back to Eliot's proposal:
modify class Float to be an abstract class, and add two subclasses, BoxedFloat and SmallFloat, such that existing boxed instances of Float outside the SmallFloat range will become instances of BoxedFloat and instances within that range will be replaced by references to the relevant SmallFloat. [...] An alternative [...] is to add a superclass, e.g. LimitedPrecisionReal, move most of the methods into it, and keep Float as Float, and add SmallFloat as a subclass of LimitedPrecisionReal.
Float
 |
 +------- BoxedFloat
 |
 +------- SmallFloat
LimitedPrecisionReal
 |
 +------- Float
 |
 +------- SmallFloat
The actual question was whether the class named "Float" (as used in expressions like "Float pi") should be concrete or abstract.
I strongly agree with Eliot's assessment that making Float the abstract superclass is best. What we name the two concrete subclasses is bikeshedding, and I trust Eliot to pick something not too unreasonable.
- Bert -
On Fri, Nov 21, 2014 at 5:30 AM, Bert Freudenberg bert@freudenbergs.de wrote:
I strongly agree with Eliot's assessment that making Float the abstract superclass is best. What we name the two concrete subclasses is bikeshedding, and I trust Eliot to pick something not too unreasonable.
Good. I think I'll go with
Float
 |
 +------- BoxedDouble
 |
 +------- SmallDouble
ImmediateDouble is fine too, but I like the symmetry with SmallInteger.
On Fri, Nov 21, 2014 at 02:30:59PM +0100, Bert Freudenberg wrote:
I strongly agree with Eliot's assessment that making Float the abstract superclass is best. What we name the two concrete subclasses is bikeshedding, and I trust Eliot to pick something not too unreasonable.
I also agree. The name "Float" suggests the concept of floating point arithmetic. There are many different ways to implement that concept (*). But for all of the possible concrete implementations of floating point numbers, the name "Float" makes sense in the abstract.
In Squeak, all instances of "Float" (in the abstract sense) are currently implemented as 64-bit doubles (instances of class Float) or 32-bit singles (hidden within FloatArray). Spur-64 will provide an immediate implementation. Maybe somebody will come up with a class to represent the 32-bit floating point values in a FloatArray. And maybe someone else will come up with a 128 bit floating point representation, or something else entirely. But in any case, it seems natural to have an abstract "Float" to represent all of the concrete implementations that may prove necessary or useful over time.
So +1 for making Float be the abstract superclass.
Dave
(*) As a former field service engineer for Harris Computer Systems, I still consider the 48-bit floating point format of the H800 series to be superior to the awkward compromises of 32-bit and 64-bit floating point representations ;-) See pages 2-2 and 6-1 of the manual for descriptions of the floating point data formats (I think I have a paper copy of this moldering away in my basement).
http://bitsavers.informatik.uni-stuttgart.de/pdf/harris/0830007-000_Series_8...
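David's remark that FloatArray hides 32-bit singles can be made concrete: round-tripping a 64-bit double through 32-bit storage silently drops mantissa bits. A Python sketch:

```python
import struct

def through_float_array_slot(x):
    """Store a double into a 32-bit slot and read it back, as storing into a
    FloatArray slot effectively does."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

pi = 3.141592653589793
pi32 = through_float_array_slot(pi)
print(pi32 == pi)              # False: low mantissa bits were discarded
print(abs(pi32 - pi) < 1e-6)   # True: but the error is tiny
```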
On 24.11.2014, at 05:09, David T. Lewis lewis@mail.msen.com wrote:
(*) As a former field service engineer for Harris Computer Systems, I still consider the 48-bit floating point format of the H800 series to be superior to the awkward compromises of 32-bit and 64-bit floating point representations ;-) See pages 2-2 and 6-1 of the manual for descriptions of the floating point data formats (I think I have a paper copy of this moldering away in my basement).
http://bitsavers.informatik.uni-stuttgart.de/pdf/harris/0830007-000_Series_8...
Oh, I got excited for a moment there, thinking that maybe this could be the origin of Smalltalk-78's weird 48 bit floating point format. But it's completely different. I had to reverse-engineer it because Dan could not remember (only later we got a printout of the VM's 8086 assembly source code). It's optimized for a software implementation with the mantissa on a 16-bit word boundary. Not sure why the exponent's sign bit is in the LSB though. But 16 bits of exponent, can you imagine the range? Luckily there were no insanely large instances in the snapshot. They get converted to modern floats when parsing the original object space dump:
wordsAsFloat: function() {
    // layout of NoteTaker Floats (from MSB):
    // 15 bits exponent in two's complement without bias, 1 bit sign
    // 32 bits mantissa including its highest bit (which is implicit in IEEE 754)
    if (this.words[1] == 0) return 0.0; // if high-bit of mantissa is 0, then it's all zero
    var nt0 = this.words[0],
        nt1 = this.words[1],
        nt2 = this.words[2],
        ntExponent = nt0 >> 1,
        ntSign = nt0 & 1,
        ntMantissa = (nt1 & 0x7FFF) << 16 | nt2, // drop high bit of mantissa
        ieeeExponent = (ntExponent + 1022) & 0x7FF, // IEEE: 11 bit exponent, biased
        ieee = new DataView(new ArrayBuffer(8));
    // IEEE is 1 sign bit, 11 bits exponent, 53 bits mantissa omitting the highest bit
    // (which is always 1, except for 0.0)
    ieee.setInt32(0, ntSign << 31 | ieeeExponent << (31-11) | ntMantissa >> 11); // 20 bits of ntMantissa
    ieee.setInt32(4, ntMantissa << (32-11)); // remaining 11 bits of ntMantissa, rest filled up with 0
    // why not use setInt64()? Because JavaScript does not have 64 bit ints
    return ieee.getFloat64(0);
}
- Bert -
On Mon, Nov 24, 2014 at 11:51:06AM +0100, Bert Freudenberg wrote:
Oh, I got excited for a moment there, thinking that maybe this could be the origin of Smalltalk-78's weird 48 bit floating point format. But it's completely different.
It's just a coincidence I'm sure. A 48 bit float makes a lot of sense. Minicomputers were typically 16 bit machines, but the H800 was a 24 bit machine with 48 and 96 bit floating point data types and 24 bit registers. 24 bits was a lot of address space, and a 48 bit float was far superior to 32 bits for scientific and numeric computing. In those days, "Super Minicomputer" was a market category, and the H800 was marketed that way, targeting applications such as finite element analysis.
I would not be surprised if 16 bit Smalltalk systems arrived at similar conclusions for general purpose floating point representation. 32 bits would have been too small, and 64 bits too big. 48 bits was just about the right size to be useful for serious work on a small machine.
Very large exponent ranges also make sense for iterative numeric work, where I expect that they would reduce the need to keep track of numeric overflow in some kinds of calculations (just guessing, but I'm sure that was the reason).
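To put numbers on "16 bits of exponent, can you imagine the range?": the largest finite magnitude a biased exponent field of n bits allows is about 2^(2^(n-1)), which this Python sketch converts to decimal orders of magnitude (the formula assumes an IEEE-style bias and ignores the mantissa's final factor of almost 2):

```python
import math

def max_decimal_exponent(exponent_bits):
    """Approximate decimal exponent of the largest finite value for an
    IEEE-style biased exponent field of the given width."""
    bias = (1 << (exponent_bits - 1)) - 1   # largest unbiased exponent
    return int(bias * math.log10(2))

print(max_decimal_exponent(8))    # 38   (single precision, ~3.4e38)
print(max_decimal_exponent(11))   # 307  (double precision, ~1.8e308)
print(max_decimal_exponent(15))   # 4931 (x87-extended/NoteTaker-sized exponent)
```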
I had to reverse-engineer it because Dan could not remember (only later we got a printout of the VM's 8086 assembly source code). It's optimized for a software implementation with the mantissa on a 16-bit word boundary. Not sure why the exponent's sign bit is in the LSB though. But 16 bits of exponent, can you imagine the range? Luckily there were no insanely large instances in the snapshot. They get converted to modern floats when parsing the original object space dump:
wordsAsFloat: function() {
    // layout of NoteTaker Floats (from MSB):
    //   15 bits exponent in two's complement without bias, 1 bit sign,
    //   32 bits mantissa including its highest bit (which is implicit in IEEE 754)
    if (this.words[1] == 0) return 0.0; // if high bit of mantissa is 0, then it's all zero
    var nt0 = this.words[0], nt1 = this.words[1], nt2 = this.words[2],
        ntExponent = nt0 >> 1,
        ntSign = nt0 & 1,
        ntMantissa = (nt1 & 0x7FFF) << 16 | nt2,    // drop high bit of mantissa
        ieeeExponent = (ntExponent + 1022) & 0x7FF, // IEEE: 11 bit exponent, biased
        ieee = new DataView(new ArrayBuffer(8));
    // IEEE is 1 sign bit, 11 bits exponent, 53 bits mantissa
    // omitting the highest bit (which is always 1, except for 0.0)
    ieee.setInt32(0, ntSign << 31 | ieeeExponent << (31-11) | ntMantissa >> 11); // 20 bits of ntMantissa
    ieee.setInt32(4, ntMantissa << (32-11)); // remaining 11 bits of ntMantissa, rest filled up with 0
    // why not use setInt64()? Because JavaScript does not have 64 bit ints
    return ieee.getFloat64(0);
}
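For anyone wanting to play with the format, here is a standalone sketch of the same conversion that runs in Node. The word triples in the usage lines are hand-constructed from the layout described above (15-bit unbiased two's-complement exponent, sign in the LSB, 32-bit mantissa with explicit high bit); they are illustrative, not taken from a real NoteTaker snapshot.

```javascript
// Standalone sketch of the NoteTaker 48-bit float -> IEEE double conversion.
function noteTakerWordsAsFloat(words) {
    if (words[1] == 0) return 0.0; // high bit of mantissa clear => the value is zero
    var nt0 = words[0],
        ntExponent = nt0 >> 1,                          // 15-bit exponent
        ntSign = nt0 & 1,                               // sign of the number, in the LSB
        ntMantissa = (words[1] & 0x7FFF) << 16 | words[2], // drop the explicit high bit
        ieeeExponent = (ntExponent + 1022) & 0x7FF,     // rebias for IEEE's 11-bit exponent
        ieee = new DataView(new ArrayBuffer(8));
    ieee.setInt32(0, ntSign << 31 | ieeeExponent << 20 | ntMantissa >> 11);
    ieee.setInt32(4, ntMantissa << 21); // remaining 11 mantissa bits, zero-filled
    return ieee.getFloat64(0);
}

// 1.0 = 0.5 * 2^1: exponent 1 (nt0 = 1 << 1), mantissa 0x80000000 (words 0x8000, 0)
noteTakerWordsAsFloat([2, 0x8000, 0]);  // => 1.0
// -1.0: same, with the sign bit set in nt0's LSB
noteTakerWordsAsFloat([3, 0x8000, 0]);  // => -1.0
```

Note the rebias constant is 1022 rather than 1023 because the NoteTaker mantissa lies in [0.5, 1) while the IEEE significand lies in [1, 2).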
Cool! I had no idea that you were dialing your wayback machine this far back in time. Very impressive indeed.
Dave
Bert Freudenberg wrote:
On 21.11.2014, at 04:19, David T. Lewis lewis@mail.msen.com wrote:
On Thu, Nov 20, 2014 at 05:51:42PM -0800, Eliot Miranda wrote:
I have always felt that the mapping of Float to 64-bit double and FloatArray to 32-bit float is awkward. It may be that 32-bit floats are becoming less relevant nowadays, but if short float values are still important, then it would be nice to be able to represent them directly. I like the idea of having a Float class and a Double class to represent the two most common representations. A class hierarchy that could potentially support this sounds like a good idea to me.
I have no experience with VW, but a LimitedPrecisionReal hierarchy sounds like a reasonable approach.
Dave
I'd suggest BoxedDouble and ImmediateDouble as names for the concrete subclasses (*). Names do mean something. (**)
This is a nice idea, except we have the legacy of SmallInteger and LargeInteger, and I don't like the inconsistency of Float not following the same rule. The boxing/unboxing can be covered in the class comment. Unless you want to change to BoxedInteger and ImmediateInteger?
cheers -ben
You're right about the FloatArray confusion. However, note that the IEEE standard calls them single and double. It's only C that uses "float" to mean single precision.
I'd name the abstract superclass Float, for readability, and for the isFloat test etc. Also: "Float pi" reads a lot nicer than anything else. I don't see the need for a deep LimitedPrecisionReal - Float - BoxedDouble/ImmediateDouble hierarchy now.
If we ever add single-precision floats, we should name them BoxedSingle and ImmediateSingle. At that point we might want a Single superclass and a LimitedPrecisionReal supersuperclass, but we can cross that bridge when we get there.
- Bert -
(*) Since we're not going to see the class names often, we could even spell it out as BoxedDoublePrecisionFloat and ImmediateDoublePrecisionFloat. Only half joking. It would make the relation to the abstract Float very clear.
(**) We could also try to make the names googleable. I was surprised to not get a good hit for "boxed immediate". Only "boxed unboxed" finds it. Maybe there are two better words?
I have always felt that the mapping of Float to 64-bit double and FloatArray to 32-bit float is awkward. It may be that 32-bit floats are becoming less relevant nowadays, but if short float values are still important, then it
In the sense that a lot of computing is addressing fuzzy pattern matching, 32-bit speed and space are actually becoming more relevant.
would be nice to be able to represent them directly. I like the idea of having a Float class and a Double class to represent the two most common representations. A class hierarchy that could potentially support this sounds like a good idea to me.
I have no experience with VW, but a LimitedPrecisionReal hierarchy sounds like a reasonable approach.
Dave
On 21.11.2014, at 02:51, Eliot Miranda eliot.miranda@gmail.com wrote:
Hi All,
64-bit Spur can usefully provide an immediate float, a 61-bit subset of the ieee double precision float. The scheme steals bits from the mantissa to use for the immediate's 3-bit tag pattern. So values have the same precision as ieee doubles, but can only represent the subset with exponents between 10^-38 and 10^38, the single-precision range.
This is worded confusingly. It sounds like the mantissa has 3 bits less, which would make it less precise.
Here is how I understood it: The mantissa is stored with its full 52 bits of precision (*). But only the lower 8 bits of the 11-bit exponent are stored. If the upper 3 bits of the exponent are needed, then a boxed float is created.
I guess I know what you meant, that it is the 3 lowest significant bits in an oop which are used for tagging immediate objects, and in an IEEE double that is part of the mantissa. But these 3 bits are not lost, but moved elsewhere (namely where the 3 highest significant bits of the exponent used to be stored).
Did I understand correctly? You haven't pushed the code yet so I couldn't verify.
- Bert -
(*) http://en.wikipedia.org/wiki/Double-precision_floating-point_format
On Fri, Nov 21, 2014 at 4:47 AM, Bert Freudenberg bert@freudenbergs.de wrote:
On 21.11.2014, at 02:51, Eliot Miranda eliot.miranda@gmail.com wrote:
Hi All,
64-bit Spur can usefully provide an immediate float, a 61-bit subset
of the ieee double precision float. The scheme steals bits from the mantissa to use for the immediate's 3-bit tag pattern. So values have the same precision as ieee doubles, but can only represent the subset with exponents between 10^-38 and 10^38, the single-precision range.
This is worded confusingly. It sounds like the mantissa has 3 bits less, which would make it less precise.
It's not worded confusingly, it's just plain wrong :-/. Let me try again...
64-bit Spur can usefully provide an immediate float, a 61-bit subset of the ieee double precision float. The scheme steals 3 bits from the exponent to use for the immediate's 3-bit tag pattern. So values have the same precision as ieee doubles, but can only represent the subset with exponents between 10^-38 and 10^38, the single-precision range.
Here's the representation:
[8 bit exponent][52 bit mantissa][1 bit sign][3 bit tag]
This has the advantage that +/- zero are the only immediate float values that are less than or equal to fifteen. So to convert to a float:
- shift away tags [000][8 bit exponent][52 bit mantissa][1 bit sign]
- if > 1 (i.e. non-zero) add exponent offset: [11 bit exponent][52 bit mantissa][sign bit]
- rotate by -1 [sign bit][11 bit exponent][52 bit mantissa]
And to encode:
- rotate by 1 [11 bit exponent][52 bit mantissa][sign bit]
- if > 1, subtract the exponent offset; fail if <= 0; test against the max value, fail if too big
- shift by 3 and add tag bits: [8 bit exponent][52 bit mantissa][1 bit sign][3 bit tag]
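To make the steps above concrete, here is a hypothetical sketch of the decode and encode paths in JavaScript, using BigInt for the 64-bit rotates. The exponent offset (896 << 53, with 896 = 1023 - 127 restoring the 3 high exponent bits) and the tag pattern are my assumptions from the description above, not taken from the actual Spur code.

```javascript
// Sketch of 64-bit Spur SmallFloat decode/encode, per the scheme described above.
const TAG = 4n;                       // assumed 3-bit SmallFloat tag pattern (illustrative)
const EXPONENT_OFFSET = 896n << 53n;  // 896 = 1023 - 127: the 3 stolen high exponent bits
const MASK64 = 0xFFFFFFFFFFFFFFFFn;

function rotRight1(x) { return (x >> 1n) | ((x & 1n) << 63n); }
function rotLeft1(x)  { return ((x << 1n) | (x >> 63n)) & MASK64; }

function decodeSmallFloat(oop) {
    let bits = oop >> 3n;                   // shift away tags: [8 exp][52 mant][1 sign]
    if (bits > 1n) bits += EXPONENT_OFFSET; // non-zero: widen to [11 exp][52 mant][1 sign]
    bits = rotRight1(bits);                 // rotate by -1: [1 sign][11 exp][52 mant]
    const dv = new DataView(new ArrayBuffer(8));
    dv.setBigUint64(0, bits);
    return dv.getFloat64(0);
}

function encodeSmallFloat(v) {              // answers null when v needs a boxed float
    const dv = new DataView(new ArrayBuffer(8));
    dv.setFloat64(0, v);
    let bits = rotLeft1(dv.getBigUint64(0)); // rotate by 1: [11 exp][52 mant][1 sign]
    if (bits > 1n) {                         // +/- zero pass through unchanged
        bits -= EXPONENT_OFFSET;
        if (bits <= 0n || bits >= 1n << 61n) return null; // exponent out of range
    }
    return bits << 3n | TAG;                 // shift by 3 and add tag bits
}
```

Note how the "only +/- zero are <= fifteen" property falls out: +0.0 rotates to 0 and -0.0 to 1, so their oops are just the tag pattern and 8 plus the tag pattern, while infinities, NaNs and denormals fail the range test and stay boxed.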
Here is how I understood it: The mantissa is stored with its full 52 bits
of precision (*). But only the lower 8 bits of the 11-bit exponent are stored. If the upper 3 bits of the exponent are needed, then a boxed float is created.
I guess I know what you meant, that it is the 3 lowest significant bits in
an oop which are used for tagging immediate objects, and in an IEEE double that is part of the mantissa. But these 3 bits are not lost, but moved elsewhere (namely where the 3 highest significant bits of the exponent used to be stored).
Did I understand correctly? You haven't pushed the code yet so I couldn't verify.
Yes, of course :-)
- Bert -
(*) http://en.wikipedia.org/wiki/Double-precision_floating-point_format