One of the problems I foresee is this one:
1.0e17 >= 100000000000000001
indeed, the (generated) primitive currently converts the SmallInteger to a double, and this one is true:
1.0e17 >= 100000000000000001 asFloat
In 32-bit Spur/Cog, with a 30-bit max magnitude, it was OK because every SmallInteger -> double conversion was exact - as it is for all integers in the range [-2^53, 2^53] - see Float class>>maxExactInteger.
In 64-bit Spur this might not be the case anymore, since some SmallIntegers exceed this range...
So the (generated) primitive must be protected with some range test.
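To make the failure mode concrete, here is a sketch in Python rather than Smalltalk (Python compares int and float exactly, like the image-level fallback code would, while an explicit float() call reproduces what the converting primitive does):

```python
# Sketch of the failure mode, in Python for illustration. Python compares
# int and float exactly, so the correct answer is visible, while float()
# reproduces the lossy conversion the generated primitive performs.

big = 100000000000000001           # 10^17 + 1, needs 57 bits

# Exact comparison: 1.0e17 is exactly 10^17, which is less than 10^17 + 1.
print(1.0e17 >= big)               # False -- the correct answer

# 10^17 + 1 is not representable in a 53-bit mantissa; it rounds to 10^17.
print(float(big) == 1.0e17)        # True

# So comparing after conversion gives the wrong answer.
print(1.0e17 >= float(big))        # True
```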
Hi Nicolas,
On Fri, Dec 26, 2014 at 7:48 AM, Nicolas Cellier < nicolas.cellier.aka.nice@gmail.com> wrote:
Thanks for the heads-up. You're quite right:

	(1 << 60 - 1) asFloat asInteger = (1 << 60 - 1)    "=> false"
So in cases where we do double CMP integer or integer CMP double the primitive must fail if the SmallInteger has more than 52 bits of significance? That's reasonable. I need to review the StackInterpreter code to make sure it does the same. I'm pretty sure it doesn't do so right now.
2014-12-26 19:44 GMT+01:00 Eliot Miranda eliot.miranda@gmail.com:
So in cases where we do double CMP integer or integer CMP double the primitive must fail if the SmallInteger has more than 52 bits of significance?
Well, up to 53 bits it's OK to convert an integer to a double; it's lossless.
That's reasonable. I need to review the StackInterpreter code to make sure it does the same. I'm pretty sure it doesn't do so right now.
#primitiveLessThan does only compare integers, but #bytecodePrimLessThan tries to be smarter and invoke #primitiveFloatLess:thanArg: then #loadFloatOrIntFrom:
2014-12-26 22:46 GMT+01:00 Nicolas Cellier < nicolas.cellier.aka.nice@gmail.com>:
So defining a specialized loadFloatOrInt53From: for the float/integer comparison primitives and bytecodes may do the job...
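A sketch of what such a guard could look like, in Python for illustration (the selector loadFloatOrInt53From: is the proposal above; the stand-in below only models the range test and the failure path, not the real VM code):

```python
# Model of the proposed guard: only convert a SmallInteger to a double
# when the conversion is lossless; otherwise fail the primitive so the
# image falls back to exact LargeInteger arithmetic.

MAX_EXACT = 1 << 53        # cf. Float class>>maxExactInteger

def load_float_or_int53(v):
    """Return v as a float when lossless; None signals primitive failure."""
    if isinstance(v, float):
        return v
    if -MAX_EXACT <= v <= MAX_EXACT:
        return float(v)    # every integer in [-2^53, 2^53] is exact
    return None

print(load_float_or_int53(10 ** 15))              # 1e+15 (lossless)
print(load_float_or_int53(100000000000000001))    # None (would be lossy)
```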
A question would be: how long would it take to crowd-source an SUnit test that covers the entire range of 2^64? When we worked on the 32-bit-clean image I ended up doing the 2^32 range for small integers and large integers, after discovering an issue at 2^24.
Sent from my iPhone
On Dec 26, 2014, at 1:58 PM, Nicolas Cellier < nicolas.cellier.aka.nice@gmail.com> wrote:
2014-12-26 23:07 GMT+01:00 John McIntosh johnmci@smalltalkconsulting.com:
It depends: if testing the 2^32 combinations took you 1 hour, you might get through the 2^64 in a bit under 500,000 years... Unless of course you can parallelize on the 1.5 billion active PCs in the world (can a good virus do that?)
If you can deal with 2^32 tests in 1 second, then less than 150 years will be enough with a single machine (I advise you to wait a few more years before launching the test; that might be the best strategy...)
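The arithmetic behind these estimates, as a quick sketch:

```python
# Checking the estimates: covering 2^64 cases on one machine when a
# batch of 2^32 cases takes one hour, or one second.
batches = 2 ** 64 // 2 ** 32                  # 2^32 batches of 2^32 cases
HOURS_PER_YEAR = 24 * 365.25

years_at_one_hour_per_batch = batches / HOURS_PER_YEAR
years_at_one_second_per_batch = batches / (3600 * HOURS_PER_YEAR)

print(round(years_at_one_hour_per_batch))     # 489957 -- just under 500,000
print(round(years_at_one_second_per_batch))   # 136 -- under 150
```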
On Fri, Dec 26, 2014 at 2:45 PM, Nicolas Cellier < nicolas.cellier.aka.nice@gmail.com> wrote:
Remember we only have to test 2^61 values ;-). Clearly the exhaustive test is not going to work, John. Time to think of edge cases ;-)
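A sketch of the sort of edge cases worth probing instead of all 2^61 values (Python ints stand in for SmallIntegers; the interesting boundary is where double conversion stops being exact):

```python
# Candidate edge cases for float/integer comparison tests: values at and
# around the 53-bit exactness boundary, plus a 60-bit value.
def roundtrips(v):
    """Does v survive conversion to double and back?"""
    return int(float(v)) == v

for v in [2**53 - 1, 2**53, 2**53 + 1, 2**53 + 2, 2**60 - 1, 2**60]:
    print(v, roundtrips(v))
# 2^53 - 1  True   every smaller non-negative integer is exact
# 2^53      True   exact, but its successor already is not
# 2^53 + 1  False  rounds to 2^53 (ties-to-even)
# 2^53 + 2  True
# 2^60 - 1  False
# 2^60      True   powers of two stay exact
```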
So in cases where we do double CMP integer or integer CMP double the primitive must fail if the SmallInteger has more than 52 bits of significance?
Well, up to 53 bits it's OK to convert an integer to a double, it's lossless.
How about making SmallIntegers be 53 bits? Could the VM be as efficient if it masked off more bits?
IMHO the additional 8 bits we get by going to 61 bits will not significantly enhance performance. In fact, the payoff is probably rather small beyond 32 bits (but still useful e.g. for bit shifts beyond the 32 bit boundary).
The question is whether only some operations should fail if the SmallInt has more than 53 bits, or whether we should consistently switch to LargeInts above that. OTOH, wasting 8 bits in every SmallInt oop may not be a good tradeoff.
- Bert -
Hi Bert,
On Dec 29, 2014, at 4:45 AM, Bert Freudenberg bert@freudenbergs.de wrote:
Doesn't feel right to me. All we're considering is a few floating-point primitives which need an additional failure case, whereas truncating 64-bit SmallIntegers has its own negative effects (microsecond clock range, many more large integers created, e.g. when dealing with addresses, and the temptation to find uses for the missing bits, such as another ~2^11 tag patterns). Good to mention an alternative, but it feels wrong to me.
On Mon, Dec 29, 2014 at 01:45:17PM +0100, Bert Freudenberg wrote:
How about making SmallIntegers be 53 bits? Could the VM be as efficient if it masked off more bits?
I would expect no difference in VM performance for values within the 53 bit range. There is no additional masking required, and there would not be any need to change the internal storage format. It's really just a matter of what point you choose to begin using large integer objects rather than immediates.
IMHO the additional 8 bits we get by going to 61 bits will not significantly enhance performance. In fact, the payoff is probably rather small beyond 32 bits (but still useful e.g. for bit shifts beyond the 32 bit boundary).
I have a hard time thinking of real world cases where the extra range would have a meaningful impact. The only thing I can think of is my UTCDateAndTime implementation (http://wiki.squeak.org/squeak/6197), which is using large integers to represent microseconds since the Posix epoch. In 64 bit Spur, if immediate integers were to have a range of 52 bits, then these DateAndTime values would be using immediates for dates in the range of year 1827 through year 2112. For most practical uses within my own expected life span, this is a useful range of values.
So even for this use case (which affects almost nobody, at least not now) the reduction in range of immediate integers has little or no performance impact. Furthermore, I would note that the UTCDateAndTime implementation that uses large integers (not immediate short integers) in the current 32 bit images is already faster than the Squeak trunk DateAndTime implementation. So I have a hard time believing that extending the range of immediate short integers beyond 52 bits of precision would have any meaningful real world impact.
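The quoted range checks out; a quick sketch of the arithmetic (a 365.25-day year is assumed):

```python
# 52 bits of magnitude in microseconds since the 1970 Posix epoch spans
# roughly +/- 142.7 years, i.e. the years 1827 through 2112.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
years = (1 << 52) / 1e6 / SECONDS_PER_YEAR

print(round(years, 1))                        # 142.7
print(int(1970 - years), int(1970 + years))   # 1827 2112
```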
On Mon, Dec 29, 2014 at 02:59:07PM -0800, Eliot Miranda wrote:
Doesn't feel right for me. All we're considering is a few floating-point primitives which need an additional failure case. Whereas truncating 64-bit SmallInteger has its own negative effects (microsecond clock range, many more large integers created, eg when dealing with addresses, temptation to find uses for missing bits, such as another ~2^11 tag patterns). Good to mention an alternative but feels wrong to me.
I'm inclined to agree with Eliot on this, even though I do not think that performance would be an issue one way or the other. It seems to me that having the implementation details of floating point objects creeping into the implementation of integers is not a good thing in principle, even though in practice it would be fine.
Bert's suggestion is a simple and practical short term solution. It has no practical performance impact, and it is trivial to implement. The only serious negative is that it would require a good method comment to explain the rationale for the short integer range limitation.
However, with a bit of additional work, the range checks can be added to the floating point primitives. This seems better, because it puts the responsibility with the floating point logic, which is where it belongs.
My remaining concern would be that handling the logic in the VM (floating-point primitives) is less flexible than handling it in the image. I am not really convinced that the immediate float format in 64-bit Spur will prove to be of practical value, so I think that maintaining some flexibility around the implementation might be a good thing.
So my $0.02:
Do it in careful steps. Release 1 of Spur 64 could have immediate short integers in the same range as the 32-bit image. Release 2 could follow Bert's guidance. Release 3 could fix up the various float primitives and extend the range to 61 bits.
Dave
Hi David,
On Mon, Dec 29, 2014 at 7:58 PM, David T. Lewis lewis@mail.msen.com wrote:
However, with a bit of additional work, the range checks can be added to the floating point primitives. This seems better, because it puts the responsibility with the floating point logic, which is where it belongs.
The work is very small. As far as I can tell it is merely the change of

Spur64BitMemoryManager methods for interpreter access

loadFloatOrIntFrom: floatOrIntOop
	"If floatOrInt is an integer, then convert it to a C double float and return it.
	 If it is a Float, then load its value and return it.
	 Otherwise fail -- i.e. return with primErrorCode non-zero."
	<inline: true>
	<returnTypeC: #double>
	| result tagBits |
	<var: #result type: #double>
	(tagBits := floatOrIntOop bitAnd: self tagMask) ~= 0
		ifTrue:
			[tagBits = self smallFloatTag ifTrue:
				[^self smallFloatValueOf: floatOrIntOop].
			 tagBits = self smallIntegerTag ifTrue:
				[^(self integerValueOf: floatOrIntOop) asFloat]]
		ifFalse:
			[(self classIndexOf: floatOrIntOop) = ClassFloatCompactIndex ifTrue:
				[self cCode: '' inSmalltalk: [result := Float new: 2].
				 self fetchFloatAt: floatOrIntOop + self baseHeaderSize into: result.
				 ^result]].
	coInterpreter primitiveFail.
	^0.0

to

loadFloatOrIntFrom: floatOrIntOop
	"If floatOrInt is an integer, then convert it to a C double float and return it.
	 If it is a Float, then load its value and return it.
	 Otherwise fail -- i.e. return with primErrorCode non-zero."
	<inline: true>
	<returnTypeC: #double>
	| result tagBits shift |
	<var: #result type: #double>
	(tagBits := floatOrIntOop bitAnd: self tagMask) ~= 0
		ifTrue:
			[tagBits = self smallFloatTag ifTrue:
				[^self smallFloatValueOf: floatOrIntOop].
			 (tagBits = self smallIntegerTag
			  and: [shift := 64 - self numTagBits - self smallFloatMantissaBits.
					(self cCode: [floatOrIntOop << shift]
						  inSmalltalk: [floatOrIntOop << shift bitAnd: 1 << 64 - 1]) >> shift = floatOrIntOop]) ifTrue:
				[^(self integerValueOf: floatOrIntOop) asFloat]]
		ifFalse:
			[(self classIndexOf: floatOrIntOop) = ClassFloatCompactIndex ifTrue:
				[self cCode: '' inSmalltalk: [result := Float new: 2].
				 self fetchFloatAt: floatOrIntOop + self baseHeaderSize into: result.
				 ^result]].
	coInterpreter primitiveFail.
	^0.0

i.e.

	tagBits = self smallIntegerTag ifTrue:
		[^(self integerValueOf: floatOrIntOop) asFloat]

becomes

	(tagBits = self smallIntegerTag
	 and: [shift := 64 - self numTagBits - self smallFloatMantissaBits.
		   (self cCode: [floatOrIntOop << shift]
				 inSmalltalk: [floatOrIntOop << shift bitAnd: 1 << 64 - 1]) >> shift = floatOrIntOop]) ifTrue:
		[^(self integerValueOf: floatOrIntOop) asFloat]
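The essence of the added guard is the shift round-trip inside the and: block: with wraparound 64-bit arithmetic, as in the generated C, a signed value fits in k bits precisely when shifting it left by 64 - k and arithmetically back right reproduces it. A Python sketch of just that trick follows; the k values below are for illustration only, while the real shift is derived from the tag and mantissa widths as in the method above:

```python
# The guard's core trick: v fits in k signed bits iff a left shift by
# (64 - k), wrapped to 64 bits as C's unsigned shift does, followed by
# an arithmetic right shift, gives back v.

MASK64 = (1 << 64) - 1

def to_signed64(u):
    """Reinterpret a 64-bit pattern as a signed value (two's complement)."""
    return u - (1 << 64) if u >= (1 << 63) else u

def fits_in_signed_bits(v, k):
    shift = 64 - k
    wrapped = (v << shift) & MASK64     # discard high bits, like C uint64
    return (to_signed64(wrapped) >> shift) == v

print(fits_in_signed_bits(2**53 - 1, 54))   # True: in [-2^53, 2^53 - 1]
print(fits_in_signed_bits(2**53, 54))       # False: needs a 55th bit
print(fits_in_signed_bits(-(2**53), 54))    # True: the negative bound fits
```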
My remaining concern would be that handling the logic in the VM (floating-point primitives) is less flexible than handling it in the image. I am not really convinced that the immediate float format in 64-bit Spur will prove to be of practical value, so I think that maintaining some flexibility around the implementation might be a good thing.
Based on the experience with 64-bit VisualWorks I can assure you it has an impact. Floating point arithmetic will be considerably faster (2 to 3x) and allocations will go way down.
So my $0.02:
Do it in careful steps. Release 1 of Spur 64 could have immediate short integers in the same range as the 32-bit image. Release 2 could follow Bert's guidance. Release 3 could fix up the various float primitives and extend the range to 61 bits.
I'm not going to go this way. I'm going to stick with 61-bit SmallIntegers and SmallFloat64s. We have a comprehensive test suite and this is core behaviour. It's very easy to find issues in this area, so, given that the code has been written and is working, there's no case for backing out. There's really nothing to be gained by being conservative here. Nicolas' concern has been addressed and tests can be written to check. We can have our cake and eat it too.
Happy New Year!
On Thu, Jan 01, 2015 at 12:59:03PM -0800, Eliot Miranda wrote:
Based on the experience with 64-bit VisualWorks I can assure you it has an impact. Floating point arithmetic will be considerably faster (2 to 3x) and allocations will go way down.
D'oh! I see it now. I was completely overlooking the impact of allocating those objects. I'm sure you are right.
Happy New Year!
Happy New Year indeed!
It has been quite remarkable to see the ongoing development of Cog, Spur and Sista - all the more so in that the work is being done openly, implemented in Smalltalk, and with an open and public exchange of ideas. I think that you are leading some remarkable work here, and 2015 promises to be another very interesting year.
Thanks, Dave