florin.mateoc at gmail.com
Thu Mar 10 01:27:13 UTC 2016
On 3/9/2016 8:23 PM, Eliot Miranda wrote:
> Hi Florin,
> I believe the correct fix is for ObjectMemory to decompose fetchLong64:ofObject: into two 32-bit reads unless BytesPerWord = 8. I'll commit asap (which is once I have 64-bit small float tagging converted). But your fix should keep you going until then.
> _,,,^..^,,,_ (phone)
I don't understand how two 32-bit reads can take care of 5-byte-long LargeIntegers, but you know best (usually :))
>> On Mar 9, 2016, at 1:53 PM, Florin Mateoc <florin.mateoc at gmail.com> wrote:
>>> On 3/9/2016 3:17 PM, Florin Mateoc wrote:
>>> Hi again,
>>> I think I found the bug: in method InterpreterPrimitives>>signed64BitValueOf: there seems to be an assumption (even
>>> mentioned in the method comment) that (on 32-bit machines) LargeIntegers have to be either 4 or 8 bytes long.
>>> In this case we get a 5-byte LargeInteger, so we get the error. What I don't understand is where this assumption
>>> comes from, because it does not seem limited to this method.
>>> Also note that on BigEndian machines the code does not act upon this assumption, so it would not fail.
>>> Actually, I suspect that the assumption comes from "generalizing" the 32-bit one, since the methods seem to be copied
>>> and pasted.
>>> For the 32-bit variant, the comment stated that "The object may be either a positive SmallInteger or a four-byte
>>> LargeInteger". In that case the assumption was correct: anything less than 4 bytes would not be a LargeInteger.
>>> When moving to 64 bits, the same no longer holds: we can have LargeIntegers of 4, 5, 6, 7, or 8 bytes fitting in 64 bits.
>>> Also, speaking of BigEndian, it seems that, in the same class, the methods #magnitude64BitValueOf: and
>>> #positive64BitValueOf: do not take care of the BigEndian case.