eliot.miranda at gmail.com
Fri Mar 11 22:47:18 UTC 2016
On Fri, Mar 11, 2016 at 6:40 AM, Florin Mateoc <florin.mateoc at gmail.com>
> On 3/11/2016 8:28 AM, David T. Lewis wrote:
> > On Fri, Mar 11, 2016 at 12:04:22AM -0800, Eliot Miranda wrote:
> >> This is where it gets tricky. The implementations of longAt:[put:] et
> al in the subclasses are only for simulation. The real ones are in
> platforms/Cross/vm/sqMemoryAccess.h and depend on, or rather are chosen to
> deal with, the semantics of the actual underlying machine, what its word
> size, endianness and alignment restrictions are. The check for alignment
> above therefore serves to enforce the constraints that the real versions
> obey on actual hardware. Hence removing that alignment check would only be
> valid on 32-bit machines that allowed unaligned 64-bit access, a shrinking
> set these days that doesn't even include x86 in its SSE instructions.
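To make that constraint concrete, here is a minimal sketch (not the VM's actual code; the helper name is hypothetical) of the kind of guard the simulated accessors enforce before a 64-bit access:

```python
def is_aligned_64(address):
    """True when `address` is legal for an aligned 64-bit access.
    Mirrors the 8-byte alignment check the simulation performs."""
    return address % 8 == 0

print(is_aligned_64(1024))  # True: 8-byte aligned
print(is_aligned_64(1028))  # False: only 4-byte aligned
```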
> > You may want to look at package MemoryAccess in the VMMaker repository. I
> > have not integrated it into the oscog branch, but that could probably be
> > done without too much work.
> > This is an implementation of the sqMemoryAccess.h macros written entirely
> > in Slang. That means that there is no hidden CPP magic. It is written
> > in Smalltalk, and the "simulated" macros are what actually gets translated
> > to C. With full inlining, performance is about the same as the CPP macros
> > (or at least it was the last time I checked it).
> > I wrote the package when I was working out the 32/64 bit image and host
> > combinations for VMM trunk. I was finding the CPP macros rather opaque,
> > so it helped to be able to work with them in Smalltalk rather than try to
> > guess what the macros were going to do.
> > Dave
> Thanks guys, this gets closer and closer to what I am trying to
> understand: how good is the match between the simulation
> and the translated version and where does it possibly break down?
> I had this idea that, since translation necessarily works with a frozen
> and closed-world assumption, plus there is no
> real compile-time pressure, this could be used to translate almost normal
> Smalltalk instead of Slang and thus
> potentially bring even the static VM development to the masses, as a
> complement to Sista. But of course, if the
> simulation is only a best-efforts approximation, this may not work very well.
The simulation is pretty close. The JIT's simulation is closer. This is
simply historical: debugging a JIT is more difficult, so a more accurate
simulation helps there.
But if you're interested in closed world translations you might contact
Gerardo Richarte and Xavier Burroni and ask them about their JIT work.
In addition to the general question above, to help me correct some of my
> misunderstandings/intuitions, which are solely
> based on reading this relatively obscure code, can you please elucidate a
> couple of mysteries for me?
> 1. Was this fetchLong64: bug a V3 VM bug as well or just a simulation bug?
Just a simulation bug. The code generated for the real VM is correct.
> 2. Why do magnitude64BitValueOf: and positive64BitValueOf: not
> differentiate between BigEndian and LittleEndian?
They don't have to. Large Integers in Smalltalk are always little endian.
Nicolas Cellier has a prototype that would organize them as a sequence of
32-bit words, in which case they would be endian-dependent, but this is
problematic because in 32-bit systems SmallInteger is only 31 bits, and so
accessing the one and only 32-bit word of the large integer 16rFFFFFFFF
answers that large integer itself. Further, organizing it as a sequence of
16-bit words doesn't help either because 16rFFFF * 16rFFFF overflows 31-bit
SmallIntegers. So if one wants to be able to implement large integer
arithmetic non-primitively in the image using SmallIntegers, 8 bits is the
largest convenient unit.
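The little-endian, 8-bit-digit layout can be illustrated with a small Python sketch (the helper name is hypothetical; the digit layout matches Smalltalk's LargePositiveInteger bytes):

```python
def large_integer_digits(n):
    """Little-endian 8-bit digits of a non-negative integer,
    as in Smalltalk's LargePositiveInteger byte layout."""
    digits = []
    while n > 0:
        digits.append(n & 0xFF)  # least significant byte first
        n >>= 8
    return digits

print(large_integer_digits(0xFFFFFFFF))  # [255, 255, 255, 255]

# Why not 16-bit digits: the product of two digit values would not
# fit in a 31-bit SmallInteger (maxVal = 2**30 - 1).
print(0xFFFF * 0xFFFF > 2**30 - 1)  # True
```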
> 3. Why do we have both primitives above? They both seem used, but why do
> some primitives use the first one, coupled with
> separate calls for sign, instead of just using the signed version (see
Some large integer primitives want to interpret the bit patterns in large
integers as bit patterns and use magnitude64BitValueOf:. Some primitives
want to interpret the bit patterns in large integers as arithmetic values,
failing for LargeNegativeInteger, and use positive64BitValueOf:. Perhaps
positive64BitValueOf: is surplus to requirements, but it fits with
the existing set. BTW, they are /not/ primitives. They are run-time
support functions. Let's not get confused.
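The distinction can be sketched in Python (hypothetical names mirroring those run-time support functions; the real ones are Slang in the interpreter):

```python
def magnitude_64bit_value_of(n):
    """Answer the magnitude's value, ignoring the sign;
    the sign is queried separately by the caller."""
    m = abs(n)
    if m >= 1 << 64:
        raise OverflowError("does not fit in 64 bits")
    return m

def positive_64bit_value_of(n):
    """Answer the value as a positive integer; fail for negatives."""
    if n < 0:
        raise ValueError("negative value")
    return magnitude_64bit_value_of(n)

print(magnitude_64bit_value_of(-5))  # 5 (sign handled separately)
print(positive_64bit_value_of(5))    # 5
```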