[Vm-dev] VM Maker: VMMaker.oscog-cb.1236.mcz

Eliot Miranda eliot.miranda at gmail.com
Wed Apr 22 19:04:00 UTC 2015


On Wed, Apr 22, 2015 at 10:18 AM, Eliot Miranda <eliot.miranda at gmail.com>
wrote:

>
>
> On Wed, Apr 22, 2015 at 2:07 AM, Clément Bera <bera.clement at gmail.com>
> wrote:
>
>>
>> Eliot here's a good example to stress the register allocation:
>>
>> Integer>>#regStress
>> | t t2 |
>> t := self yourself.
>> t2 := self + 1.
>> ^ { t == t2 .   t == t2 .  t == t2 .  t == t2 .  t == t2 .  t == t2 .  t
>> == t2 }
>>
>> I think the resulting machine code method is beautiful; it needs to spill
>> only when it runs out of registers :-). Of course it matters mainly
>> when you use inlined bytecodes instead of only #==.
>>
>
I just looked at the code.  Gorgeous!

>> Some extra register moves are done because the JIT does not remember if a
>> temporary value is currently in a register (it moves the temp t to the
>> same register each time, even though the register's value has not
>> changed). Maybe we should add a feature that remembers whether temporaries
>> are currently in a register, and if so, when doing push temp, only push
>> the register directly on the simStack; and on a temporary store or stack
>> flush, the register associated with the temp would somehow no longer be
>> valid...
>>
>
> Yes, that would be ideal.  This would benefit from a full merge of the
> simStack as we discussed, because otherwise the lifetime is likely to be
> limited.  I've had the vague desire for stack slots to be mapped to
> registers, for example with the floating-point stack-to-register mapping
> we've discussed.  Before, I'd thought that to do it for normal temporaries
> the Cogit would have to decide how many registers it wanted to allocate to
> temporary variables and how many to expressions, for example in the initial
> scan pass.  But that's a poor idea and unlikely to work well.  If you can
> instead do it as part of normal register allocation, that would (as I said
> earlier) be ideal.  This is very exciting.
>
> Let me know when you want the full merge support.
>
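
The caching scheme sketched above can be modelled in a few lines (an
illustrative Python model, not the Cogit's actual simStack code; all names
here are made up):

```python
# Model of caching which register currently holds each temporary, so
# repeated pushTemp bytecodes can push the register directly instead of
# re-loading the temp from the frame each time.

class TempRegisterCache:
    def __init__(self, registers):
        self.free = list(registers)   # registers available for allocation
        self.temp_to_reg = {}         # tempIndex -> register holding it

    def push_temp(self, temp_index):
        """Return (register, loadNeeded) for pushing this temp."""
        reg = self.temp_to_reg.get(temp_index)
        if reg is not None:
            return reg, False         # cached: no register move needed
        reg = self.free.pop(0)        # allocate; a real JIT may spill here
        self.temp_to_reg[temp_index] = reg
        return reg, True              # first use: emit the load

    def store_temp(self, temp_index):
        """A store into the temp invalidates its cached binding."""
        reg = self.temp_to_reg.pop(temp_index, None)
        if reg is not None:
            self.free.append(reg)

    def flush(self):
        """A stack flush (e.g. before a send) invalidates everything."""
        self.free.extend(self.temp_to_reg.values())
        self.temp_to_reg.clear()
```

With this, the second and later pushes of the same temp cost no move at
all, which is exactly the redundancy visible in the regStress example.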
>
>
>>
>> 2015-04-22 10:08 GMT+02:00 <commits at source.squeak.org>:
>>
>>>
>>> Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
>>> http://source.squeak.org/VMMaker/VMMaker.oscog-cb.1236.mcz
>>>
>>> ==================== Summary ====================
>>>
>>> Name: VMMaker.oscog-cb.1236
>>> Author: cb
>>> Time: 22 April 2015, 10:07:41.028 am
>>> UUID: cf4af270-9dfa-4f2a-b84c-0cdb3c5f4913
>>> Ancestors: VMMaker.oscog-tpr.1235
>>>
>>> changed the names of the register allocation methods to more explicit
>>> names, for instance, allocateOneReg -> allocateRegForStackTop
>>>
>>> Fixed a bug where the best register was not allocated in #== (a reg was
>>> allocated based on the stackTop value instead of ssValue: 1)
>>>
>>> =============== Diff against VMMaker.oscog-tpr.1235 ===============
>>>
>>> Item was changed:
>>>   ----- Method:
>>> SistaStackToRegisterMappingCogit>>genExtJumpIfNotInstanceOfBehaviorsOrPopBytecode
>>> (in category 'bytecode generators') -----
>>>   genExtJumpIfNotInstanceOfBehaviorsOrPopBytecode
>>>         "SistaV1: *     254             11111110        kkkkkkkk
>>> jjjjjjjj                branch If Not Instance Of Behavior/Array Of
>>> Behavior kkkkkkkk (+ Extend A * 256, where Extend A >= 0) distance jjjjjjjj
>>> (+ Extend B * 256, where Extend B >= 0)"
>>>
>>>         | reg literal distance targetFixUp |
>>>
>>> +       reg := self allocateRegForStackTopEntry.
>>> -       reg := self allocateOneRegister.
>>>         self ssTop popToReg: reg.
>>>
>>>         literal := self getLiteral: (extA * 256 + byte1).
>>>         extA := 0.
>>>         distance := extB * 256 + byte2.
>>>         extB := 0.
>>>
>>>         targetFixUp := (self ensureFixupAt: bytecodePC + 3 + distance -
>>> initialPC) asUnsignedInteger.
>>>
>>>         (objectMemory isArrayNonImm: literal)
>>>                 ifTrue: [objectRepresentation branchIf: reg
>>> notInstanceOfBehaviors: literal target: targetFixUp]
>>>                 ifFalse: [objectRepresentation branchIf: reg
>>> notInstanceOfBehavior: literal target: targetFixUp].
>>>
>>>         self genPopStackBytecode.
>>>
>>>         ^0!
>>>
>>> Item was changed:
>>>   ----- Method: SistaStackToRegisterMappingCogit>>genJumpIf:to: (in
>>> category 'bytecode generator support') -----
>>>   genJumpIf: boolean to: targetBytecodePC
>>>         "The heart of performance counting in Sista.  Conditional
>>> branches are 6 times less
>>>          frequent than sends and can provide basic block frequencies
>>> (send counters can't).
>>>          Each conditional has a 32-bit counter split into an upper 16
>>> bits counting executions
>>>          and a lower half counting untaken executions of the branch.
>>> Executing the branch
>>>          decrements the upper half, tripping if the count goes
>>> negative.  Not taking the branch
>>>          decrements the lower half.  N.B. We *do not* eliminate dead
>>> branches (true ifTrue:/true ifFalse:)
>>>          so that scanning for send and branch data is simplified and
>>> that branch data is correct."
>>>         <inline: false>
>>>         | desc ok counterAddress countTripped retry counterReg |
>>>         <var: #ok type: #'AbstractInstruction *'>
>>>         <var: #desc type: #'CogSimStackEntry *'>
>>>         <var: #retry type: #'AbstractInstruction *'>
>>>         <var: #countTripped type: #'AbstractInstruction *'>
>>>
>>>         (coInterpreter isOptimizedMethod: methodObj) ifTrue: [ ^ super
>>> genJumpIf: boolean to: targetBytecodePC ].
>>>
>>>         self ssFlushTo: simStackPtr - 1.
>>>         desc := self ssTop.
>>>         self ssPop: 1.
>>>         desc popToReg: TempReg.
>>>
>>> +       counterReg := self allocateAnyReg.
>>> -       counterReg := self allocateRegisterNotConflictingWith: 0.
>>>         counterAddress := counters + ((self sizeof: #sqInt) *
>>> counterIndex).
>>>         counterIndex := counterIndex + 1.
>>>         self flag: 'will need to use MoveAw32:R: if 64 bits'.
>>>         self assert: objectMemory wordSize = CounterBytes.
>>>         retry := self MoveAw: counterAddress R: counterReg.
>>>         self SubCq: 16r10000 R: counterReg. "Count executed"
>>>         "Don't write back if we trip; avoids wrapping count back to
>>> initial value, and if we trip we don't execute."
>>>         countTripped := self JumpCarry: 0.
>>>         self MoveR: counterReg Aw: counterAddress. "write back"
>>>
>>>         "Cunning trick by LPD.  If true and false are contiguous
>>> subtract the smaller.
>>>          Correct result is either 0 or the distance between them.  If
>>> result is not 0 or
>>>          their distance send mustBeBoolean."
>>>         self assert: (objectMemory objectAfter: objectMemory
>>> falseObject) = objectMemory trueObject.
>>>         self annotate: (self SubCw: boolean R: TempReg) objRef: boolean.
>>>         self JumpZero: (self ensureFixupAt: targetBytecodePC -
>>> initialPC).
>>>
>>>         self SubCq: 1 R: counterReg. "Count untaken"
>>>         self MoveR: counterReg Aw: counterAddress. "write back"
>>>
>>>         self CmpCq: (boolean == objectMemory falseObject
>>>                                         ifTrue: [objectMemory trueObject
>>> - objectMemory falseObject]
>>>                                         ifFalse: [objectMemory
>>> falseObject - objectMemory trueObject])
>>>                 R: TempReg.
>>>         ok := self JumpZero: 0.
>>>         self MoveCq: 0 R: counterReg. "if counterReg is 0 this is a
>>> mustBeBoolean, not a counter trip."
>>>         countTripped jmpTarget:
>>>                 (self CallRT: (boolean == objectMemory falseObject
>>>                                                 ifTrue:
>>> [ceSendMustBeBooleanAddFalseTrampoline]
>>>                                                 ifFalse:
>>> [ceSendMustBeBooleanAddTrueTrampoline])).
>>>         "If we're in an image which hasn't got the Sista code loaded
>>> then the ceCounterTripped:
>>>          trampoline will return directly to machine code, returning the
>>> boolean.  So the code should
>>>          jump back to the retry point. The trampoline makes sure that
>>> TempReg has been reloaded."
>>>         self annotateBytecode: self Label.
>>>         self Jump: retry.
>>>         ok jmpTarget: self Label.
>>>         ^0!
>>>
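
The split-counter scheme described in the method comment above can be
modelled in a few lines (an illustrative Python sketch, not VMMaker code;
the initial counter value below is made up for the example):

```python
# Model of a Sista branch counter: one 32-bit cell whose upper 16 bits
# count executions down and whose lower 16 bits count untaken executions
# down.  Subtracting 16r10000 decrements the upper half; a borrow out of
# the word (the JumpCarry in the generated code) means the counter tripped.

def execute_branch(counter, taken):
    """Simulate one execution of the branch; return (counter, tripped)."""
    counter -= 0x10000            # "Count executed": decrement upper half
    if counter < 0:               # borrow: execution budget exhausted
        return counter & 0xFFFFFFFF, True   # trip; don't write back
    if not taken:
        counter -= 1              # "Count untaken": decrement lower half
    return counter, False
```

Both basic-block frequencies are then recoverable from the final value:
executions are the initial minus the final upper half, and untaken
executions are the initial minus the final lower half.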
>>> Item was changed:
>>>   ----- Method:
>>> SistaStackToRegisterMappingCogit>>genSpecialSelectorComparison (in category
>>> 'bytecode generators') -----
>>>   genSpecialSelectorComparison
>>>         "Override to count inlined branches if followed by a conditional
>>> branch.
>>>          We borrow the following conditional branch's counter and when
>>> about to
>>>          inline the comparison we decrement the counter (without writing
>>> it back)
>>>          and if it trips simply abort the inlining, falling back to the
>>> normal send which
>>>          will then continue to the conditional branch which will trip
>>> and enter the abort."
>>>         | nextPC postBranchPC targetBytecodePC primDescriptor
>>> branchDescriptor nExts
>>>           rcvrIsInt argIsInt rcvrInt argInt result jumpNotSmallInts
>>> inlineCAB annotateInst
>>>           counterAddress countTripped counterReg |
>>>         <var: #countTripped type: #'AbstractInstruction *'>
>>>         <var: #primDescriptor type: #'BytecodeDescriptor *'>
>>>         <var: #jumpNotSmallInts type: #'AbstractInstruction *'>
>>>         <var: #branchDescriptor type: #'BytecodeDescriptor *'>
>>>
>>>         (coInterpreter isOptimizedMethod: methodObj) ifTrue: [ ^ super
>>> genSpecialSelectorComparison ].
>>>
>>>         self ssFlushTo: simStackPtr - 2.
>>>         primDescriptor := self generatorAt: byte0.
>>>         argIsInt := self ssTop type = SSConstant
>>>                                  and: [objectMemory isIntegerObject:
>>> (argInt := self ssTop constant)].
>>>         rcvrIsInt := (self ssValue: 1) type = SSConstant
>>>                                  and: [objectMemory isIntegerObject:
>>> (rcvrInt := (self ssValue: 1) constant)].
>>>
>>>         "short-cut the jump if operands are SmallInteger constants."
>>>         (argIsInt and: [rcvrIsInt]) ifTrue:
>>>                 [self cCode: '' inSmalltalk: "In Simulator ints are
>>> unsigned..."
>>>                                 [rcvrInt := objectMemory integerValueOf:
>>> rcvrInt.
>>>                                 argInt := objectMemory integerValueOf:
>>> argInt].
>>>                  primDescriptor opcode caseOf: {
>>>                         [JumpLess]                              ->
>>> [result := rcvrInt < argInt].
>>>                         [JumpLessOrEqual]               -> [result :=
>>> rcvrInt <= argInt].
>>>                         [JumpGreater]                   -> [result :=
>>> rcvrInt > argInt].
>>>                         [JumpGreaterOrEqual]    -> [result := rcvrInt >=
>>> argInt].
>>>                         [JumpZero]                              ->
>>> [result := rcvrInt = argInt].
>>>                         [JumpNonZero]                   -> [result :=
>>> rcvrInt ~= argInt] }.
>>>                  "Must enter any annotatedConstants into the map"
>>>                  self annotateBytecodeIfAnnotated: (self ssValue: 1).
>>>                  self annotateBytecodeIfAnnotated: self ssTop.
>>>                  "Must annotate the bytecode for correct pc mapping."
>>>                  self ssPop: 2.
>>>                  ^self ssPushAnnotatedConstant: (result
>>>
>>>                 ifTrue: [objectMemory trueObject]
>>>
>>>                 ifFalse: [objectMemory falseObject])].
>>>
>>>         nextPC := bytecodePC + primDescriptor numBytes.
>>>         nExts := 0.
>>>         [branchDescriptor := self generatorAt: (objectMemory fetchByte:
>>> nextPC ofObject: methodObj) + (byte0 bitAnd: 256).
>>>          branchDescriptor isExtension] whileTrue:
>>>                 [nExts := nExts + 1.
>>>                  nextPC := nextPC + branchDescriptor numBytes].
>>>         "Only interested in inlining if followed by a conditional
>>> branch."
>>>         inlineCAB := branchDescriptor isBranchTrue or: [branchDescriptor
>>> isBranchFalse].
>>>         "Further, only interested in inlining = and ~= if there's a
>>> SmallInteger constant involved.
>>>          The relational operators successfully statically predict
>>> SmallIntegers; the equality operators do not."
>>>         (inlineCAB and: [primDescriptor opcode = JumpZero or:
>>> [primDescriptor opcode = JumpNonZero]]) ifTrue:
>>>                 [inlineCAB := argIsInt or: [rcvrIsInt]].
>>>         inlineCAB ifFalse:
>>>                 [^self genSpecialSelectorSend].
>>>
>>>         targetBytecodePC := nextPC
>>>                                                         +
>>> branchDescriptor numBytes
>>>                                                         + (self spanFor:
>>> branchDescriptor at: nextPC exts: nExts in: methodObj).
>>>         postBranchPC := nextPC + branchDescriptor numBytes.
>>>         argIsInt
>>>                 ifTrue:
>>>                         [(self ssValue: 1) popToReg: ReceiverResultReg.
>>>                          annotateInst := self ssTop annotateUse.
>>>                          self ssPop: 2.
>>>                          self MoveR: ReceiverResultReg R: TempReg]
>>>                 ifFalse:
>>>                         [self marshallSendArguments: 1.
>>>                          self MoveR: Arg0Reg R: TempReg.
>>>                          rcvrIsInt ifFalse:
>>>                                 [objectRepresentation
>>> isSmallIntegerTagNonZero
>>>                                         ifTrue: [self AndR:
>>> ReceiverResultReg R: TempReg]
>>>                                         ifFalse: [self OrR:
>>> ReceiverResultReg R: TempReg]]].
>>>         jumpNotSmallInts := objectRepresentation
>>> genJumpNotSmallIntegerInScratchReg: TempReg.
>>>
>>> +       counterReg := self allocateRegNotConflictingWith: (self
>>> registerMaskFor: ReceiverResultReg and: Arg0Reg). "Use this as the count
>>> reg, can't conflict with the registers for the arg and the receiver"
>>> -       counterReg := self allocateRegisterNotConflictingWith: (self
>>> registerMaskFor: ReceiverResultReg and: Arg0Reg). "Use this as the count
>>> reg, can't conflict with the registers for the arg and the receiver"
>>>         self ssAllocateRequiredReg: counterReg. "Use this as the count
>>> reg."
>>>         counterAddress := counters + ((self sizeof: #sqInt) *
>>> counterIndex).
>>>         self flag: 'will need to use MoveAw32:R: if 64 bits'.
>>>         self assert: objectMemory wordSize = CounterBytes.
>>>         self MoveAw: counterAddress R: counterReg.
>>>         self SubCq: 16r10000 R: counterReg. "Count executed"
>>>         "If counter trips simply abort the inlined comparison and send
>>> continuing to the following
>>>          branch *without* writing back.  A double decrement will not
>>> trip the second time."
>>>         countTripped := self JumpCarry: 0.
>>>         self MoveR: counterReg Aw: counterAddress. "write back"
>>>
>>>         argIsInt
>>>                 ifTrue: [annotateInst
>>>                                         ifTrue: [self annotateBytecode:
>>> (self CmpCq: argInt R: ReceiverResultReg)]
>>>                                         ifFalse: [self CmpCq: argInt R:
>>> ReceiverResultReg]]
>>>                 ifFalse: [self CmpR: Arg0Reg R: ReceiverResultReg].
>>>         "Cmp is weird/backwards so invert the comparison.  Further since
>>> there is a following conditional
>>>          jump bytecode define non-merge fixups and leave the cond
>>> bytecode to set the mergeness."
>>>         self gen: (branchDescriptor isBranchTrue
>>>                                 ifTrue: [primDescriptor opcode]
>>>                                 ifFalse: [self inverseBranchFor:
>>> primDescriptor opcode])
>>>                 operand: (self ensureNonMergeFixupAt: targetBytecodePC -
>>> initialPC) asUnsignedInteger.
>>>         self SubCq: 1 R: counterReg. "Count untaken"
>>>         self MoveR: counterReg Aw: counterAddress. "write back"
>>>         self Jump: (self ensureNonMergeFixupAt: postBranchPC -
>>> initialPC).
>>>         countTripped jmpTarget: (jumpNotSmallInts jmpTarget: self Label).
>>>         argIsInt ifTrue:
>>>                 [self MoveCq: argInt R: Arg0Reg].
>>>         ^self genMarshalledSend: (coInterpreter specialSelector: byte0 -
>>> self firstSpecialSelectorBytecodeOffset)
>>>                 numArgs: 1
>>>                 sendTable: ordinarySendTrampolines!
>>>
>>> Item was changed:
>>>   ----- Method:
>>> SistaStackToRegisterMappingCogit>>genSpecialSelectorEqualsEqualsWithForwarders
>>> (in category 'bytecode generators') -----
>>>   genSpecialSelectorEqualsEqualsWithForwarders
>>>         "Override to count inlined branches if followed by a conditional
>>> branch.
>>>          We borrow the following conditional branch's counter and when
>>> about to
>>>          inline the comparison we decrement the counter (without writing
>>> it back)
>>>          and if it trips simply abort the inlining, falling back to the
>>> normal send which
>>>          will then continue to the conditional branch which will trip
>>> and enter the abort."
>>>         | nextPC postBranchPC targetBytecodePC primDescriptor
>>> branchDescriptor nExts label counterReg fixup
>>>           counterAddress countTripped unforwardArg unforwardRcvr argReg
>>> rcvrReg regMask |
>>>         <var: #fixup type: #'BytecodeFixup *'>
>>>         <var: #countTripped type: #'AbstractInstruction *'>
>>>         <var: #primDescriptor type: #'BytecodeDescriptor *'>
>>>         <var: #branchDescriptor type: #'BytecodeDescriptor *'>
>>>
>>>         ((coInterpreter isOptimizedMethod: methodObj) or: [needsFrame
>>> not]) ifTrue: [ ^ super genSpecialSelectorEqualsEquals ].
>>>
>>>         primDescriptor := self generatorAt: byte0.
>>>         regMask := 0.
>>>
>>>         nextPC := bytecodePC + primDescriptor numBytes.
>>>         nExts := 0.
>>>         [branchDescriptor := self generatorAt: (objectMemory fetchByte:
>>> nextPC ofObject: methodObj) + (byte0 bitAnd: 256).
>>>          branchDescriptor isExtension] whileTrue:
>>>                 [nExts := nExts + 1.
>>>                  nextPC := nextPC + branchDescriptor numBytes].
>>>
>>>         (branchDescriptor isBranchTrue or: [branchDescriptor
>>> isBranchFalse]) ifTrue:
>>>                 [self ssFlushTo: simStackPtr - 2].
>>>
>>>         unforwardRcvr := (objectRepresentation isUnannotatableConstant:
>>> (self ssValue: 1)) not.
>>>         unforwardArg := (objectRepresentation isUnannotatableConstant:
>>> self ssTop) not.
>>>
>>>         "If the rcvr or the arg is an annotatable constant, we need to
>>> push it to a register; otherwise the forwarder check can't jump back to
>>> the comparison after unforwarding the constant."
>>>         unforwardArg
>>>                 ifTrue:
>>>                         [unforwardRcvr
>>>                                 ifTrue:
>>> +                                       [self
>>> allocateRegForStackTopTwoEntriesInto: [:rTop :rNext| argReg := rTop.
>>> rcvrReg := rNext].
>>> -                                       [self allocateTwoRegistersInto:
>>> [:rTop :rNext| argReg := rTop. rcvrReg := rNext].
>>>                                          self ssTop popToReg: argReg.
>>>                                          (self ssValue:1) popToReg:
>>> rcvrReg]
>>>                                 ifFalse:
>>> +                                       [argReg := self
>>> allocateRegForStackTopEntry.
>>> -                                       [argReg := self
>>> allocateOneRegister.
>>>                                          self ssTop popToReg: argReg]]
>>>                 ifFalse:
>>>                         [self assert: unforwardRcvr.
>>> +                        rcvrReg := self allocateRegForStackEntryAt: 1.
>>> -                        rcvrReg := self allocateOneRegister.
>>>                          (self ssValue:1) popToReg: rcvrReg].
>>>
>>>         argReg ifNotNil: [ regMask := self registerMaskFor: argReg ].
>>>         rcvrReg ifNotNil: [ regMask := regMask bitOr: (self
>>> registerMaskFor: rcvrReg) ].
>>>
>>>         "Here we can use Cq because the constant does not need to be
>>> annotated"
>>>         self assert: (unforwardArg not or: [argReg notNil]).
>>>         self assert: (unforwardRcvr not or: [rcvrReg notNil]).
>>>
>>>         "Only interested in inlining if followed by a conditional
>>> branch."
>>>         (branchDescriptor isBranchTrue or: [branchDescriptor
>>> isBranchFalse]) ifFalse:
>>>                 [^ self genDirectEqualsEqualsArg: unforwardArg rcvr:
>>> unforwardRcvr argReg: argReg rcvrReg: rcvrReg].
>>>
>>>         targetBytecodePC := nextPC
>>>                                                         +
>>> branchDescriptor numBytes
>>>                                                         + (self spanFor:
>>> branchDescriptor at: nextPC exts: nExts in: methodObj).
>>>         postBranchPC := nextPC + branchDescriptor numBytes.
>>>
>>> +       counterReg := self allocateRegNotConflictingWith: regMask. "Use
>>> this as the count reg, can't conflict with the registers for the arg and
>>> the receiver of #==."
>>> -       counterReg := self allocateRegisterNotConflictingWith: regMask.
>>> "Use this as the count reg, can't conflict with the registers for the arg
>>> and the receiver of #==."
>>>         counterAddress := counters + ((self sizeof: #sqInt) *
>>> counterIndex).
>>>         self flag: 'will need to use MoveAw32:R: if 64 bits'.
>>>         self assert: objectMemory wordSize = CounterBytes.
>>>         self MoveAw: counterAddress R: counterReg.
>>>         self SubCq: 16r10000 R: counterReg. "Count executed"
>>>         "If counter trips simply abort the inlined comparison and send
>>> continuing to the following
>>>          branch *without* writing back.  A double decrement will not
>>> trip the second time."
>>>         countTripped := self JumpCarry: 0.
>>>         self MoveR: counterReg Aw: counterAddress. "write back"
>>>
>>>         self assert: (unforwardArg or: [ unforwardRcvr ]).
>>>
>>>         label := self Label.
>>>
>>>         unforwardArg
>>>                 ifFalse: [ self CmpCq: self ssTop constant R: rcvrReg ]
>>>                 ifTrue: [ unforwardRcvr
>>>                         ifFalse: [ self CmpCq: (self ssValue: 1)
>>> constant R: argReg ]
>>>                         ifTrue: [ self CmpR: argReg R: rcvrReg ] ].
>>>
>>>         self ssPop: 2.
>>>         branchDescriptor isBranchTrue ifTrue:
>>>                 [ fixup := self ensureNonMergeFixupAt: postBranchPC -
>>> initialPC.
>>>                 self JumpZero: (self ensureNonMergeFixupAt:
>>> targetBytecodePC - initialPC) asUnsignedInteger.
>>>                 unforwardArg ifTrue: [ objectRepresentation
>>> genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo:
>>> label  ].
>>>                 unforwardRcvr ifTrue: [ objectRepresentation
>>> genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg jumpBackTo:
>>> label ] ].
>>>         branchDescriptor isBranchFalse ifTrue:
>>>                 [ fixup := self ensureNonMergeFixupAt: targetBytecodePC
>>> - initialPC.
>>>                 self JumpZero: (self ensureNonMergeFixupAt: postBranchPC
>>> - initialPC) asUnsignedInteger.
>>>                 unforwardArg ifTrue: [ objectRepresentation
>>> genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: label
>>> ].
>>>                 unforwardRcvr ifTrue: [ objectRepresentation
>>> genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg jumpBackTo:
>>> label ] ].
>>>         self ssPop: -2.
>>>
>>>         "the jump has not been taken and forwarders have been followed."
>>>         self SubCq: 1 R: counterReg. "Count untaken"
>>>         self MoveR: counterReg Aw: counterAddress. "write back"
>>>         self Jump: (self ensureNonMergeFixupAt: postBranchPC -
>>> initialPC).
>>>
>>>         countTripped jmpTarget: self Label.
>>>
>>>         "inlined version of #== ignoring the branchDescriptor if the
>>> counter trips to have normal state for the optimizer"
>>>         ^ self genDirectEqualsEqualsArg: unforwardArg rcvr:
>>> unforwardRcvr argReg: argReg rcvrReg: rcvrReg!
>>>
>>> Item was added:
>>> + ----- Method: StackToRegisterMappingCogit>>allocateAnyReg (in category
>>> 'simulation stack') -----
>>> + allocateAnyReg
>>> +       < inline: true >
>>> +       ^ self allocateRegNotConflictingWith: 0!
>>>
>>> Item was removed:
>>> - ----- Method: StackToRegisterMappingCogit>>allocateOneRegister (in
>>> category 'simulation stack') -----
>>> - allocateOneRegister
>>> -
>>> -       self ssTop type = SSRegister ifTrue: [ ^ self ssTop register].
>>> -
>>> -       ^ self allocateRegisterNotConflictingWith: 0
>>> -       !
>>>
>>> Item was added:
>>> + ----- Method: StackToRegisterMappingCogit>>allocateRegForStackEntryAt:
>>> (in category 'simulation stack') -----
>>> + allocateRegForStackEntryAt: index
>>> +       <inline: true>
>>> +       <var: #stackEntry type: #'CogSimStackEntry *'>
>>> +       | stackEntry |
>>> +       stackEntry := self ssValue: index.
>>> +       stackEntry type = SSRegister ifTrue: [ ^ stackEntry register].
>>> +       ^ self allocateAnyReg
>>> +       !
>>>
>>> Item was added:
>>> + ----- Method: StackToRegisterMappingCogit>>allocateRegForStackTopEntry
>>> (in category 'simulation stack') -----
>>> + allocateRegForStackTopEntry
>>> +       ^ self allocateRegForStackEntryAt: 0
>>> +       !
>>>
>>> Item was added:
>>> + ----- Method:
>>> StackToRegisterMappingCogit>>allocateRegForStackTopThreeEntriesInto:thirdIsReceiver:
>>> (in category 'simulation stack') -----
>>> + allocateRegForStackTopThreeEntriesInto: trinaryBlock thirdIsReceiver:
>>> thirdIsReceiver
>>> +       <inline: true>
>>> +       | topRegistersMask rTop rNext rThird |
>>> +
>>> +       topRegistersMask := 0.
>>> +
>>> +       (self ssTop type = SSRegister and: [ thirdIsReceiver not or: [
>>> self ssTop register ~= ReceiverResultReg ] ]) ifTrue:
>>> +               [ topRegistersMask := self registerMaskFor: (rTop :=
>>> self ssTop register)].
>>> +       ((self ssValue: 1) type = SSRegister and: [ thirdIsReceiver not
>>> or: [ (self ssValue: 1) register ~= ReceiverResultReg ] ]) ifTrue:
>>> +               [ topRegistersMask := topRegistersMask bitOr: (self
>>> registerMaskFor: (rNext := (self ssValue: 1) register))].
>>> +       ((self ssValue: 2) type = SSRegister and: [thirdIsReceiver not
>>> or: [ (self ssValue: 2) register = ReceiverResultReg ] ]) ifTrue:
>>> +               [ topRegistersMask := topRegistersMask bitOr: (self
>>> registerMaskFor: (rThird := (self ssValue: 2) register))].
>>> +
>>> +       rThird ifNil:
>>> +               [ thirdIsReceiver
>>> +                       ifTrue:
>>> +                               [ rThird := ReceiverResultReg.  "Free
>>> ReceiverResultReg if it was not free"
>>> +                               (self register: ReceiverResultReg
>>> isInMask: self liveRegisters) ifTrue:
>>> +                                       [ self ssAllocateRequiredReg:
>>> ReceiverResultReg ].
>>> +                               optStatus isReceiverResultRegLive: false
>>> ]
>>> +                       ifFalse: [ rThird := self
>>> allocateRegNotConflictingWith: topRegistersMask ].
>>> +               topRegistersMask := topRegistersMask bitOr: (self
>>> registerMaskFor: rThird) ].
>>> +
>>> +       rTop ifNil: [
>>> +               rTop := self allocateRegNotConflictingWith:
>>> topRegistersMask.
>>> +               topRegistersMask := topRegistersMask bitOr: (self
>>> registerMaskFor: rTop) ].
>>> +
>>> +       rNext ifNil: [ rNext := self allocateRegNotConflictingWith:
>>> topRegistersMask ].
>>> +
>>> +       ^ trinaryBlock value: rTop value: rNext value: rThird
>>> +
>>> +       !
>>>
>>> Item was added:
>>> + ----- Method:
>>> StackToRegisterMappingCogit>>allocateRegForStackTopTwoEntriesInto: (in
>>> category 'simulation stack') -----
>>> + allocateRegForStackTopTwoEntriesInto: binaryBlock
>>> +       <inline: true>
>>> +       | topRegistersMask rTop rNext |
>>> +
>>> +       topRegistersMask := 0.
>>> +
>>> +       self ssTop type = SSRegister ifTrue:
>>> +               [ topRegistersMask := self registerMaskFor: (rTop :=
>>> self ssTop register)].
>>> +       (self ssValue: 1) type = SSRegister ifTrue:
>>> +               [ topRegistersMask := topRegistersMask bitOr: (self
>>> registerMaskFor: (rNext := (self ssValue: 1) register))].
>>> +
>>> +       rTop ifNil: [ rTop := self allocateRegNotConflictingWith:
>>> topRegistersMask ].
>>> +
>>> +       rNext ifNil: [ rNext := self allocateRegNotConflictingWith:
>>> (self registerMaskFor: rTop) ].
>>> +
>>> +       ^ binaryBlock value: rTop value: rNext
>>> +
>>> +       !
>>>
>>> Item was added:
>>> + ----- Method:
>>> StackToRegisterMappingCogit>>allocateRegNotConflictingWith: (in category
>>> 'simulation stack') -----
>>> + allocateRegNotConflictingWith: regMask
>>> +       | reg |
>>> +       "if there's a free register, use it"
>>> +       reg := backEnd availableRegisterOrNilFor: (self liveRegisters
>>> bitOr: regMask).
>>> +       reg ifNil: "No free register, choose one that does not conflict
>>> with regMask"
>>> +               [reg := self freeAnyRegNotConflictingWith: regMask].
>>> +       reg = ReceiverResultReg ifTrue: "If we've allocated
>>> RcvrResultReg, it's not live anymore"
>>> +               [ optStatus isReceiverResultRegLive: false ].
>>> +       ^ reg!
>>>
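
The policy in allocateRegNotConflictingWith: above can be pictured with a
small model (illustrative Python, registers as bit positions in a mask;
the spill choice here is simplified and hypothetical, not the back end's
actual heuristic):

```python
# Prefer a register that is neither live nor in the conflict mask; failing
# that, pick (and conceptually spill) a live register that is at least
# outside the mask, removing it from the live set.

def allocate_reg_not_conflicting_with(reg_mask, live_mask, num_regs=4):
    """Return (register, new live_mask)."""
    for r in range(num_regs):
        bit = 1 << r
        if not (bit & (live_mask | reg_mask)):
            return r, live_mask           # free register: no spill needed
    for r in range(num_regs):
        bit = 1 << r
        if not (bit & reg_mask):
            return r, live_mask & ~bit    # spill: old contents flushed
    raise RuntimeError('all registers conflict with the mask')
```

The conflict mask is how callers such as genSpecialSelectorComparison keep
the counter register away from ReceiverResultReg and Arg0Reg.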
>>> Item was removed:
>>> - ----- Method:
>>> StackToRegisterMappingCogit>>allocateRegisterNotConflictingWith: (in
>>> category 'simulation stack') -----
>>> - allocateRegisterNotConflictingWith: regMask
>>> -       | reg |
>>> -       "if there's a free register, use it"
>>> -       reg := backEnd availableRegisterOrNilFor: (self liveRegisters
>>> bitOr: regMask).
>>> -       reg ifNil: "No free register, choose one that does not conflict
>>> with regMask"
>>> -               [reg := self freeRegisterNotConflictingWith: regMask].
>>> -       reg = ReceiverResultReg ifTrue: "If we've allocated
>>> RcvrResultReg, it's not live anymore"
>>> -               [ optStatus isReceiverResultRegLive: false ].
>>> -       ^ reg!
>>>
>>> Item was removed:
>>> - ----- Method:
>>> StackToRegisterMappingCogit>>allocateThreeRegistersInto:thirdIsReceiver:
>>> (in category 'simulation stack') -----
>>> - allocateThreeRegistersInto: trinaryBlock thirdIsReceiver:
>>> thirdIsReceiver
>>> -       <inline: true>
>>> -       | topRegistersMask rTop rNext rThird |
>>> -
>>> -       topRegistersMask := 0.
>>> -
>>> -       (self ssTop type = SSRegister and: [ thirdIsReceiver not or: [
>>> self ssTop register ~= ReceiverResultReg ] ]) ifTrue:
>>> -               [ topRegistersMask := self registerMaskFor: (rTop :=
>>> self ssTop register)].
>>> -       ((self ssValue: 1) type = SSRegister and: [ thirdIsReceiver not
>>> or: [ (self ssValue: 1) register ~= ReceiverResultReg ] ]) ifTrue:
>>> -               [ topRegistersMask := topRegistersMask bitOr: (self
>>> registerMaskFor: (rNext := (self ssValue: 1) register))].
>>> -       ((self ssValue: 2) type = SSRegister and: [thirdIsReceiver not
>>> or: [ (self ssValue: 2) register = ReceiverResultReg ] ]) ifTrue:
>>> -               [ topRegistersMask := topRegistersMask bitOr: (self
>>> registerMaskFor: (rThird := (self ssValue: 2) register))].
>>> -
>>> -       rThird ifNil:
>>> -               [ thirdIsReceiver
>>> -                       ifTrue:
>>> -                               [ rThird := ReceiverResultReg.  "Free
>>> ReceiverResultReg if it was not free"
>>> -                               (self register: ReceiverResultReg
>>> isInMask: self liveRegisters) ifTrue:
>>> -                                       [ self ssAllocateRequiredReg:
>>> ReceiverResultReg ].
>>> -                               optStatus isReceiverResultRegLive: false
>>> ]
>>> -                       ifFalse: [ rThird := self
>>> allocateRegisterNotConflictingWith: topRegistersMask ].
>>> -               topRegistersMask := topRegistersMask bitOr: (self
>>> registerMaskFor: rThird) ].
>>> -
>>> -       rTop ifNil: [
>>> -               rTop := self allocateRegisterNotConflictingWith:
>>> topRegistersMask.
>>> -               topRegistersMask := topRegistersMask bitOr: (self
>>> registerMaskFor: rTop) ].
>>> -
>>> -       rNext ifNil: [ rNext := self allocateRegisterNotConflictingWith:
>>> topRegistersMask ].
>>> -
>>> -       ^ trinaryBlock value: rTop value: rNext value: rThird
>>> -
>>> -       !
>>>
>>> Item was removed:
>>> - ----- Method: StackToRegisterMappingCogit>>allocateTwoRegistersInto:
>>> (in category 'simulation stack') -----
>>> - allocateTwoRegistersInto: binaryBlock
>>> -       <inline: true>
>>> -       | topRegistersMask rTop rNext |
>>> -
>>> -       topRegistersMask := 0.
>>> -
>>> -       self ssTop type = SSRegister ifTrue:
>>> -               [ topRegistersMask := self registerMaskFor: (rTop :=
>>> self ssTop register)].
>>> -       (self ssValue: 1) type = SSRegister ifTrue:
>>> -               [ topRegistersMask := topRegistersMask bitOr: (self
>>> registerMaskFor: (rNext := (self ssValue: 1) register))].
>>> -
>>> -       rTop ifNil: [ rTop := self allocateRegisterNotConflictingWith:
>>> topRegistersMask ].
>>> -
>>> -       rNext ifNil: [ rNext := self allocateRegisterNotConflictingWith:
>>> (self registerMaskFor: rTop) ].
>>> -
>>> -       ^ binaryBlock value: rTop value: rNext
>>> -
>>> -       !
>>>
>>> Item was removed:
>>> - ----- Method: StackToRegisterMappingCogit>>availableRegister (in
>>> category 'simulation stack') -----
>>> - availableRegister
>>> -       | reg |
>>> -       reg := self availableRegisterOrNil.
>>> -       reg ifNil: [self error: 'no available register'].
>>> -       ^reg!
>>>
>>> Item was removed:
>>> - ----- Method: StackToRegisterMappingCogit>>availableRegisterOrNil (in
>>> category 'simulation stack') -----
>>> - availableRegisterOrNil
>>> -       ^backEnd availableRegisterOrNilFor: self liveRegisters!
>>>
>>> Item was added:
>>> + ----- Method:
>>> StackToRegisterMappingCogit>>freeAnyRegNotConflictingWith: (in category
>>> 'simulation stack') -----
>>> + freeAnyRegNotConflictingWith: regMask
>>> +       "Spill the closest register on the stack that does not conflict
>>> with regMask.
>>> +       Assertion failure if regMask already contains all the registers"
>>> +       <var: #desc type: #'CogSimStackEntry *'>
>>> +       | reg index |
>>> +       index := simSpillBase max: 0.
>>> +       [reg isNil and: [index < simStackPtr] ] whileTrue:
>>> +               [ | desc |
>>> +                desc := self simStackAt: index.
>>> +                desc type = SSRegister ifTrue:
>>> +                       [ (regMask anyMask: (self registerMaskFor: desc
>>> register)) ifFalse:
>>> +                               [ reg := desc register ] ].
>>> +                index := index + 1].
>>> +       self assert: reg notNil.
>>> +       self ssAllocateRequiredReg: reg.
>>> +       ^reg!
>>>
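The mask arithmetic used by freeAnyRegNotConflictingWith: above (registerMaskFor:, anyMask:) is plain one-hot bitset logic. A minimal sketch in Python — names and the list-based stack are illustrative stand-ins, not the VMMaker API, and the actual spill is omitted:

```python
def register_mask_for(reg):
    """One-hot mask for a register index (Smalltalk registerMaskFor:)."""
    return 1 << reg

def any_mask(mask_a, mask_b):
    """True if the two masks share at least one register (anyMask:)."""
    return (mask_a & mask_b) != 0

def free_reg_not_conflicting(stack_regs, reg_mask):
    """Walk the simulated stack upward from the spill base and return the
    first register not present in reg_mask, as the method above does."""
    for reg in stack_regs:
        if not any_mask(reg_mask, register_mask_for(reg)):
            return reg
    raise AssertionError('regMask already contains all the registers')
```

The assertion mirrors the `self assert: reg notNil` guard: the caller must leave at least one register outside the mask.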
>>> Item was removed:
>>> - ----- Method:
>>> StackToRegisterMappingCogit>>freeRegisterNotConflictingWith: (in category
>>> 'simulation stack') -----
>>> - freeRegisterNotConflictingWith: regMask
>>> -       "Spill the closest register on the stack that does not conflict
>>> with regMask.
>>> -       Assertion failure if regMask already contains all the registers"
>>> -       <var: #desc type: #'CogSimStackEntry *'>
>>> -       | reg index |
>>> -       index := simSpillBase max: 0.
>>> -       [reg isNil and: [index < simStackPtr] ] whileTrue:
>>> -               [ | desc |
>>> -                desc := self simStackAt: index.
>>> -                desc type = SSRegister ifTrue:
>>> -                       [ (regMask anyMask: (self registerMaskFor: desc
>>> register)) ifFalse:
>>> -                               [ reg := desc register ] ].
>>> -                index := index + 1].
>>> -       self assert: reg notNil.
>>> -       self ssAllocateRequiredReg: reg.
>>> -       ^reg!
>>>
>>> Item was changed:
>>>   ----- Method:
>>> StackToRegisterMappingCogit>>genBinaryConstOpVarInlinePrimitive: (in
>>> category 'inline primitive generators') -----
>>>   genBinaryConstOpVarInlinePrimitive: prim
>>>         "Const op var version of binary inline primitives."
>>>         "SistaV1: 248           11111000        iiiiiiii
>>> mjjjjjjj                Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1
>>> means inlined primitive, no hard return after execution.
>>>          See EncoderForSistaV1's class comment and
>>> StackInterpreter>>#binaryInlinePrimitive:"
>>>         | ra val untaggedVal adjust |
>>> +       ra := self allocateRegForStackTopEntry.
>>> -       ra := self allocateOneRegister.
>>>         self ssTop popToReg: ra.
>>>         self ssPop: 1.
>>>         val := self ssTop constant.
>>>         self ssPop: 1.
>>>         untaggedVal := val - objectMemory smallIntegerTag.
>>>         prim caseOf: {
>>>                 "0 through 6, +, -, *, /, //, \\, quo:, SmallInteger op
>>> SmallInteger => SmallInteger, no overflow"
>>>                 [0]     ->      [self AddCq: untaggedVal R: ra].
>>>                 [1]     ->      [self MoveCq: val R: TempReg.
>>>                                  self SubR: ra R: TempReg.
>>>                                  objectRepresentation
>>> genAddSmallIntegerTagsTo: TempReg.
>>>                                  self MoveR: TempReg R: ra].
>>>                 [2]     ->      [objectRepresentation
>>> genRemoveSmallIntegerTagsInScratchReg: ra.
>>>                                  self MoveCq: (objectMemory
>>> integerValueOf: val) R: TempReg.
>>>                                  self MulR: TempReg R: ra.
>>>                                  objectRepresentation
>>> genAddSmallIntegerTagsTo: ra].
>>>
>>>                 "2016 through 2019, bitAnd:, bitOr:, bitXor, bitShift:,
>>> SmallInteger op SmallInteger => SmallInteger, no overflow"
>>>
>>>                 "2032   through 2037, >, <, >=, <=, =, ~=, SmallInteger
>>> op SmallInteger => Boolean (flags?? then in jump bytecodes if ssTop is a
>>> flags value, just generate the instruction!!!!)"
>>>                 "CmpCqR is SubRCq so everything is reversed, but because
>>> no CmpRCq things are reversed again and we invert the sense of the jumps."
>>>                 [32] -> [ self CmpCq: val R: ra.
>>>                                 self genBinaryInlineComparison: JumpLess
>>> opFalse: JumpGreaterOrEqual destReg: ra ].
>>>                 [33] -> [ self CmpCq: val R: ra.
>>>                                 self genBinaryInlineComparison:
>>> JumpGreater opFalse: JumpLessOrEqual destReg: ra ].
>>>                 [34] -> [ self CmpCq: val R: ra.
>>>                                 self genBinaryInlineComparison:
>>> JumpLessOrEqual opFalse: JumpGreater destReg: ra ].
>>>                 [35] -> [ self CmpCq: val R: ra.
>>>                                 self genBinaryInlineComparison:
>>> JumpGreaterOrEqual opFalse: JumpLess destReg: ra ].
>>>                 [36] -> [ self CmpCq: val R: ra.
>>>                                 self genBinaryInlineComparison: JumpZero
>>> opFalse: JumpNonZero destReg: ra ].
>>>                 [37] -> [ self CmpCq: val R: ra.
>>>                                 self genBinaryInlineComparison:
>>> JumpNonZero opFalse: JumpZero destReg: ra ].
>>>
>>>                 "2064   through 2068, Pointer Object>>at:, Byte
>>> Object>>at:, Short16 Word Object>>at: LongWord32 Object>>at: Quad64Word
>>> Object>>at:. obj op 0-rel SmallInteger => oop"
>>>                 [64] -> [objectRepresentation
>>> genConvertSmallIntegerToIntegerInReg: ra.
>>>                                 adjust := (objectMemory baseHeaderSize
>>> >> objectMemory shiftForWord) - 1. "shift by baseHeaderSize and then move
>>> from 1 relative to zero relative"
>>>                                 adjust ~= 0 ifTrue: [ self AddCq: adjust
>>> R: ra. ].
>>>                                 self genMoveConstant: val R: TempReg.
>>>                                 self MoveXwr: ra R: TempReg R: ra].
>>>                 [65] -> [objectRepresentation
>>> genConvertSmallIntegerToIntegerInReg: ra.
>>>                                 adjust := objectMemory baseHeaderSize -
>>> 1. "shift by baseHeaderSize and then move from 1 relative to zero relative"
>>>                                 self AddCq: adjust R: ra.
>>>                                 self genMoveConstant: val R: TempReg.
>>>                                 self MoveXbr: ra R: TempReg R: ra.
>>>                                 objectRepresentation
>>> genConvertIntegerToSmallIntegerInReg: ra]
>>>         }
>>>         otherwise: [^EncounteredUnknownBytecode].
>>>         self ssPushRegister: ra.
>>>         ^0!
>>>
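Case [0] above adds `untaggedVal` (i.e. `val - smallIntegerTag`) to the still-tagged register in a single AddCq: subtracting the tag from one operand leaves the sum correctly tagged. A sketch of why that works, assuming a 1-bit low tag (oop = value << 1 | 1) — the tag scheme here is an assumption for illustration:

```python
TAG = 1  # assumed: SmallInteger oop = (value << 1) | 1

def tag(n):
    return (n << 1) | TAG

def untag(oop):
    return oop >> 1

def const_plus_var(const_oop, var_oop):
    # untaggedVal := val - objectMemory smallIntegerTag
    untagged_val = const_oop - TAG
    # self AddCq: untaggedVal R: ra  -- one instruction, result stays tagged
    return var_oop + untagged_val
```

Since tag(a) + tag(b) - TAG = ((a + b) << 1) | 1, no retagging step is needed after the add.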
>>> Item was changed:
>>>   ----- Method:
>>> StackToRegisterMappingCogit>>genBinaryVarOpConstInlinePrimitive: (in
>>> category 'inline primitive generators') -----
>>>   genBinaryVarOpConstInlinePrimitive: prim
>>>         "Var op const version of binary inline primitives."
>>>         "SistaV1: 248           11111000        iiiiiiii
>>> mjjjjjjj                Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1
>>> means inlined primitive, no hard return after execution.
>>>          See EncoderForSistaV1's class comment and
>>> StackInterpreter>>#binaryInlinePrimitive:"
>>>         | rr val untaggedVal |
>>>         val := self ssTop constant.
>>>         self ssPop: 1.
>>> +       rr := self allocateRegForStackTopEntry.
>>> -       rr := self allocateOneRegister.
>>>         self ssTop popToReg: rr.
>>>         self ssPop: 1.
>>>         untaggedVal := val - objectMemory smallIntegerTag.
>>>         prim caseOf: {
>>>                 "0 through 6, +, -, *, /, //, \\, quo:, SmallInteger op
>>> SmallInteger => SmallInteger, no overflow"
>>>                 [0]     ->      [self AddCq: untaggedVal R: rr].
>>>                 [1]     ->      [self SubCq: untaggedVal R: rr ].
>>>                 [2]     ->      [self flag: 'could use MulCq:R'.
>>>                                  objectRepresentation
>>> genShiftAwaySmallIntegerTagsInScratchReg: rr.
>>>                                  self MoveCq: (objectMemory
>>> integerValueOf: val) R: TempReg.
>>>                                  self MulR: TempReg R: rr.
>>>                                  objectRepresentation
>>> genAddSmallIntegerTagsTo: rr].
>>>
>>>                 "2016 through 2019, bitAnd:, bitOr:, bitXor, bitShift:,
>>> SmallInteger op SmallInteger => SmallInteger, no overflow"
>>>
>>>                 "2032   through 2037, >, <, >=, <=, =, ~=, SmallInteger
>>> op SmallInteger => Boolean (flags?? then in jump bytecodes if ssTop is a
>>> flags value, just generate the instruction!!!!)"
>>>                 "CmpCqR is SubRCq so everything is reversed."
>>>                 [32] -> [ self CmpCq: val R: rr.
>>>                                 self genBinaryInlineComparison:
>>> JumpGreater opFalse: JumpLessOrEqual destReg: rr ].
>>>                 [33] -> [ self CmpCq: val R: rr.
>>>                                 self genBinaryInlineComparison: JumpLess
>>> opFalse: JumpGreaterOrEqual destReg: rr ].
>>>                 [34] -> [ self CmpCq: val R: rr.
>>>                                 self genBinaryInlineComparison:
>>> JumpGreaterOrEqual opFalse: JumpLess destReg: rr ].
>>>                 [35] -> [ self CmpCq: val R: rr.
>>>                                 self genBinaryInlineComparison:
>>> JumpLessOrEqual opFalse: JumpGreater destReg: rr ].
>>>                 [36] -> [ self CmpCq: val R: rr.
>>>                                 self genBinaryInlineComparison: JumpZero
>>> opFalse: JumpNonZero destReg: rr ].
>>>                 [37] -> [ self CmpCq: val R: rr.
>>>                                 self genBinaryInlineComparison:
>>> JumpNonZero opFalse: JumpZero destReg: rr ].
>>>
>>>                 "2064   through 2068, Pointer Object>>at:, Byte
>>> Object>>at:, Short16 Word Object>>at: LongWord32 Object>>at: Quad64Word
>>> Object>>at:. obj op 0-rel SmallInteger => oop"
>>>                 [64] -> [objectRepresentation genLoadSlot: (objectMemory
>>> integerValueOf: val) - 1 sourceReg: rr destReg: rr].
>>>                 [65] -> [self MoveCq: (objectMemory integerValueOf: val)
>>> + objectMemory baseHeaderSize - 1 R: TempReg.
>>>                                 self MoveXbr: TempReg R: rr R: rr.
>>>                                 objectRepresentation
>>> genConvertIntegerToSmallIntegerInReg: rr]
>>>
>>>         }
>>>         otherwise: [^EncounteredUnknownBytecode].
>>>         self ssPushRegister: rr.
>>>         ^0!
>>>
>>> Item was changed:
>>>   ----- Method:
>>> StackToRegisterMappingCogit>>genBinaryVarOpVarInlinePrimitive: (in category
>>> 'inline primitive generators') -----
>>>   genBinaryVarOpVarInlinePrimitive: prim
>>>         "Var op var version of binary inline primitives."
>>>         "SistaV1: 248           11111000        iiiiiiii
>>> mjjjjjjj                Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1
>>> means inlined primitive, no hard return after execution.
>>>          See EncoderForSistaV1's class comment and
>>> StackInterpreter>>#binaryInlinePrimitive:"
>>>         | ra rr adjust |
>>> +       self allocateRegForStackTopTwoEntriesInto: [:rTop :rNext | ra :=
>>> rTop. rr := rNext ].
>>> -       self allocateTwoRegistersInto: [:rTop :rNext | ra := rTop. rr :=
>>> rNext ].
>>>         self ssTop popToReg: ra.
>>>         self ssPop: 1.
>>>         self ssTop popToReg: rr.
>>>         self ssPop: 1.
>>>         prim caseOf: {
>>>                 "0 through 6, +, -, *, /, //, \\, quo:, SmallInteger op
>>> SmallInteger => SmallInteger, no overflow"
>>>                 [0]     ->      [objectRepresentation
>>> genRemoveSmallIntegerTagsInScratchReg: ra.
>>>                                  self AddR: ra R: rr].
>>>                 [1]     ->      [self SubR: ra R: rr.
>>>                                  objectRepresentation
>>> genAddSmallIntegerTagsTo: rr].
>>>                 [2]     ->      [objectRepresentation
>>> genRemoveSmallIntegerTagsInScratchReg: rr.
>>>                                  objectRepresentation
>>> genShiftAwaySmallIntegerTagsInScratchReg: ra.
>>>                                  self MulR: ra R: rr.
>>>                                  objectRepresentation
>>> genAddSmallIntegerTagsTo: rr].
>>>
>>>                 "2016 through 2019, bitAnd:, bitOr:, bitXor, bitShift:,
>>> SmallInteger op SmallInteger => SmallInteger, no overflow"
>>>
>>>                 "2032   through 2037, >, <, >=, <=, =, ~=, SmallInteger
>>> op SmallInteger => Boolean (flags?? then in jump bytecodes if ssTop is a
>>> flags value, just generate the instruction!!!!)"
>>>                 "CmpCqR is SubRCq so everything is reversed."
>>>                 [32] -> [ self CmpR: ra R: rr.
>>>                                 self genBinaryInlineComparison:
>>> JumpGreater opFalse: JumpLessOrEqual destReg: rr ].
>>>                 [33] -> [ self CmpR: ra R: rr.
>>>                                 self genBinaryInlineComparison: JumpLess
>>> opFalse: JumpGreaterOrEqual destReg: rr ].
>>>                 [34] -> [ self CmpR: ra R: rr.
>>>                                 self genBinaryInlineComparison:
>>> JumpGreaterOrEqual opFalse: JumpLess destReg: rr ].
>>>                 [35] -> [ self CmpR: ra R: rr.
>>>                                 self genBinaryInlineComparison:
>>> JumpLessOrEqual opFalse: JumpGreater destReg: rr ].
>>>                 [36] -> [ self CmpR: ra R: rr.
>>>                                 self genBinaryInlineComparison: JumpZero
>>> opFalse: JumpNonZero destReg: rr ].
>>>                 [37] -> [ self CmpR: ra R: rr.
>>>                                 self genBinaryInlineComparison:
>>> JumpNonZero opFalse: JumpZero destReg: rr ].
>>>
>>>                 "2064   through 2068, Pointer Object>>at:, Byte
>>> Object>>at:, Short16 Word Object>>at: LongWord32 Object>>at: Quad64Word
>>> Object>>at:. obj op 0-rel SmallInteger => oop"
>>>                 [64] -> [objectRepresentation
>>> genConvertSmallIntegerToIntegerInReg: ra.
>>>                                 adjust := (objectMemory baseHeaderSize
>>> >> objectMemory shiftForWord) - 1. "shift by baseHeaderSize and then move
>>> from 1 relative to zero relative"
>>>                                 adjust ~= 0 ifTrue: [ self AddCq: adjust
>>> R: ra. ].
>>>                                 self MoveXwr: ra R: rr R: rr ].
>>>                 [65] -> [objectRepresentation
>>> genConvertSmallIntegerToIntegerInReg: ra.
>>>                                 adjust := objectMemory baseHeaderSize -
>>> 1. "shift by baseHeaderSize and then move from 1 relative to zero relative"
>>>                                 self AddCq: adjust R: ra.
>>>                                 self MoveXbr: ra R: rr R: rr.
>>>                                 objectRepresentation
>>> genConvertIntegerToSmallIntegerInReg: rr]
>>>
>>>         }
>>>         otherwise: [^EncounteredUnknownBytecode].
>>>         self ssPushRegister: rr.
>>>         ^0!
>>>
>>> Item was changed:
>>>   ----- Method: StackToRegisterMappingCogit>>genPushLiteralVariable: (in
>>> category 'bytecode generator support') -----
>>>   genPushLiteralVariable: literalIndex
>>>         <inline: false>
>>>         | association freeReg |
>>> +       freeReg := self allocateAnyReg.
>>> -       freeReg := self ssAllocatePreferredReg: ClassReg.
>>>         association := self getLiteral: literalIndex.
>>>         "N.B. Do _not_ use ReceiverResultReg to avoid overwriting
>>> receiver in assignment in frameless methods."
>>>         "So far descriptors are not rich enough to describe the entire
>>> dereference so generate the register
>>>          load but don't push the result.  There is an
>>> order-of-evaluation issue if we defer the dereference."
>>>         self genMoveConstant: association R: TempReg.
>>>         objectRepresentation
>>>                 genEnsureObjInRegNotForwarded: TempReg
>>>                 scratchReg: freeReg.
>>>         objectRepresentation
>>>                 genLoadSlot: ValueIndex
>>>                 sourceReg: TempReg
>>>                 destReg: freeReg.
>>>         self ssPushRegister: freeReg.
>>>         ^0!
>>>
>>> Item was changed:
>>>   ----- Method:
>>> StackToRegisterMappingCogit>>genSpecialSelectorEqualsEqualsWithForwarders
>>> (in category 'bytecode generators') -----
>>>   genSpecialSelectorEqualsEqualsWithForwarders
>>>         | primDescriptor nextPC nExts branchDescriptor unforwardRcvr
>>> argReg targetBytecodePC
>>>         unforwardArg  rcvrReg jumpNotEqual jumpEqual postBranchPC label
>>> fixup |
>>>         <var: #fixup type: #'BytecodeFixup *'>
>>>         <var: #jumpEqual type: #'AbstractInstruction *'>
>>>         <var: #jumpNotEqual type: #'AbstractInstruction *'>
>>>         <var: #primDescriptor type: #'BytecodeDescriptor *'>
>>>         <var: #branchDescriptor type: #'BytecodeDescriptor *'>
>>>
>>>         primDescriptor := self generatorAt: byte0.
>>>
>>>         nextPC := bytecodePC + primDescriptor numBytes.
>>>         nExts := 0.
>>>         [branchDescriptor := self generatorAt: (objectMemory fetchByte:
>>> nextPC ofObject: methodObj) + bytecodeSetOffset.
>>>          branchDescriptor isExtension] whileTrue:
>>>                 [nExts := nExts + 1.
>>>                  nextPC := nextPC + branchDescriptor numBytes].
>>>         "If branching the stack must be flushed for the merge"
>>>         (branchDescriptor isBranchTrue or: [branchDescriptor
>>> isBranchFalse]) ifTrue:
>>>                 [self ssFlushTo: simStackPtr - 2].
>>>
>>>         unforwardRcvr := (objectRepresentation isUnannotatableConstant:
>>> (self ssValue: 1)) not.
>>>         unforwardArg := (objectRepresentation isUnannotatableConstant:
>>> self ssTop) not.
>>>
>>>         "If the rcvr or the arg is an annotatable constant, we need to
>>> push it to a register;
>>>         otherwise the forwarder check can't jump back to the comparison
>>> after unforwarding the constant."
>>>         unforwardArg
>>>                 ifTrue:
>>>                         [unforwardRcvr
>>>                                 ifTrue:
>>> +                                       [self
>>> allocateRegForStackTopTwoEntriesInto: [:rTop :rNext| argReg := rTop.
>>> rcvrReg := rNext].
>>> -                                       [self allocateTwoRegistersInto:
>>> [:rTop :rNext| argReg := rTop. rcvrReg := rNext].
>>>                                          self ssTop popToReg: argReg.
>>>                                          (self ssValue:1) popToReg:
>>> rcvrReg]
>>>                                 ifFalse:
>>> +                                       [argReg := self
>>> allocateRegForStackTopEntry.
>>> -                                       [argReg := self
>>> allocateOneRegister.
>>>                                          self ssTop popToReg: argReg]]
>>>                 ifFalse:
>>>                         [self assert: unforwardRcvr.
>>> +                        rcvrReg := self allocateRegForStackEntryAt: 1.
>>> -                        rcvrReg := self allocateOneRegister.
>>>                          (self ssValue:1) popToReg: rcvrReg].
>>>
>>>         label := self Label.
>>>
>>>         "Here we can use Cq because the constant does not need to be
>>> annotated"
>>>         self assert: (unforwardArg not or: [argReg notNil]).
>>>         self assert: (unforwardRcvr not or: [rcvrReg notNil]).
>>>         unforwardArg
>>>                 ifFalse: [ self CmpCq: self ssTop constant R: rcvrReg ]
>>>                 ifTrue: [ unforwardRcvr
>>>                         ifFalse: [ self CmpCq: (self ssValue: 1)
>>> constant R: argReg ]
>>>                         ifTrue: [ self CmpR: argReg R: rcvrReg ] ].
>>>
>>>         self ssPop: 2.
>>>
>>>         "If not followed by a branch, resolve to true or false."
>>>         (branchDescriptor isBranchTrue or: [branchDescriptor
>>> isBranchFalse]) ifFalse:
>>>                 [jumpEqual := self JumpZero: 0.
>>>                  unforwardArg ifTrue: [ objectRepresentation
>>> genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: label
>>> ].
>>>                  unforwardRcvr ifTrue: [ objectRepresentation
>>> genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg jumpBackTo:
>>> label ].
>>>                  self genMoveFalseR: rcvrReg.
>>>                  jumpNotEqual := self Jump: 0.
>>>                  jumpEqual jmpTarget: (self genMoveTrueR: rcvrReg).
>>>                  jumpNotEqual jmpTarget: self Label.
>>>                  self ssPushRegister: rcvrReg.
>>>                  ^0].
>>>
>>>         "Further since there is a following conditional jump bytecode,
>>> define
>>>          non-merge fixups and leave the cond bytecode to set the
>>> mergeness."
>>>         targetBytecodePC := nextPC
>>>                                                         +
>>> branchDescriptor numBytes
>>>                                                         + (self spanFor:
>>> branchDescriptor at: nextPC exts: nExts in: methodObj).
>>>         postBranchPC := nextPC + branchDescriptor numBytes.
>>>         (self fixupAt: nextPC - initialPC) targetInstruction = 0
>>>                 ifTrue: "The next instruction is dead; we can skip it."
>>>                         [deadCode := true.
>>>                          self ensureFixupAt: targetBytecodePC -
>>> initialPC.
>>>                          self ensureFixupAt: postBranchPC - initialPC]
>>>                 ifFalse:
>>>                         [self ssPushConstant: objectMemory trueObject].
>>> "dummy value"
>>>
>>>         self assert: (unforwardArg or: [ unforwardRcvr ]).
>>>         branchDescriptor isBranchTrue ifTrue:
>>>                 [ deadCode ifFalse: [ fixup := self
>>> ensureNonMergeFixupAt: postBranchPC - initialPC ].
>>>                 self JumpZero:  (self ensureNonMergeFixupAt:
>>> targetBytecodePC - initialPC) asUnsignedInteger.
>>>                 unforwardArg ifTrue: [ (deadCode or: [ unforwardRcvr ])
>>>                         ifTrue: [ objectRepresentation
>>> genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: label
>>> ]
>>>                         ifFalse: [ objectRepresentation
>>>                                 genEnsureOopInRegNotForwarded: argReg
>>>                                 scratchReg: TempReg
>>>                                 ifForwarder: label
>>>                                 ifNotForwarder: fixup ] ].
>>>                 unforwardRcvr ifTrue: [ deadCode
>>>                         ifTrue: [objectRepresentation
>>> genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg jumpBackTo:
>>> label ]
>>>                         ifFalse: [objectRepresentation
>>>                                 genEnsureOopInRegNotForwarded: rcvrReg
>>>                                 scratchReg: TempReg
>>>                                 ifForwarder: label
>>>                                 ifNotForwarder: fixup ] ] ].
>>>         branchDescriptor isBranchFalse ifTrue:
>>>                 [ fixup := self ensureNonMergeFixupAt: targetBytecodePC
>>> - initialPC.
>>>                 self JumpZero: (self ensureNonMergeFixupAt: postBranchPC
>>> - initialPC) asUnsignedInteger.
>>>                 unforwardArg ifTrue: [ unforwardRcvr
>>>                         ifFalse: [objectRepresentation
>>>                                 genEnsureOopInRegNotForwarded: argReg
>>>                                 scratchReg: TempReg
>>>                                 ifForwarder: label
>>>                                 ifNotForwarder: fixup ]
>>>                         ifTrue: [ objectRepresentation
>>> genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: label
>>> ] ].
>>>                 unforwardRcvr ifTrue:
>>>                         [ objectRepresentation
>>>                                 genEnsureOopInRegNotForwarded: rcvrReg
>>>                                 scratchReg: TempReg
>>>                                 ifForwarder: label
>>>                                 ifNotForwarder: fixup ].
>>>                 "Not reached"].
>>>         ^0!
>>>
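The control flow generated above for #== with forwarders amounts to: compare identities, and if the comparison fails, follow any forwarding pointers and jump back to `label` to retry. A hypothetical Python model of that retry loop — `Forwarder` and these helpers are illustrative stand-ins for Spur forwarding objects, not VMMaker code:

```python
class Forwarder:
    """Stand-in for a Spur forwarding object pointing at the real object."""
    def __init__(self, target):
        self.target = target

def unforward(oop):
    # genEnsureOopInRegNotForwarded:...: chase forwarding pointers.
    while isinstance(oop, Forwarder):
        oop = oop.target
    return oop

def identical(rcvr, arg):
    """Compare; on failure unforward both operands and, if either changed,
    jump back to the comparison (the `label` in the generated code)."""
    while True:
        if rcvr is arg:
            return True
        r2, a2 = unforward(rcvr), unforward(arg)
        if r2 is rcvr and a2 is arg:
            return False  # genuinely different objects
        rcvr, arg = r2, a2
```

This is why an annotatable constant must be loaded into a register first: the retry jump needs a register operand to re-read after unforwarding.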
>>> Item was changed:
>>>   ----- Method: StackToRegisterMappingCogit>>genTrinaryInlinePrimitive:
>>> (in category 'inline primitive generators') -----
>>>   genTrinaryInlinePrimitive: prim
>>>         "Trinary inline primitives."
>>>         "SistaV1: 248           11111000        iiiiiiii
>>> mjjjjjjj                Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1
>>> means inlined primitive, no hard return after execution.
>>>          See EncoderForSistaV1's class comment and
>>> StackInterpreter>>#trinaryInlinePrimitive:"
>>>
>>>         | ra1 ra2 rr adjust |
>>>         "The store check requires rr to be ReceiverResultReg"
>>> +       self
>>> +               allocateRegForStackTopThreeEntriesInto: [:rTop :rNext
>>> :rThird | ra2 := rTop. ra1 := rNext. rr := rThird ]
>>> +               thirdIsReceiver: prim = 0.
>>> -       self allocateThreeRegistersInto: [:rTop :rNext :rThird | ra2 :=
>>> rTop. ra1 := rNext. rr := rThird ] thirdIsReceiver: prim = 0.
>>>         self assert: (rr ~= ra1 and: [rr ~= ra2 and: [ra1 ~= ra2]]).
>>>         self ssTop popToReg: ra2.
>>>         self ssPop: 1.
>>>         self ssTop popToReg: ra1.
>>>         self ssPop: 1.
>>>         self ssTop popToReg: rr.
>>>         self ssPop: 1.
>>>         objectRepresentation genConvertSmallIntegerToIntegerInReg: ra1.
>>>         "Now: rr is the receiver object, ra1 is the untagged index and
>>> ra2 holds the value to store."
>>>         prim caseOf: {
>>>                 "0 - 1 pointerAt:put: and byteAt:Put:"
>>>                 [0] ->  [ adjust := (objectMemory baseHeaderSize >>
>>> objectMemory shiftForWord) - 1. "shift by baseHeaderSize and then move from
>>> 1 relative to zero relative"
>>>                                 adjust ~= 0 ifTrue: [ self AddCq: adjust
>>> R: ra1. ].
>>>                                 self MoveR: ra2 Xwr: ra1 R: rr.
>>>                                 objectRepresentation
>>> genStoreCheckReceiverReg: rr valueReg: ra2 scratchReg: TempReg inFrame:
>>> true].
>>>                 [1] ->  [ objectRepresentation
>>> genConvertSmallIntegerToIntegerInReg: ra2.
>>>                                 adjust := objectMemory baseHeaderSize -
>>> 1. "shift by baseHeaderSize and then move from 1 relative to zero relative"
>>>                                 self AddCq: adjust R: ra1.
>>>                                 self MoveR: ra2 Xbr: ra1 R: rr.
>>>                                 objectRepresentation
>>> genConvertIntegerToSmallIntegerInReg: ra2. ]
>>>         }
>>>         otherwise: [^EncounteredUnknownBytecode].
>>>         self ssPushRegister: ra2.
>>>         ^0!
>>>
>>> Item was changed:
>>>   ----- Method: StackToRegisterMappingCogit>>genUnaryInlinePrimitive:
>>> (in category 'inline primitive generators') -----
>>>   genUnaryInlinePrimitive: prim
>>>         "Unary inline primitives."
>>>         "SistaV1: 248           11111000        iiiiiiii
>>> mjjjjjjj                Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1
>>> means inlined primitive, no hard return after execution.
>>>          See EncoderForSistaV1's class comment and
>>> StackInterpreter>>#unaryInlinePrimitive:"
>>>         | rcvrReg resultReg |
>>> +       rcvrReg := self allocateRegForStackTopEntry.
>>> +       resultReg := self allocateRegNotConflictingWith: (self
>>> registerMaskFor: rcvrReg).
>>> -       rcvrReg := self allocateOneRegister.
>>> -       resultReg := self allocateRegisterNotConflictingWith: (self
>>> registerMaskFor: rcvrReg).
>>>         self ssTop popToReg: rcvrReg.
>>>         self ssPop: 1.
>>>         prim
>>>                 caseOf: {
>>>                                         "00             unchecked class"
>>>                         [1] ->  "01             unchecked pointer
>>> numSlots"
>>>                                 [objectRepresentation
>>>                                         genGetNumSlotsOf: rcvrReg into:
>>> resultReg;
>>>
>>> genConvertIntegerToSmallIntegerInReg: resultReg].
>>>                                         "02             unchecked
>>> pointer basicSize"
>>>                         [3] ->  "03             unchecked byte numBytes"
>>>                                 [objectRepresentation
>>>                                         genGetNumBytesOf: rcvrReg into:
>>> resultReg;
>>>
>>> genConvertIntegerToSmallIntegerInReg: resultReg].
>>>                                         "04             unchecked
>>> short16Type format numShorts"
>>>                                         "05             unchecked
>>> word32Type format numWords"
>>>                                         "06             unchecked
>>> doubleWord64Type format numDoubleWords"
>>>                                   }
>>>                 otherwise:
>>>                         [^EncounteredUnknownBytecode].
>>>         self ssPushRegister: resultReg.
>>>         ^0!
>>>
>>> Item was changed:
>>>   ----- Method:
>>> StackToRegisterMappingCogit>>genVanillaSpecialSelectorEqualsEquals (in
>>> category 'bytecode generators') -----
>>>   genVanillaSpecialSelectorEqualsEquals
>>>         | nextPC postBranchPC targetBytecodePC primDescriptor
>>> branchDescriptor nExts
>>>           jumpEqual jumpNotEqual rcvrReg argReg argIsConstant
>>> rcvrIsConstant  |
>>>         <var: #jumpEqual type: #'AbstractInstruction *'>
>>>         <var: #jumpNotEqual type: #'AbstractInstruction *'>
>>>         <var: #primDescriptor type: #'BytecodeDescriptor *'>
>>>         <var: #branchDescriptor type: #'BytecodeDescriptor *'>
>>>         primDescriptor := self generatorAt: byte0.
>>>
>>>         nextPC := bytecodePC + primDescriptor numBytes.
>>>         nExts := 0.
>>>         [branchDescriptor := self generatorAt: (objectMemory fetchByte: nextPC ofObject: methodObj) + bytecodeSetOffset.
>>>          branchDescriptor isExtension] whileTrue:
>>>                 [nExts := nExts + 1.
>>>                  nextPC := nextPC + branchDescriptor numBytes].
>>>         "If branching the stack must be flushed for the merge"
>>>         (branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifTrue:
>>>                 [self ssFlushTo: simStackPtr - 2].
>>>
>>>         "Don't use ReceiverResultReg for receiver to keep ReceiverResultReg live.
>>>          Optimize e.g. rcvr == nil, the common case for ifNil: et al."
>>>
>>>         argIsConstant := self ssTop type = SSConstant.
>>>         rcvrIsConstant := argIsConstant not and: [(self ssValue:1) type = SSConstant].
>>>
>>>         argIsConstant
>>>                 ifFalse:
>>>                         [rcvrIsConstant
>>>                                 ifFalse:
>>> +                                       [self allocateRegForStackTopTwoEntriesInto: [:rTop :rNext| argReg := rTop. rcvrReg := rNext].
>>> -                                       [self allocateTwoRegistersInto: [:rTop :rNext| argReg := rTop. rcvrReg := rNext].
>>>                                          self ssTop popToReg: argReg.
>>>                                          (self ssValue:1) popToReg: rcvrReg]
>>>                                 ifTrue:
>>> +                                       [argReg := self allocateRegForStackTopEntry.
>>> -                                       [argReg := self allocateOneRegister.
>>>                                          self ssTop popToReg: argReg]]
>>>                 ifTrue:
>>>                         [self assert: rcvrIsConstant not.
>>> +                        rcvrReg := self allocateRegForStackEntryAt: 1.
>>> -                        rcvrReg := self allocateOneRegister.
>>>                          (self ssValue:1) popToReg: rcvrReg].
>>>
>>>         argIsConstant
>>>                 ifTrue: [ self genCompConstant: self ssTop constant R: rcvrReg ]
>>>                 ifFalse: [ rcvrIsConstant
>>>                         ifTrue: [ self genCompConstant: (self ssValue: 1) constant R: argReg ]
>>>                         ifFalse: [ self CmpR: argReg R: rcvrReg ] ].
>>>
>>>         self ssPop: 2.
>>>
>>>         "If not followed by a branch, resolve to true or false."
>>>         (branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:
>>>                 [jumpNotEqual := self JumpNonZero: 0.
>>>                  self genMoveTrueR: rcvrReg.
>>>                  jumpEqual := self Jump: 0.
>>>                  jumpNotEqual jmpTarget: (self genMoveFalseR: rcvrReg).
>>>                  jumpEqual jmpTarget: self Label.
>>>                  self ssPushRegister: rcvrReg.
>>>                  ^0].
>>>
>>>         "Further since there is a following conditional jump bytecode, define
>>>          non-merge fixups and leave the cond bytecode to set the mergeness."
>>>         targetBytecodePC := nextPC
>>>                                 + branchDescriptor numBytes
>>>                                 + (self spanFor: branchDescriptor at: nextPC exts: nExts in: methodObj).
>>>         postBranchPC := nextPC + branchDescriptor numBytes.
>>>         (self fixupAt: nextPC - initialPC) targetInstruction = 0
>>>                 ifTrue: "The next instruction is dead. We can skip it."
>>>                         [deadCode := true.
>>>                          self ensureFixupAt: targetBytecodePC - initialPC.
>>>                          self ensureFixupAt: postBranchPC - initialPC]
>>>                 ifFalse:
>>>                         [self ssPushConstant: objectMemory trueObject]. "dummy value"
>>>         self gen: (branchDescriptor isBranchTrue ifTrue: [JumpZero] ifFalse: [JumpNonZero])
>>>                 operand: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.
>>>         deadCode ifFalse: [self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC)].
>>>         ^0!
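The key trick in this generator is peeking at the descriptor of the bytecode that follows the `#==`: if it is a conditional branch, the comparison jumps straight to the branch's fixup instead of materializing `true`/`false` in a register. A hypothetical sketch of that fusion decision in Python (the bytecode names, `emit` interface, and instruction strings are illustrative assumptions, not the Cogit's real API):

```python
def compile_equals_equals(bytecodes, pc, emit):
    """Compile #== at pc, fusing it with an immediately following
    conditional branch when one is present (sketch, not real Cogit code)."""
    following = bytecodes[pc + 1] if pc + 1 < len(bytecodes) else None
    emit("CmpR arg rcvr")                       # compare the two operands
    if following in ("jumpTrue", "jumpFalse"):
        # Fused case: branch directly on the comparison's condition flags.
        emit("JumpZero target" if following == "jumpTrue" else "JumpNonZero target")
        return pc + 2                           # both bytecodes consumed
    # Unfused case: reify the boolean result into a register.
    emit("JumpNonZero Lfalse")
    emit("MoveTrueR result")
    emit("Jump Ldone")
    emit("Lfalse: MoveFalseR result")
    emit("Ldone: nop")
    return pc + 1
```

The fused path emits two instructions where the unfused path needs six, which is exactly the payoff of deferring to the following branch bytecode.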
>>>
>>> Item was removed:
>>> - ----- Method: StackToRegisterMappingCogit>>ssAllocatePreferredReg: (in category 'simulation stack') -----
>>> - ssAllocatePreferredReg: preferredReg
>>> -       | preferredMask lastPreferred liveRegs |
>>> -       lastPreferred := -1.
>>> -       "compute live regs while noting the last occurrence of preferredReg.
>>> -        If there are none free we must spill from simSpillBase to last occurrence."
>>> -       preferredMask := self registerMaskFor: preferredReg.
>>> -       liveRegs := self registerMaskFor: TempReg and: FPReg and: SPReg.
>>> -       (simSpillBase max: 0) to: simStackPtr do:
>>> -               [:i|
>>> -               liveRegs := liveRegs bitOr: (self simStackAt: i) registerMask.
>>> -               (liveRegs bitAnd: preferredMask) ~= 0 ifTrue:
>>> -                       [lastPreferred := i]].
>>> -       "If preferredReg is not live we can allocate it."
>>> -       (self register: preferredReg isInMask: liveRegs) ifFalse:
>>> -               [^preferredReg].
>>> -       "If any other is not live we can allocate it."
>>> -       GPRegMin to: GPRegMax do:
>>> -               [:reg|
>>> -               (self register: reg isInMask: liveRegs) ifFalse:
>>> -                       [^reg]].
>>> -       "All live, must spill"
>>> -       self ssFlushTo: lastPreferred.
>>> -       self assert: (self liveRegisters bitAnd: preferredMask) = 0.
>>> -       ^preferredReg!
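The removed ssAllocatePreferredReg: embodies a simple policy: hand out the preferred register if nothing on the simulated stack keeps it live, otherwise any free general-purpose register, otherwise spill up to the last stack slot holding the preferred register and then return it. A minimal Python sketch of that policy (register numbering, the mask representation, and the `flush_to` callback are illustrative assumptions standing in for the Cogit's ssFlushTo: machinery):

```python
def allocate_preferred_reg(preferred, sim_stack, spill_base, reserved_mask,
                           gp_regs, flush_to):
    """Sketch of the ssAllocatePreferredReg: allocation policy.
    sim_stack holds one register mask per simulated stack entry."""
    preferred_mask = 1 << preferred
    live = reserved_mask                # TempReg/FPReg/SPReg are always live
    last_preferred = -1
    for i in range(max(spill_base, 0), len(sim_stack)):
        live |= sim_stack[i]            # accumulate each entry's register mask
        if live & preferred_mask:
            last_preferred = i          # slot index keeping preferred alive
    if not (live & preferred_mask):
        return preferred                # preferred register is free
    for reg in gp_regs:                 # any other free general-purpose register
        if not (live & (1 << reg)):
            return reg
    flush_to(last_preferred)            # all live: spill, freeing preferred
    return preferred
```

Note that because the live mask is cumulative, once the preferred register appears it stays set, so the spill point tracks the deepest slot scanned after its first occurrence, mirroring the Smalltalk loop above.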
>>>



-- 
best,
Eliot