<div dir="ltr">Eliot, here's a good example to stress the register allocation:<div><br></div><div><div>Integer>>#regStress</div><div><span class="" style="white-space:pre">	</span>| t t2 |</div><div><span class="" style="white-space:pre">	</span>t := self yourself.</div><div><span class="" style="white-space:pre">	</span>t2 := self + 1.</div><div><span class="" style="white-space:pre">	</span>^ { t == t2 . t == t2 . t == t2 . t == t2 . t == t2 . t == t2 . t == t2 }</div></div><div><br></div><div>I think the resulting machine-code method is beautiful; it needs to spill only when it runs out of registers :-). Of course this mainly matters when you use inlined bytecodes rather than only #==.</div><div><br></div><div>Some extra register moves are generated because the JIT does not remember whether a temporary's value is currently in a register (it moves the temp t to the same register each time even though the register's value has not changed). Maybe we should add a feature that remembers whether a temporary is currently in a register; if so, a push temp would push the register directly onto the simStack, and a temporary store or stack flush would invalidate the register associated with that temp...<br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">2015-04-22 10:08 GMT+02:00 <span dir="ltr"><<a href="mailto:commits@source.squeak.org" target="_blank">commits@source.squeak.org</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
Eliot Miranda uploaded a new version of VMMaker to project VM Maker:<br>
<a href="http://source.squeak.org/VMMaker/VMMaker.oscog-cb.1236.mcz" target="_blank">http://source.squeak.org/VMMaker/VMMaker.oscog-cb.1236.mcz</a><br>
<br>
==================== Summary ====================<br>
<br>
Name: VMMaker.oscog-cb.1236<br>
Author: cb<br>
Time: 22 April 2015, 10:07:41.028 am<br>
UUID: cf4af270-9dfa-4f2a-b84c-0cdb3c5f4913<br>
Ancestors: VMMaker.oscog-tpr.1235<br>
<br>
Renamed the register allocation methods to more explicit names, for instance, allocateOneReg -> allocateRegForStackTop<br>
<br>
Fixed a bug where a suboptimal register was allocated in #== (a register was allocated based on the stackTop value instead of ssValue: 1)<br>
<br>
=============== Diff against VMMaker.oscog-tpr.1235 ===============<br>
<br>
Item was changed:<br>
----- Method: SistaStackToRegisterMappingCogit>>genExtJumpIfNotInstanceOfBehaviorsOrPopBytecode (in category 'bytecode generators') -----<br>
genExtJumpIfNotInstanceOfBehaviorsOrPopBytecode<br>
"SistaV1: * 254 11111110 kkkkkkkk jjjjjjjj branch If Not Instance Of Behavior/Array Of Behavior kkkkkkkk (+ Extend A * 256, where Extend A >= 0) distance jjjjjjjj (+ Extend B * 256, where Extend B >= 0)"<br>
<br>
| reg literal distance targetFixUp |<br>
<br>
+ reg := self allocateRegForStackTopEntry.<br>
- reg := self allocateOneRegister.<br>
self ssTop popToReg: reg.<br>
<br>
literal := self getLiteral: (extA * 256 + byte1).<br>
extA := 0.<br>
distance := extB * 256 + byte2.<br>
extB := 0.<br>
<br>
targetFixUp := (self ensureFixupAt: bytecodePC + 3 + distance - initialPC) asUnsignedInteger.<br>
<br>
(objectMemory isArrayNonImm: literal)<br>
ifTrue: [objectRepresentation branchIf: reg notInstanceOfBehaviors: literal target: targetFixUp]<br>
ifFalse: [objectRepresentation branchIf: reg notInstanceOfBehavior: literal target: targetFixUp].<br>
<br>
self genPopStackBytecode.<br>
<br>
^0!<br>
<br>
Item was changed:<br>
----- Method: SistaStackToRegisterMappingCogit>>genJumpIf:to: (in category 'bytecode generator support') -----<br>
genJumpIf: boolean to: targetBytecodePC<br>
"The heart of performance counting in Sista. Conditional branches are 6 times less<br>
frequent than sends and can provide basic block frequencies (send counters can't).<br>
Each conditional has a 32-bit counter split into an upper 16 bits counting executions<br>
and a lower half counting untaken executions of the branch. Executing the branch<br>
decrements the upper half, tripping if the count goes negative. Not taking the branch<br>
decrements the lower half. N.B. We *do not* eliminate dead branches (true ifTrue:/true ifFalse:)<br>
so that scanning for send and branch data is simplified and that branch data is correct."<br>
<inline: false><br>
| desc ok counterAddress countTripped retry counterReg |<br>
<var: #ok type: #'AbstractInstruction *'><br>
<var: #desc type: #'CogSimStackEntry *'><br>
<var: #retry type: #'AbstractInstruction *'><br>
<var: #countTripped type: #'AbstractInstruction *'><br>
<br>
(coInterpreter isOptimizedMethod: methodObj) ifTrue: [ ^ super genJumpIf: boolean to: targetBytecodePC ].<br>
<br>
self ssFlushTo: simStackPtr - 1.<br>
desc := self ssTop.<br>
self ssPop: 1.<br>
desc popToReg: TempReg.<br>
<br>
+ counterReg := self allocateAnyReg.<br>
- counterReg := self allocateRegisterNotConflictingWith: 0.<br>
counterAddress := counters + ((self sizeof: #sqInt) * counterIndex).<br>
counterIndex := counterIndex + 1.<br>
self flag: 'will need to use MoveAw32:R: if 64 bits'.<br>
self assert: objectMemory wordSize = CounterBytes.<br>
retry := self MoveAw: counterAddress R: counterReg.<br>
self SubCq: 16r10000 R: counterReg. "Count executed"<br>
"Don't write back if we trip; avoids wrapping count back to initial value, and if we trip we don't execute."<br>
countTripped := self JumpCarry: 0.<br>
self MoveR: counterReg Aw: counterAddress. "write back"<br>
<br>
"Cunning trick by LPD. If true and false are contiguous subtract the smaller.<br>
Correct result is either 0 or the distance between them. If result is not 0 or<br>
their distance send mustBeBoolean."<br>
self assert: (objectMemory objectAfter: objectMemory falseObject) = objectMemory trueObject.<br>
self annotate: (self SubCw: boolean R: TempReg) objRef: boolean.<br>
self JumpZero: (self ensureFixupAt: targetBytecodePC - initialPC).<br>
<br>
self SubCq: 1 R: counterReg. "Count untaken"<br>
self MoveR: counterReg Aw: counterAddress. "write back"<br>
<br>
self CmpCq: (boolean == objectMemory falseObject<br>
ifTrue: [objectMemory trueObject - objectMemory falseObject]<br>
ifFalse: [objectMemory falseObject - objectMemory trueObject])<br>
R: TempReg.<br>
ok := self JumpZero: 0.<br>
self MoveCq: 0 R: counterReg. "if counterReg is 0 this is a mustBeBoolean, not a counter trip."<br>
countTripped jmpTarget:<br>
(self CallRT: (boolean == objectMemory falseObject<br>
ifTrue: [ceSendMustBeBooleanAddFalseTrampoline]<br>
ifFalse: [ceSendMustBeBooleanAddTrueTrampoline])).<br>
"If we're in an image which hasn't got the Sista code loaded then the ceCounterTripped:<br>
trampoline will return directly to machine code, returning the boolean. So the code should<br>
jump back to the retry point. The trampoline makes sure that TempReg has been reloaded."<br>
self annotateBytecode: self Label.<br>
self Jump: retry.<br>
ok jmpTarget: self Label.<br>
^0!<br>
<br>
Item was changed:<br>
----- Method: SistaStackToRegisterMappingCogit>>genSpecialSelectorComparison (in category 'bytecode generators') -----<br>
genSpecialSelectorComparison<br>
"Override to count inlined branches if followed by a conditional branch.<br>
We borrow the following conditional branch's counter and when about to<br>
inline the comparison we decrement the counter (without writing it back)<br>
and if it trips simply abort the inlining, falling back to the normal send which<br>
will then continue to the conditional branch which will trip and enter the abort."<br>
| nextPC postBranchPC targetBytecodePC primDescriptor branchDescriptor nExts<br>
rcvrIsInt argIsInt rcvrInt argInt result jumpNotSmallInts inlineCAB annotateInst<br>
counterAddress countTripped counterReg |<br>
<var: #countTripped type: #'AbstractInstruction *'><br>
<var: #primDescriptor type: #'BytecodeDescriptor *'><br>
<var: #jumpNotSmallInts type: #'AbstractInstruction *'><br>
<var: #branchDescriptor type: #'BytecodeDescriptor *'><br>
<br>
(coInterpreter isOptimizedMethod: methodObj) ifTrue: [ ^ super genSpecialSelectorComparison ].<br>
<br>
self ssFlushTo: simStackPtr - 2.<br>
primDescriptor := self generatorAt: byte0.<br>
argIsInt := self ssTop type = SSConstant<br>
and: [objectMemory isIntegerObject: (argInt := self ssTop constant)].<br>
rcvrIsInt := (self ssValue: 1) type = SSConstant<br>
and: [objectMemory isIntegerObject: (rcvrInt := (self ssValue: 1) constant)].<br>
<br>
"short-cut the jump if operands are SmallInteger constants."<br>
(argIsInt and: [rcvrIsInt]) ifTrue:<br>
[self cCode: '' inSmalltalk: "In Simulator ints are unsigned..."<br>
[rcvrInt := objectMemory integerValueOf: rcvrInt.<br>
argInt := objectMemory integerValueOf: argInt].<br>
primDescriptor opcode caseOf: {<br>
[JumpLess] -> [result := rcvrInt < argInt].<br>
[JumpLessOrEqual] -> [result := rcvrInt <= argInt].<br>
[JumpGreater] -> [result := rcvrInt > argInt].<br>
[JumpGreaterOrEqual] -> [result := rcvrInt >= argInt].<br>
[JumpZero] -> [result := rcvrInt = argInt].<br>
[JumpNonZero] -> [result := rcvrInt ~= argInt] }.<br>
"Must enter any annotatedConstants into the map"<br>
self annotateBytecodeIfAnnotated: (self ssValue: 1).<br>
self annotateBytecodeIfAnnotated: self ssTop.<br>
"Must annotate the bytecode for correct pc mapping."<br>
self ssPop: 2.<br>
^self ssPushAnnotatedConstant: (result<br>
ifTrue: [objectMemory trueObject]<br>
ifFalse: [objectMemory falseObject])].<br>
<br>
nextPC := bytecodePC + primDescriptor numBytes.<br>
nExts := 0.<br>
[branchDescriptor := self generatorAt: (objectMemory fetchByte: nextPC ofObject: methodObj) + (byte0 bitAnd: 256).<br>
branchDescriptor isExtension] whileTrue:<br>
[nExts := nExts + 1.<br>
nextPC := nextPC + branchDescriptor numBytes].<br>
"Only interested in inlining if followed by a conditional branch."<br>
inlineCAB := branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse].<br>
"Further, only interested in inlining = and ~= if there's a SmallInteger constant involved.<br>
The relational operators successfully statically predict SmallIntegers; the equality operators do not."<br>
(inlineCAB and: [primDescriptor opcode = JumpZero or: [primDescriptor opcode = JumpNonZero]]) ifTrue:<br>
[inlineCAB := argIsInt or: [rcvrIsInt]].<br>
inlineCAB ifFalse:<br>
[^self genSpecialSelectorSend].<br>
<br>
targetBytecodePC := nextPC<br>
+ branchDescriptor numBytes<br>
+ (self spanFor: branchDescriptor at: nextPC exts: nExts in: methodObj).<br>
postBranchPC := nextPC + branchDescriptor numBytes.<br>
argIsInt<br>
ifTrue:<br>
[(self ssValue: 1) popToReg: ReceiverResultReg.<br>
annotateInst := self ssTop annotateUse.<br>
self ssPop: 2.<br>
self MoveR: ReceiverResultReg R: TempReg]<br>
ifFalse:<br>
[self marshallSendArguments: 1.<br>
self MoveR: Arg0Reg R: TempReg.<br>
rcvrIsInt ifFalse:<br>
[objectRepresentation isSmallIntegerTagNonZero<br>
ifTrue: [self AndR: ReceiverResultReg R: TempReg]<br>
ifFalse: [self OrR: ReceiverResultReg R: TempReg]]].<br>
jumpNotSmallInts := objectRepresentation genJumpNotSmallIntegerInScratchReg: TempReg.<br>
<br>
+ counterReg := self allocateRegNotConflictingWith: (self registerMaskFor: ReceiverResultReg and: Arg0Reg). "Use this as the count reg, can't conflict with the registers for the arg and the receiver"<br>
- counterReg := self allocateRegisterNotConflictingWith: (self registerMaskFor: ReceiverResultReg and: Arg0Reg). "Use this as the count reg, can't conflict with the registers for the arg and the receiver"<br>
self ssAllocateRequiredReg: counterReg. "Use this as the count reg."<br>
counterAddress := counters + ((self sizeof: #sqInt) * counterIndex).<br>
self flag: 'will need to use MoveAw32:R: if 64 bits'.<br>
self assert: objectMemory wordSize = CounterBytes.<br>
self MoveAw: counterAddress R: counterReg.<br>
self SubCq: 16r10000 R: counterReg. "Count executed"<br>
"If counter trips simply abort the inlined comparison and send continuing to the following<br>
branch *without* writing back. A double decrement will not trip the second time."<br>
countTripped := self JumpCarry: 0.<br>
self MoveR: counterReg Aw: counterAddress. "write back"<br>
<br>
argIsInt<br>
ifTrue: [annotateInst<br>
ifTrue: [self annotateBytecode: (self CmpCq: argInt R: ReceiverResultReg)]<br>
ifFalse: [self CmpCq: argInt R: ReceiverResultReg]]<br>
ifFalse: [self CmpR: Arg0Reg R: ReceiverResultReg].<br>
"Cmp is weird/backwards so invert the comparison. Further since there is a following conditional<br>
jump bytecode define non-merge fixups and leave the cond bytecode to set the mergeness."<br>
self gen: (branchDescriptor isBranchTrue<br>
ifTrue: [primDescriptor opcode]<br>
ifFalse: [self inverseBranchFor: primDescriptor opcode])<br>
operand: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.<br>
self SubCq: 1 R: counterReg. "Count untaken"<br>
self MoveR: counterReg Aw: counterAddress. "write back"<br>
self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC).<br>
countTripped jmpTarget: (jumpNotSmallInts jmpTarget: self Label).<br>
argIsInt ifTrue:<br>
[self MoveCq: argInt R: Arg0Reg].<br>
^self genMarshalledSend: (coInterpreter specialSelector: byte0 - self firstSpecialSelectorBytecodeOffset)<br>
numArgs: 1<br>
sendTable: ordinarySendTrampolines!<br>
<br>
Item was changed:<br>
----- Method: SistaStackToRegisterMappingCogit>>genSpecialSelectorEqualsEqualsWithForwarders (in category 'bytecode generators') -----<br>
genSpecialSelectorEqualsEqualsWithForwarders<br>
"Override to count inlined branches if followed by a conditional branch.<br>
We borrow the following conditional branch's counter and when about to<br>
inline the comparison we decrement the counter (without writing it back)<br>
and if it trips simply abort the inlining, falling back to the normal send which<br>
will then continue to the conditional branch which will trip and enter the abort."<br>
| nextPC postBranchPC targetBytecodePC primDescriptor branchDescriptor nExts label counterReg fixup<br>
counterAddress countTripped unforwardArg unforwardRcvr argReg rcvrReg regMask |<br>
<var: #fixup type: #'BytecodeFixup *'><br>
<var: #countTripped type: #'AbstractInstruction *'><br>
<var: #primDescriptor type: #'BytecodeDescriptor *'><br>
<var: #branchDescriptor type: #'BytecodeDescriptor *'><br>
<br>
((coInterpreter isOptimizedMethod: methodObj) or: [needsFrame not]) ifTrue: [ ^ super genSpecialSelectorEqualsEquals ].<br>
<br>
primDescriptor := self generatorAt: byte0.<br>
regMask := 0.<br>
<br>
nextPC := bytecodePC + primDescriptor numBytes.<br>
nExts := 0.<br>
[branchDescriptor := self generatorAt: (objectMemory fetchByte: nextPC ofObject: methodObj) + (byte0 bitAnd: 256).<br>
branchDescriptor isExtension] whileTrue:<br>
[nExts := nExts + 1.<br>
nextPC := nextPC + branchDescriptor numBytes].<br>
<br>
(branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifTrue:<br>
[self ssFlushTo: simStackPtr - 2].<br>
<br>
unforwardRcvr := (objectRepresentation isUnannotatableConstant: (self ssValue: 1)) not.<br>
unforwardArg := (objectRepresentation isUnannotatableConstant: self ssTop) not.<br>
<br>
"if the rcvr or the arg is an annotable constant, we need to push it to a register<br>
else the forwarder check can't jump back to the comparison after unforwarding the constant"<br>
unforwardArg<br>
ifTrue:<br>
[unforwardRcvr<br>
ifTrue:<br>
+ [self allocateRegForStackTopTwoEntriesInto: [:rTop :rNext| argReg := rTop. rcvrReg := rNext].<br>
- [self allocateTwoRegistersInto: [:rTop :rNext| argReg := rTop. rcvrReg := rNext].<br>
self ssTop popToReg: argReg.<br>
(self ssValue:1) popToReg: rcvrReg]<br>
ifFalse:<br>
+ [argReg := self allocateRegForStackTopEntry.<br>
- [argReg := self allocateOneRegister.<br>
self ssTop popToReg: argReg]]<br>
ifFalse:<br>
[self assert: unforwardRcvr.<br>
+ rcvrReg := self allocateRegForStackEntryAt: 1.<br>
- rcvrReg := self allocateOneRegister.<br>
(self ssValue:1) popToReg: rcvrReg].<br>
<br>
argReg ifNotNil: [ regMask := self registerMaskFor: regMask ].<br>
rcvrReg ifNotNil: [ regMask := regMask bitOr: (self registerMaskFor: rcvrReg) ].<br>
<br>
"Here we can use Cq because the constant does not need to be annotated"<br>
self assert: (unforwardArg not or: [argReg notNil]).<br>
self assert: (unforwardRcvr not or: [rcvrReg notNil]).<br>
<br>
"Only interested in inlining if followed by a conditional branch."<br>
(branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:<br>
[^ self genDirectEqualsEqualsArg: unforwardArg rcvr: unforwardRcvr argReg: argReg rcvrReg: rcvrReg].<br>
<br>
targetBytecodePC := nextPC<br>
+ branchDescriptor numBytes<br>
+ (self spanFor: branchDescriptor at: nextPC exts: nExts in: methodObj).<br>
postBranchPC := nextPC + branchDescriptor numBytes.<br>
<br>
+ counterReg := self allocateRegNotConflictingWith: regMask. "Use this as the count reg, can't conflict with the registers for the arg and the receiver of #==."<br>
- counterReg := self allocateRegisterNotConflictingWith: regMask. "Use this as the count reg, can't conflict with the registers for the arg and the receiver of #==."<br>
counterAddress := counters + ((self sizeof: #sqInt) * counterIndex).<br>
self flag: 'will need to use MoveAw32:R: if 64 bits'.<br>
self assert: objectMemory wordSize = CounterBytes.<br>
self MoveAw: counterAddress R: counterReg.<br>
self SubCq: 16r10000 R: counterReg. "Count executed"<br>
"If counter trips simply abort the inlined comparison and send continuing to the following<br>
branch *without* writing back. A double decrement will not trip the second time."<br>
countTripped := self JumpCarry: 0.<br>
self MoveR: counterReg Aw: counterAddress. "write back"<br>
<br>
self assert: (unforwardArg or: [ unforwardRcvr ]).<br>
<br>
label := self Label.<br>
<br>
unforwardArg<br>
ifFalse: [ self CmpCq: self ssTop constant R: rcvrReg ]<br>
ifTrue: [ unforwardRcvr<br>
ifFalse: [ self CmpCq: (self ssValue: 1) constant R: argReg ]<br>
ifTrue: [ self CmpR: argReg R: rcvrReg ] ].<br>
<br>
self ssPop: 2.<br>
branchDescriptor isBranchTrue ifTrue:<br>
[ fixup := self ensureNonMergeFixupAt: postBranchPC - initialPC.<br>
self JumpZero: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.<br>
unforwardArg ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: label ].<br>
unforwardRcvr ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg jumpBackTo: label ] ].<br>
branchDescriptor isBranchFalse ifTrue:<br>
[ fixup := self ensureNonMergeFixupAt: targetBytecodePC - initialPC.<br>
self JumpZero: (self ensureNonMergeFixupAt: postBranchPC - initialPC) asUnsignedInteger.<br>
unforwardArg ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: label ].<br>
unforwardRcvr ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg jumpBackTo: label ] ].<br>
self ssPop: -2.<br>
<br>
"the jump has not been taken and forwarders have been followed."<br>
self SubCq: 1 R: counterReg. "Count untaken"<br>
self MoveR: counterReg Aw: counterAddress. "write back"<br>
self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC).<br>
<br>
countTripped jmpTarget: self Label.<br>
<br>
"inlined version of #== ignoring the branchDescriptor if the counter trips to have normal state for the optimizer"<br>
^ self genDirectEqualsEqualsArg: unforwardArg rcvr: unforwardRcvr argReg: argReg rcvrReg: rcvrReg!<br>
<br>
Item was added:<br>
+ ----- Method: StackToRegisterMappingCogit>>allocateAnyReg (in category 'simulation stack') -----<br>
+ allocateAnyReg<br>
+ < inline: true ><br>
+ ^ self allocateRegNotConflictingWith: 0!<br>
<br>
Item was removed:<br>
- ----- Method: StackToRegisterMappingCogit>>allocateOneRegister (in category 'simulation stack') -----<br>
- allocateOneRegister<br>
-<br>
- self ssTop type = SSRegister ifTrue: [ ^ self ssTop register].<br>
-<br>
- ^ self allocateRegisterNotConflictingWith: 0<br>
- !<br>
<br>
Item was added:<br>
+ ----- Method: StackToRegisterMappingCogit>>allocateRegForStackEntryAt: (in category 'simulation stack') -----<br>
+ allocateRegForStackEntryAt: index<br>
+ <inline: true><br>
+ <var: #stackEntry type: #'CogSimStackEntry *'><br>
+ | stackEntry |<br>
+ stackEntry := self ssValue: index.<br>
+ stackEntry type = SSRegister ifTrue: [ ^ stackEntry register].<br>
+ ^ self allocateAnyReg<br>
+ !<br>
<br>
Item was added:<br>
+ ----- Method: StackToRegisterMappingCogit>>allocateRegForStackTopEntry (in category 'simulation stack') -----<br>
+ allocateRegForStackTopEntry<br>
+ ^ self allocateRegForStackEntryAt: 0<br>
+ !<br>
<br>
Item was added:<br>
+ ----- Method: StackToRegisterMappingCogit>>allocateRegForStackTopThreeEntriesInto:thirdIsReceiver: (in category 'simulation stack') -----<br>
+ allocateRegForStackTopThreeEntriesInto: trinaryBlock thirdIsReceiver: thirdIsReceiver<br>
+ <inline: true><br>
+ | topRegistersMask rTop rNext rThird |<br>
+<br>
+ topRegistersMask := 0.<br>
+<br>
+ (self ssTop type = SSRegister and: [ thirdIsReceiver not or: [ self ssTop register ~= ReceiverResultReg ] ]) ifTrue:<br>
+ [ topRegistersMask := self registerMaskFor: (rTop := self ssTop register)].<br>
+ ((self ssValue: 1) type = SSRegister and: [ thirdIsReceiver not or: [ (self ssValue: 1) register ~= ReceiverResultReg ] ]) ifTrue:<br>
+ [ topRegistersMask := topRegistersMask bitOr: (self registerMaskFor: (rNext := (self ssValue: 1) register))].<br>
+ ((self ssValue: 2) type = SSRegister and: [thirdIsReceiver not or: [ (self ssValue: 2) register = ReceiverResultReg ] ]) ifTrue:<br>
+ [ topRegistersMask := topRegistersMask bitOr: (self registerMaskFor: (rThird := (self ssValue: 2) register))].<br>
+<br>
+ rThird ifNil:<br>
+ [ thirdIsReceiver<br>
+ ifTrue:<br>
+ [ rThird := ReceiverResultReg. "Free ReceiverResultReg if it was not free"<br>
+ (self register: ReceiverResultReg isInMask: self liveRegisters) ifTrue:<br>
+ [ self ssAllocateRequiredReg: ReceiverResultReg ].<br>
+ optStatus isReceiverResultRegLive: false ]<br>
+ ifFalse: [ rThird := self allocateRegNotConflictingWith: topRegistersMask ].<br>
+ topRegistersMask := topRegistersMask bitOr: (self registerMaskFor: rThird) ].<br>
+<br>
+ rTop ifNil: [<br>
+ rTop := self allocateRegNotConflictingWith: topRegistersMask.<br>
+ topRegistersMask := topRegistersMask bitOr: (self registerMaskFor: rTop) ].<br>
+<br>
+ rNext ifNil: [ rNext := self allocateRegNotConflictingWith: topRegistersMask ].<br>
+<br>
+ ^ trinaryBlock value: rTop value: rNext value: rThird<br>
+<br>
+ !<br>
<br>
Item was added:<br>
+ ----- Method: StackToRegisterMappingCogit>>allocateRegForStackTopTwoEntriesInto: (in category 'simulation stack') -----<br>
+ allocateRegForStackTopTwoEntriesInto: binaryBlock<br>
+ <inline: true><br>
+ | topRegistersMask rTop rNext |<br>
+<br>
+ topRegistersMask := 0.<br>
+<br>
+ self ssTop type = SSRegister ifTrue:<br>
+ [ topRegistersMask := self registerMaskFor: (rTop := self ssTop register)].<br>
+ (self ssValue: 1) type = SSRegister ifTrue:<br>
+ [ topRegistersMask := topRegistersMask bitOr: (self registerMaskFor: (rNext := (self ssValue: 1) register))].<br>
+<br>
+ rTop ifNil: [ rTop := self allocateRegNotConflictingWith: topRegistersMask ].<br>
+<br>
+ rNext ifNil: [ rNext := self allocateRegNotConflictingWith: (self registerMaskFor: rTop) ].<br>
+<br>
+ ^ binaryBlock value: rTop value: rNext<br>
+<br>
+ !<br>
<br>
Item was added:<br>
+ ----- Method: StackToRegisterMappingCogit>>allocateRegNotConflictingWith: (in category 'simulation stack') -----<br>
+ allocateRegNotConflictingWith: regMask<br>
+ | reg |<br>
+ "if there's a free register, use it"<br>
+ reg := backEnd availableRegisterOrNilFor: (self liveRegisters bitOr: regMask).<br>
+ reg ifNil: "No free register, choose one that does not conflict with regMask"<br>
+ [reg := self freeAnyRegNotConflictingWith: regMask].<br>
+ reg = ReceiverResultReg ifTrue: "If we've allocated RcvrResultReg, it's not live anymore"<br>
+ [ optStatus isReceiverResultRegLive: false ].<br>
+ ^ reg!<br>
<br>
Item was removed:<br>
- ----- Method: StackToRegisterMappingCogit>>allocateRegisterNotConflictingWith: (in category 'simulation stack') -----<br>
- allocateRegisterNotConflictingWith: regMask<br>
- | reg |<br>
- "if there's a free register, use it"<br>
- reg := backEnd availableRegisterOrNilFor: (self liveRegisters bitOr: regMask).<br>
- reg ifNil: "No free register, choose one that does not conflict with regMask"<br>
- [reg := self freeRegisterNotConflictingWith: regMask].<br>
- reg = ReceiverResultReg ifTrue: "If we've allocated RcvrResultReg, it's not live anymore"<br>
- [ optStatus isReceiverResultRegLive: false ].<br>
- ^ reg!<br>
<br>
Item was removed:<br>
- ----- Method: StackToRegisterMappingCogit>>allocateThreeRegistersInto:thirdIsReceiver: (in category 'simulation stack') -----<br>
- allocateThreeRegistersInto: trinaryBlock thirdIsReceiver: thirdIsReceiver<br>
- <inline: true><br>
- | topRegistersMask rTop rNext rThird |<br>
-<br>
- topRegistersMask := 0.<br>
-<br>
- (self ssTop type = SSRegister and: [ thirdIsReceiver not or: [ self ssTop register ~= ReceiverResultReg ] ]) ifTrue:<br>
- [ topRegistersMask := self registerMaskFor: (rTop := self ssTop register)].<br>
- ((self ssValue: 1) type = SSRegister and: [ thirdIsReceiver not or: [ (self ssValue: 1) register ~= ReceiverResultReg ] ]) ifTrue:<br>
- [ topRegistersMask := topRegistersMask bitOr: (self registerMaskFor: (rNext := (self ssValue: 1) register))].<br>
- ((self ssValue: 2) type = SSRegister and: [thirdIsReceiver not or: [ (self ssValue: 2) register = ReceiverResultReg ] ]) ifTrue:<br>
- [ topRegistersMask := topRegistersMask bitOr: (self registerMaskFor: (rThird := (self ssValue: 2) register))].<br>
-<br>
- rThird ifNil:<br>
- [ thirdIsReceiver<br>
- ifTrue:<br>
- [ rThird := ReceiverResultReg. "Free ReceiverResultReg if it was not free"<br>
- (self register: ReceiverResultReg isInMask: self liveRegisters) ifTrue:<br>
- [ self ssAllocateRequiredReg: ReceiverResultReg ].<br>
- optStatus isReceiverResultRegLive: false ]<br>
- ifFalse: [ rThird := self allocateRegisterNotConflictingWith: topRegistersMask ].<br>
- topRegistersMask := topRegistersMask bitOr: (self registerMaskFor: rThird) ].<br>
-<br>
- rTop ifNil: [<br>
- rTop := self allocateRegisterNotConflictingWith: topRegistersMask.<br>
- topRegistersMask := topRegistersMask bitOr: (self registerMaskFor: rTop) ].<br>
-<br>
- rNext ifNil: [ rNext := self allocateRegisterNotConflictingWith: topRegistersMask ].<br>
-<br>
- ^ trinaryBlock value: rTop value: rNext value: rThird<br>
-<br>
- !<br>
<br>
Item was removed:<br>
- ----- Method: StackToRegisterMappingCogit>>allocateTwoRegistersInto: (in category 'simulation stack') -----<br>
- allocateTwoRegistersInto: binaryBlock<br>
- <inline: true><br>
- | topRegistersMask rTop rNext |<br>
-<br>
- topRegistersMask := 0.<br>
-<br>
- self ssTop type = SSRegister ifTrue:<br>
- [ topRegistersMask := self registerMaskFor: (rTop := self ssTop register)].<br>
- (self ssValue: 1) type = SSRegister ifTrue:<br>
- [ topRegistersMask := topRegistersMask bitOr: (self registerMaskFor: (rNext := (self ssValue: 1) register))].<br>
-<br>
- rTop ifNil: [ rTop := self allocateRegisterNotConflictingWith: topRegistersMask ].<br>
-<br>
- rNext ifNil: [ rNext := self allocateRegisterNotConflictingWith: (self registerMaskFor: rTop) ].<br>
-<br>
- ^ binaryBlock value: rTop value: rNext<br>
-<br>
- !<br>
<br>
Item was removed:<br>
- ----- Method: StackToRegisterMappingCogit>>availableRegister (in category 'simulation stack') -----<br>
- availableRegister<br>
- | reg |<br>
- reg := self availableRegisterOrNil.<br>
- reg ifNil: [self error: 'no available register'].<br>
- ^reg!<br>
<br>
Item was removed:<br>
- ----- Method: StackToRegisterMappingCogit>>availableRegisterOrNil (in category 'simulation stack') -----<br>
- availableRegisterOrNil<br>
- ^backEnd availableRegisterOrNilFor: self liveRegisters!<br>
<br>
Item was added:<br>
+ ----- Method: StackToRegisterMappingCogit>>freeAnyRegNotConflictingWith: (in category 'simulation stack') -----<br>
+ freeAnyRegNotConflictingWith: regMask<br>
+ "Spill the closest register on stack not conflicting with regMask.<br>
+ Assertion Failure if regMask has already all the registers"<br>
+ <var: #desc type: #'CogSimStackEntry *'><br>
+ | reg index |<br>
+ index := simSpillBase max: 0.<br>
+ [reg isNil and: [index < simStackPtr] ] whileTrue:<br>
+ [ | desc |<br>
+ desc := self simStackAt: index.<br>
+ desc type = SSRegister ifTrue:<br>
+ [ (regMask anyMask: (self registerMaskFor: desc register)) ifFalse:<br>
+ [ reg := desc register ] ].<br>
+ index := index + 1].<br>
+ self assert: reg notNil.<br>
+ self ssAllocateRequiredReg: reg.<br>
+ ^reg!<br>
<br>
Item was removed:<br>
- ----- Method: StackToRegisterMappingCogit>>freeRegisterNotConflictingWith: (in category 'simulation stack') -----<br>
- freeRegisterNotConflictingWith: regMask<br>
- "Spill the closest register on stack not conflicting with regMask.<br>
- Assertion Failure if regMask has already all the registers"<br>
- <var: #desc type: #'CogSimStackEntry *'><br>
- | reg index |<br>
- index := simSpillBase max: 0.<br>
- [reg isNil and: [index < simStackPtr] ] whileTrue:<br>
- [ | desc |<br>
- desc := self simStackAt: index.<br>
- desc type = SSRegister ifTrue:<br>
- [ (regMask anyMask: (self registerMaskFor: desc register)) ifFalse:<br>
- [ reg := desc register ] ].<br>
- index := index + 1].<br>
- self assert: reg notNil.<br>
- self ssAllocateRequiredReg: reg.<br>
- ^reg!<br>
<br>
Item was changed:<br>
----- Method: StackToRegisterMappingCogit>>genBinaryConstOpVarInlinePrimitive: (in category 'inline primitive generators') -----<br>
genBinaryConstOpVarInlinePrimitive: prim<br>
"Const op var version of binary inline primitives."<br>
"SistaV1: 248 11111000 iiiiiiii mjjjjjjj Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1 means inlined primitive, no hard return after execution.<br>
See EncoderForSistaV1's class comment and StackInterpreter>>#binaryInlinePrimitive:"<br>
| ra val untaggedVal adjust |<br>
+ ra := self allocateRegForStackTopEntry.<br>
- ra := self allocateOneRegister.<br>
self ssTop popToReg: ra.<br>
self ssPop: 1.<br>
val := self ssTop constant.<br>
self ssPop: 1.<br>
untaggedVal := val - objectMemory smallIntegerTag.<br>
prim caseOf: {<br>
"0 through 6, +, -, *, /, //, \\, quo:, SmallInteger op SmallInteger => SmallInteger, no overflow"<br>
[0] -> [self AddCq: untaggedVal R: ra].<br>
[1] -> [self MoveCq: val R: TempReg.<br>
self SubR: ra R: TempReg.<br>
objectRepresentation genAddSmallIntegerTagsTo: TempReg.<br>
self MoveR: TempReg R: ra].<br>
[2] -> [objectRepresentation genRemoveSmallIntegerTagsInScratchReg: ra.<br>
self MoveCq: (objectMemory integerValueOf: val) R: TempReg.<br>
self MulR: TempReg R: ra.<br>
objectRepresentation genAddSmallIntegerTagsTo: ra].<br>
<br>
"2016 through 2019, bitAnd:, bitOr:, bitXor, bitShift:, SmallInteger op SmallInteger => SmallInteger, no overflow"<br>
<br>
"2032 through 2037, >, <, >=, <=. =, ~=, SmallInteger op SmallInteger => Boolean (flags?? then in jump bytecodes if ssTop is a flags value, just generate the instruction!!!!)"<br>
"CmpCqR is SubRCq so everything is reversed, but because no CmpRCq things are reversed again and we invert the sense of the jumps."<br>
[32] -> [ self CmpCq: val R: ra.<br>
self genBinaryInlineComparison: JumpLess opFalse: JumpGreaterOrEqual destReg: ra ].<br>
[33] -> [ self CmpCq: val R: ra.<br>
self genBinaryInlineComparison: JumpGreater opFalse: JumpLessOrEqual destReg: ra ].<br>
[34] -> [ self CmpCq: val R: ra.<br>
self genBinaryInlineComparison: JumpLessOrEqual opFalse: JumpGreater destReg: ra ].<br>
[35] -> [ self CmpCq: val R: ra.<br>
self genBinaryInlineComparison: JumpGreaterOrEqual opFalse: JumpLess destReg: ra ].<br>
[36] -> [ self CmpCq: val R: ra.<br>
self genBinaryInlineComparison: JumpZero opFalse: JumpNonZero destReg: ra ].<br>
[37] -> [ self CmpCq: val R: ra.<br>
self genBinaryInlineComparison: JumpNonZero opFalse: JumpZero destReg: ra ].<br>
<br>
"2064 through 2068, Pointer Object>>at:, Byte Object>>at:, Short16 Word Object>>at: LongWord32 Object>>at: Quad64Word Object>>at:. obj op 0-rel SmallInteger => oop"<br>
[64] -> [objectRepresentation genConvertSmallIntegerToIntegerInReg: ra.<br>
adjust := (objectMemory baseHeaderSize >> objectMemory shiftForWord) - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative"<br>
adjust ~= 0 ifTrue: [ self AddCq: adjust R: ra. ].<br>
self genMoveConstant: val R: TempReg.<br>
self MoveXwr: ra R: TempReg R: ra].<br>
[65] -> [objectRepresentation genConvertSmallIntegerToIntegerInReg: ra.<br>
adjust := objectMemory baseHeaderSize - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative"<br>
self AddCq: adjust R: ra.<br>
self genMoveConstant: val R: TempReg.<br>
self MoveXbr: ra R: TempReg R: ra.<br>
objectRepresentation genConvertIntegerToSmallIntegerInReg: ra]<br>
}<br>
otherwise: [^EncounteredUnknownBytecode].<br>
self ssPushRegister: ra.<br>
^0!<br>
<br>
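The add and subtract cases above work directly on tagged values: `untaggedVal` is `val - smallIntegerTag`, so adding it to a tagged register yields a correctly tagged sum in one instruction, and in the reversed subtraction the two tags cancel and are re-added afterwards. A minimal Python sketch of that arithmetic, assuming a 1-bit SmallInteger tag (the real shift and tag come from objectMemory):

```python
# Sketch of the tagged arithmetic behind cases [0] and [1]; the 1-bit
# tag layout here is an assumption for illustration.
TAG_BITS = 1
TAG = 1  # assumed objectMemory smallIntegerTag

def tag_int(v):
    return (v << TAG_BITS) | TAG

def untag_int(oop):
    return oop >> TAG_BITS

def gen_add_const(val_oop, ra_oop):
    """Case [0]: AddCq: untaggedVal R: ra.
    untaggedVal = val - TAG, so one add yields a correctly tagged sum."""
    return ra_oop + (val_oop - TAG)

def gen_sub_const_var(val_oop, ra_oop):
    """Case [1]: val - ra cancels both tags; genAddSmallIntegerTagsTo:
    then restores the tag."""
    return (val_oop - ra_oop) + TAG
```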
Item was changed:<br>
----- Method: StackToRegisterMappingCogit>>genBinaryVarOpConstInlinePrimitive: (in category 'inline primitive generators') -----<br>
genBinaryVarOpConstInlinePrimitive: prim<br>
"Var op const version of binary inline primitives."<br>
"SistaV1: 248 11111000 iiiiiiii mjjjjjjj Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1 means inlined primitive, no hard return after execution.<br>
See EncoderForSistaV1's class comment and StackInterpreter>>#binaryInlinePrimitive:"<br>
| rr val untaggedVal |<br>
val := self ssTop constant.<br>
self ssPop: 1.<br>
+ rr := self allocateRegForStackTopEntry.<br>
- rr := self allocateOneRegister.<br>
self ssTop popToReg: rr.<br>
self ssPop: 1.<br>
untaggedVal := val - objectMemory smallIntegerTag.<br>
prim caseOf: {<br>
"0 through 6, +, -, *, /, //, \\, quo:, SmallInteger op SmallInteger => SmallInteger, no overflow"<br>
[0] -> [self AddCq: untaggedVal R: rr].<br>
[1] -> [self SubCq: untaggedVal R: rr ].<br>
[2] -> [self flag: 'could use MulCq:R'.<br>
objectRepresentation genShiftAwaySmallIntegerTagsInScratchReg: rr.<br>
self MoveCq: (objectMemory integerValueOf: val) R: TempReg.<br>
self MulR: TempReg R: rr.<br>
objectRepresentation genAddSmallIntegerTagsTo: rr].<br>
<br>
"2016 through 2019, bitAnd:, bitOr:, bitXor, bitShift:, SmallInteger op SmallInteger => SmallInteger, no overflow"<br>
<br>
"2032 through 2037, >, <, >=, <=, =, ~=, SmallInteger op SmallInteger => Boolean (flags?? then in jump bytecodes if ssTop is a flags value, just generate the instruction!!!!)"<br>
"CmpCqR is SubRCq so everything is reversed."<br>
[32] -> [ self CmpCq: val R: rr.<br>
self genBinaryInlineComparison: JumpGreater opFalse: JumpLessOrEqual destReg: rr ].<br>
[33] -> [ self CmpCq: val R: rr.<br>
self genBinaryInlineComparison: JumpLess opFalse: JumpGreaterOrEqual destReg: rr ].<br>
[34] -> [ self CmpCq: val R: rr.<br>
self genBinaryInlineComparison: JumpGreaterOrEqual opFalse: JumpLess destReg: rr ].<br>
[35] -> [ self CmpCq: val R: rr.<br>
self genBinaryInlineComparison: JumpLessOrEqual opFalse: JumpGreater destReg: rr ].<br>
[36] -> [ self CmpCq: val R: rr.<br>
self genBinaryInlineComparison: JumpZero opFalse: JumpNonZero destReg: rr ].<br>
[37] -> [ self CmpCq: val R: rr.<br>
self genBinaryInlineComparison: JumpNonZero opFalse: JumpZero destReg: rr ].<br>
<br>
"2064 through 2068, Pointer Object>>at:, Byte Object>>at:, Short16 Word Object>>at: LongWord32 Object>>at: Quad64Word Object>>at:. obj op 0-rel SmallInteger => oop"<br>
[64] -> [objectRepresentation genLoadSlot: (objectMemory integerValueOf: val) - 1 sourceReg: rr destReg: rr].<br>
[65] -> [self MoveCq: (objectMemory integerValueOf: val) + objectMemory baseHeaderSize - 1 R: TempReg.<br>
self MoveXbr: TempReg R: rr R: rr.<br>
objectRepresentation genConvertIntegerToSmallIntegerInReg: rr]<br>
<br>
}<br>
otherwise: [^EncounteredUnknownBytecode].<br>
self ssPushRegister: rr.<br>
^0!<br>
<br>
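The multiply case ([2] in the const-op-var generator) relies on the identity `(a << s) * b = (a * b) << s`: strip only the tag bit from one operand, keeping its shift, fully untag the other with `integerValueOf:`, and the product comes out already shifted, needing just the tag re-added. A hedged Python sketch, again assuming a 1-bit tag:

```python
# Sketch of the multiply trick; the 1-bit tag is an assumption, the real
# constants come from objectMemory.
TAG_BITS = 1
TAG = 1

def tag_int(v):
    return (v << TAG_BITS) | TAG

def gen_multiply(ra_oop, val_oop):
    shifted = ra_oop - TAG        # genRemoveSmallIntegerTagsInScratchReg:
    raw = val_oop >> TAG_BITS     # objectMemory integerValueOf: val
    product = shifted * raw       # (a * b) << TAG_BITS, tag bit clear
    return product + TAG          # genAddSmallIntegerTagsTo:
```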
Item was changed:<br>
----- Method: StackToRegisterMappingCogit>>genBinaryVarOpVarInlinePrimitive: (in category 'inline primitive generators') -----<br>
genBinaryVarOpVarInlinePrimitive: prim<br>
"Var op var version of binary inline primitives."<br>
"SistaV1: 248 11111000 iiiiiiii mjjjjjjj Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1 means inlined primitive, no hard return after execution.<br>
See EncoderForSistaV1's class comment and StackInterpreter>>#binaryInlinePrimitive:"<br>
| ra rr adjust |<br>
+ self allocateRegForStackTopTwoEntriesInto: [:rTop :rNext | ra := rTop. rr := rNext ].<br>
- self allocateTwoRegistersInto: [:rTop :rNext | ra := rTop. rr := rNext ].<br>
self ssTop popToReg: ra.<br>
self ssPop: 1.<br>
self ssTop popToReg: rr.<br>
self ssPop: 1.<br>
prim caseOf: {<br>
"0 through 6, +, -, *, /, //, \\, quo:, SmallInteger op SmallInteger => SmallInteger, no overflow"<br>
[0] -> [objectRepresentation genRemoveSmallIntegerTagsInScratchReg: ra.<br>
self AddR: ra R: rr].<br>
[1] -> [self SubR: ra R: rr.<br>
objectRepresentation genAddSmallIntegerTagsTo: rr].<br>
[2] -> [objectRepresentation genRemoveSmallIntegerTagsInScratchReg: rr.<br>
objectRepresentation genShiftAwaySmallIntegerTagsInScratchReg: ra.<br>
self MulR: ra R: rr.<br>
objectRepresentation genAddSmallIntegerTagsTo: rr].<br>
<br>
"2016 through 2019, bitAnd:, bitOr:, bitXor, bitShift:, SmallInteger op SmallInteger => SmallInteger, no overflow"<br>
<br>
"2032 through 2037, >, <, >=, <=, =, ~=, SmallInteger op SmallInteger => Boolean (flags?? then in jump bytecodes if ssTop is a flags value, just generate the instruction!!!!)"<br>
"CmpCqR is SubRCq so everything is reversed."<br>
[32] -> [ self CmpR: ra R: rr.<br>
self genBinaryInlineComparison: JumpGreater opFalse: JumpLessOrEqual destReg: rr ].<br>
[33] -> [ self CmpR: ra R: rr.<br>
self genBinaryInlineComparison: JumpLess opFalse: JumpGreaterOrEqual destReg: rr ].<br>
[34] -> [ self CmpR: ra R: rr.<br>
self genBinaryInlineComparison: JumpGreaterOrEqual opFalse: JumpLess destReg: rr ].<br>
[35] -> [ self CmpR: ra R: rr.<br>
self genBinaryInlineComparison: JumpLessOrEqual opFalse: JumpGreater destReg: rr ].<br>
[36] -> [ self CmpR: ra R: rr.<br>
self genBinaryInlineComparison: JumpZero opFalse: JumpNonZero destReg: rr ].<br>
[37] -> [ self CmpR: ra R: rr.<br>
self genBinaryInlineComparison: JumpNonZero opFalse: JumpZero destReg: rr ].<br>
<br>
"2064 through 2068, Pointer Object>>at:, Byte Object>>at:, Short16 Word Object>>at: LongWord32 Object>>at: Quad64Word Object>>at:. obj op 0-rel SmallInteger => oop"<br>
[64] -> [objectRepresentation genConvertSmallIntegerToIntegerInReg: ra.<br>
adjust := (objectMemory baseHeaderSize >> objectMemory shiftForWord) - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative"<br>
adjust ~= 0 ifTrue: [ self AddCq: adjust R: ra. ].<br>
self MoveXwr: ra R: rr R: rr ].<br>
[65] -> [objectRepresentation genConvertSmallIntegerToIntegerInReg: ra.<br>
adjust := objectMemory baseHeaderSize - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative"<br>
self AddCq: adjust R: ra.<br>
self MoveXbr: ra R: rr R: rr.<br>
objectRepresentation genConvertIntegerToSmallIntegerInReg: rr]<br>
<br>
}<br>
otherwise: [^EncounteredUnknownBytecode].<br>
self ssPushRegister: rr.<br>
^0!<br>
<br>
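The `at:` cases ([64] and [65]) turn a tagged, 1-relative Smalltalk index into a 0-relative element index past the object header; that is what the `adjust` computation does before the scaled-index move (`MoveXwr:`/`MoveXbr:`). A Python sketch of the index arithmetic, with assumed header and word sizes for illustration:

```python
# Assumed 32-bit-style layout: 8-byte header, 4-byte words, 1-bit tag.
BASE_HEADER_SIZE = 8
WORD_SIZE = 4          # assumed: shiftForWord = 2
TAG_BITS = 1

def word_index(tagged_index):
    i = tagged_index >> TAG_BITS                  # genConvertSmallIntegerToIntegerInReg:
    adjust = (BASE_HEADER_SIZE // WORD_SIZE) - 1  # skip header, 1-rel -> 0-rel
    return i + adjust                             # MoveXwr: scales by WORD_SIZE

def byte_index(tagged_index):
    i = tagged_index >> TAG_BITS
    return i + (BASE_HEADER_SIZE - 1)             # MoveXbr: scales by 1
```

So `at: 1` lands on word index 2, i.e. byte offset 8, the first slot after the assumed 8-byte header.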
Item was changed:<br>
----- Method: StackToRegisterMappingCogit>>genPushLiteralVariable: (in category 'bytecode generator support') -----<br>
genPushLiteralVariable: literalIndex<br>
<inline: false><br>
| association freeReg |<br>
+ freeReg := self allocateAnyReg.<br>
- freeReg := self ssAllocatePreferredReg: ClassReg.<br>
association := self getLiteral: literalIndex.<br>
"N.B. Do _not_ use ReceiverResultReg to avoid overwriting receiver in assignment in frameless methods."<br>
"So far descriptors are not rich enough to describe the entire dereference so generate the register<br>
load but don't push the result. There is an order-of-evaluation issue if we defer the dereference."<br>
self genMoveConstant: association R: TempReg.<br>
objectRepresentation<br>
genEnsureObjInRegNotForwarded: TempReg<br>
scratchReg: freeReg.<br>
objectRepresentation<br>
genLoadSlot: ValueIndex<br>
sourceReg: TempReg<br>
destReg: freeReg.<br>
self ssPushRegister: freeReg.<br>
^0!<br>
<br>
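The `genEnsureObjInRegNotForwarded:scratchReg:` call above guards against the association having been moved by the GC, leaving a forwarding pointer behind. Conceptually it follows forwarders until it reaches the real object; a toy Python model (the table-based structure here is illustrative, not the VM's representation):

```python
# Toy model of forwarder resolution; in Spur a forwarded object points
# at its new location, modeled here as a dict entry.
forwarding_table = {}   # old oop -> new oop

def ensure_not_forwarded(oop):
    while oop in forwarding_table:   # the generated check + fixup path
        oop = forwarding_table[oop]
    return oop
```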
Item was changed:<br>
----- Method: StackToRegisterMappingCogit>>genSpecialSelectorEqualsEqualsWithForwarders (in category 'bytecode generators') -----<br>
genSpecialSelectorEqualsEqualsWithForwarders<br>
| primDescriptor nextPC nExts branchDescriptor unforwardRcvr argReg targetBytecodePC<br>
unforwardArg rcvrReg jumpNotEqual jumpEqual postBranchPC label fixup |<br>
<var: #fixup type: #'BytecodeFixup *'><br>
<var: #jumpEqual type: #'AbstractInstruction *'><br>
<var: #jumpNotEqual type: #'AbstractInstruction *'><br>
<var: #primDescriptor type: #'BytecodeDescriptor *'><br>
<var: #branchDescriptor type: #'BytecodeDescriptor *'><br>
<br>
primDescriptor := self generatorAt: byte0.<br>
<br>
nextPC := bytecodePC + primDescriptor numBytes.<br>
nExts := 0.<br>
[branchDescriptor := self generatorAt: (objectMemory fetchByte: nextPC ofObject: methodObj) + bytecodeSetOffset.<br>
branchDescriptor isExtension] whileTrue:<br>
[nExts := nExts + 1.<br>
nextPC := nextPC + branchDescriptor numBytes].<br>
"If branching the stack must be flushed for the merge"<br>
(branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifTrue:<br>
[self ssFlushTo: simStackPtr - 2].<br>
<br>
unforwardRcvr := (objectRepresentation isUnannotatableConstant: (self ssValue: 1)) not.<br>
unforwardArg := (objectRepresentation isUnannotatableConstant: self ssTop) not.<br>
<br>
"if the rcvr or the arg is an annotable constant, we need to push it to a register<br>
else the forwarder check can't jump back to the comparison after unforwarding the constant"<br>
unforwardArg<br>
ifTrue:<br>
[unforwardRcvr<br>
ifTrue:<br>
+ [self allocateRegForStackTopTwoEntriesInto: [:rTop :rNext| argReg := rTop. rcvrReg := rNext].<br>
- [self allocateTwoRegistersInto: [:rTop :rNext| argReg := rTop. rcvrReg := rNext].<br>
self ssTop popToReg: argReg.<br>
(self ssValue:1) popToReg: rcvrReg]<br>
ifFalse:<br>
+ [argReg := self allocateRegForStackTopEntry.<br>
- [argReg := self allocateOneRegister.<br>
self ssTop popToReg: argReg]]<br>
ifFalse:<br>
[self assert: unforwardRcvr.<br>
+ rcvrReg := self allocateRegForStackEntryAt: 1.<br>
- rcvrReg := self allocateOneRegister.<br>
(self ssValue:1) popToReg: rcvrReg].<br>
<br>
label := self Label.<br>
<br>
"Here we can use Cq because the constant does not need to be annotated"<br>
self assert: (unforwardArg not or: [argReg notNil]).<br>
self assert: (unforwardRcvr not or: [rcvrReg notNil]).<br>
unforwardArg<br>
ifFalse: [ self CmpCq: self ssTop constant R: rcvrReg ]<br>
ifTrue: [ unforwardRcvr<br>
ifFalse: [ self CmpCq: (self ssValue: 1) constant R: argReg ]<br>
ifTrue: [ self CmpR: argReg R: rcvrReg ] ].<br>
<br>
self ssPop: 2.<br>
<br>
"If not followed by a branch, resolve to true or false."<br>
(branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:<br>
[jumpEqual := self JumpZero: 0.<br>
unforwardArg ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: label ].<br>
unforwardRcvr ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg jumpBackTo: label ].<br>
self genMoveFalseR: rcvrReg.<br>
jumpNotEqual := self Jump: 0.<br>
jumpEqual jmpTarget: (self genMoveTrueR: rcvrReg).<br>
jumpNotEqual jmpTarget: self Label.<br>
self ssPushRegister: rcvrReg.<br>
^0].<br>
<br>
"Further since there is a following conditional jump bytecode, define<br>
non-merge fixups and leave the cond bytecode to set the mergeness."<br>
targetBytecodePC := nextPC<br>
+ branchDescriptor numBytes<br>
+ (self spanFor: branchDescriptor at: nextPC exts: nExts in: methodObj).<br>
postBranchPC := nextPC + branchDescriptor numBytes.<br>
(self fixupAt: nextPC - initialPC) targetInstruction = 0<br>
ifTrue: "The next instruction is dead. we can skip it."<br>
[deadCode := true.<br>
self ensureFixupAt: targetBytecodePC - initialPC.<br>
self ensureFixupAt: postBranchPC - initialPC]<br>
ifFalse:<br>
[self ssPushConstant: objectMemory trueObject]. "dummy value"<br>
<br>
self assert: (unforwardArg or: [ unforwardRcvr ]).<br>
branchDescriptor isBranchTrue ifTrue:<br>
[ deadCode ifFalse: [ fixup := self ensureNonMergeFixupAt: postBranchPC - initialPC ].<br>
self JumpZero: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.<br>
unforwardArg ifTrue: [ (deadCode or: [ unforwardRcvr ])<br>
ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: label ]<br>
ifFalse: [ objectRepresentation<br>
genEnsureOopInRegNotForwarded: argReg<br>
scratchReg: TempReg<br>
ifForwarder: label<br>
ifNotForwarder: fixup ] ].<br>
unforwardRcvr ifTrue: [ deadCode<br>
ifTrue: [objectRepresentation genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg jumpBackTo: label ]<br>
ifFalse: [objectRepresentation<br>
genEnsureOopInRegNotForwarded: rcvrReg<br>
scratchReg: TempReg<br>
ifForwarder: label<br>
ifNotForwarder: fixup ] ] ].<br>
branchDescriptor isBranchFalse ifTrue:<br>
[ fixup := self ensureNonMergeFixupAt: targetBytecodePC - initialPC.<br>
self JumpZero: (self ensureNonMergeFixupAt: postBranchPC - initialPC) asUnsignedInteger.<br>
unforwardArg ifTrue: [ unforwardRcvr<br>
ifFalse: [objectRepresentation<br>
genEnsureOopInRegNotForwarded: argReg<br>
scratchReg: TempReg<br>
ifForwarder: label<br>
ifNotForwarder: fixup ]<br>
ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: label ] ].<br>
unforwardRcvr ifTrue:<br>
[ objectRepresentation<br>
genEnsureOopInRegNotForwarded: rcvrReg<br>
scratchReg: TempReg<br>
ifForwarder: label<br>
ifNotForwarder: fixup ].<br>
"Not reached"].<br>
^0!<br>
<br>
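The structure of the generated code above is: compare first, and only when the comparison fails follow forwarding pointers and jump back to the comparison (the `Label`), so the common non-forwarded case pays nothing. A toy Python model of that compare / unforward / retry loop (illustrative only, not the VM's representation):

```python
# Strings stand in for oops; a dict stands in for forwarding pointers.
forwarding = {}   # old oop -> new oop

def resolve(oop):
    while oop in forwarding:         # genEnsureOopInRegNotForwarded:
        oop = forwarding[oop]
    return oop

def identity_check(a, b):
    while True:
        if a == b:                   # CmpR: argReg R: rcvrReg + JumpZero
            return True
        a2, b2 = resolve(a), resolve(b)
        if (a2, b2) == (a, b):       # neither operand was forwarded
            return False
        a, b = a2, b2
```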
Item was changed:<br>
----- Method: StackToRegisterMappingCogit>>genTrinaryInlinePrimitive: (in category 'inline primitive generators') -----<br>
genTrinaryInlinePrimitive: prim<br>
"Trinary inline primitives."<br>
"SistaV1: 248 11111000 iiiiiiii mjjjjjjj Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1 means inlined primitive, no hard return after execution.<br>
See EncoderForSistaV1's class comment and StackInterpreter>>#trinaryInlinePrimitive:"<br>
<br>
| ra1 ra2 rr adjust |<br>
"The store check requires rr to be ReceiverResultReg"<br>
+ self<br>
+ allocateRegForStackTopThreeEntriesInto: [:rTop :rNext :rThird | ra2 := rTop. ra1 := rNext. rr := rThird ]<br>
+ thirdIsReceiver: prim = 0.<br>
- self allocateThreeRegistersInto: [:rTop :rNext :rThird | ra2 := rTop. ra1 := rNext. rr := rThird ] thirdIsReceiver: prim = 0.<br>
self assert: (rr ~= ra1 and: [rr ~= ra2 and: [ra1 ~= ra2]]).<br>
self ssTop popToReg: ra2.<br>
self ssPop: 1.<br>
self ssTop popToReg: ra1.<br>
self ssPop: 1.<br>
self ssTop popToReg: rr.<br>
self ssPop: 1.<br>
objectRepresentation genConvertSmallIntegerToIntegerInReg: ra1.<br>
"Now: ra is the variable object, rr is long, TempReg holds the value to store."<br>
prim caseOf: {<br>
"0 - 1 pointerAt:put: and byteAt:Put:"<br>
[0] -> [ adjust := (objectMemory baseHeaderSize >> objectMemory shiftForWord) - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative"<br>
adjust ~= 0 ifTrue: [ self AddCq: adjust R: ra1. ].<br>
self MoveR: ra2 Xwr: ra1 R: rr.<br>
objectRepresentation genStoreCheckReceiverReg: rr valueReg: ra2 scratchReg: TempReg inFrame: true].<br>
[1] -> [ objectRepresentation genConvertSmallIntegerToIntegerInReg: ra2.<br>
adjust := objectMemory baseHeaderSize - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative"<br>
self AddCq: adjust R: ra1.<br>
self MoveR: ra2 Xbr: ra1 R: rr.<br>
objectRepresentation genConvertIntegerToSmallIntegerInReg: ra2. ]<br>
}<br>
otherwise: [^EncounteredUnknownBytecode].<br>
self ssPushRegister: ra2.<br>
^0!<br>
<br>
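Case [0] ends with `genStoreCheckReceiverReg:valueReg:scratchReg:inFrame:` because storing a reference to a young object into an old object must add the old object to the remembered set, so a scavenge can still find the reference. A toy generational model of that store check (names and structure are illustrative):

```python
# Minimal model of pointerAt:put: plus the generational store check.
remembered_set = set()

def pointer_at_put(receiver, slots, index, value,
                   receiver_is_old, value_is_young):
    slots[index - 1] = value              # at:put: is 1-relative
    if receiver_is_old and value_is_young:
        remembered_set.add(receiver)      # the store check
    return value
```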
Item was changed:<br>
----- Method: StackToRegisterMappingCogit>>genUnaryInlinePrimitive: (in category 'inline primitive generators') -----<br>
genUnaryInlinePrimitive: prim<br>
"Unary inline primitives."<br>
"SistaV1: 248 11111000 iiiiiiii mjjjjjjj Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1 means inlined primitive, no hard return after execution.<br>
See EncoderForSistaV1's class comment and StackInterpreter>>#unaryInlinePrimitive:"<br>
| rcvrReg resultReg |<br>
+ rcvrReg := self allocateRegForStackTopEntry.<br>
+ resultReg := self allocateRegNotConflictingWith: (self registerMaskFor: rcvrReg).<br>
- rcvrReg := self allocateOneRegister.<br>
- resultReg := self allocateRegisterNotConflictingWith: (self registerMaskFor: rcvrReg).<br>
self ssTop popToReg: rcvrReg.<br>
self ssPop: 1.<br>
prim<br>
caseOf: {<br>
"00 unchecked class"<br>
[1] -> "01 unchecked pointer numSlots"<br>
[objectRepresentation<br>
genGetNumSlotsOf: rcvrReg into: resultReg;<br>
genConvertIntegerToSmallIntegerInReg: resultReg].<br>
"02 unchecked pointer basicSize"<br>
[3] -> "03 unchecked byte numBytes"<br>
[objectRepresentation<br>
genGetNumBytesOf: rcvrReg into: resultReg;<br>
genConvertIntegerToSmallIntegerInReg: resultReg].<br>
"04 unchecked short16Type format numShorts"<br>
"05 unchecked word32Type format numWords"<br>
"06 unchecked doubleWord64Type format numDoubleWords"<br>
}<br>
otherwise:<br>
[^EncounteredUnknownBytecode].<br>
self ssPushRegister: resultReg.<br>
^0!<br>
<br>
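The unary cases fetch a raw machine integer (numSlots or numBytes) into `resultReg` and must re-tag it with `genConvertIntegerToSmallIntegerInReg:` before pushing. A one-line sketch of that conversion and its inverse, again assuming a 1-bit tag:

```python
TAG_BITS = 1
TAG = 1  # assumed 1-bit SmallInteger tag

def to_small_integer(raw):     # genConvertIntegerToSmallIntegerInReg:
    return (raw << TAG_BITS) | TAG

def from_small_integer(oop):   # genConvertSmallIntegerToIntegerInReg:
    return oop >> TAG_BITS
```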
Item was changed:<br>
----- Method: StackToRegisterMappingCogit>>genVanillaSpecialSelectorEqualsEquals (in category 'bytecode generators') -----<br>
genVanillaSpecialSelectorEqualsEquals<br>
| nextPC postBranchPC targetBytecodePC primDescriptor branchDescriptor nExts<br>
jumpEqual jumpNotEqual rcvrReg argReg argIsConstant rcvrIsConstant |<br>
<var: #jumpEqual type: #'AbstractInstruction *'><br>
<var: #jumpNotEqual type: #'AbstractInstruction *'><br>
<var: #primDescriptor type: #'BytecodeDescriptor *'><br>
<var: #branchDescriptor type: #'BytecodeDescriptor *'><br>
primDescriptor := self generatorAt: byte0.<br>
<br>
nextPC := bytecodePC + primDescriptor numBytes.<br>
nExts := 0.<br>
[branchDescriptor := self generatorAt: (objectMemory fetchByte: nextPC ofObject: methodObj) + bytecodeSetOffset.<br>
branchDescriptor isExtension] whileTrue:<br>
[nExts := nExts + 1.<br>
nextPC := nextPC + branchDescriptor numBytes].<br>
"If branching the stack must be flushed for the merge"<br>
(branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifTrue:<br>
[self ssFlushTo: simStackPtr - 2].<br>
<br>
"Don't use ReceiverResultReg for receiver to keep ReceiverResultReg live.<br>
Optimize e.g. rcvr == nil, the common case for ifNil: et al."<br>
<br>
argIsConstant := self ssTop type = SSConstant.<br>
rcvrIsConstant := argIsConstant and: [ (self ssValue:1) type = SSConstant ].<br>
<br>
argIsConstant<br>
ifFalse:<br>
[rcvrIsConstant<br>
ifFalse:<br>
+ [self allocateRegForStackTopTwoEntriesInto: [:rTop :rNext| argReg := rTop. rcvrReg := rNext].<br>
- [self allocateTwoRegistersInto: [:rTop :rNext| argReg := rTop. rcvrReg := rNext].<br>
self ssTop popToReg: argReg.<br>
(self ssValue:1) popToReg: rcvrReg]<br>
ifTrue:<br>
+ [argReg := self allocateRegForStackTopEntry.<br>
- [argReg := self allocateOneRegister.<br>
self ssTop popToReg: argReg]]<br>
ifTrue:<br>
[self assert: rcvrIsConstant not.<br>
+ rcvrReg := self allocateRegForStackEntryAt: 1.<br>
- rcvrReg := self allocateOneRegister.<br>
(self ssValue:1) popToReg: rcvrReg].<br>
<br>
argIsConstant<br>
ifTrue: [ self genCompConstant: self ssTop constant R: rcvrReg ]<br>
ifFalse: [ rcvrIsConstant<br>
ifTrue: [ self genCompConstant: (self ssValue: 1) constant R: argReg ]<br>
ifFalse: [ self CmpR: argReg R: rcvrReg ] ].<br>
<br>
self ssPop: 2.<br>
<br>
"If not followed by a branch, resolve to true or false."<br>
(branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:<br>
[jumpNotEqual := self JumpNonZero: 0.<br>
self genMoveTrueR: rcvrReg.<br>
jumpEqual := self Jump: 0.<br>
jumpNotEqual jmpTarget: (self genMoveFalseR: rcvrReg).<br>
jumpEqual jmpTarget: self Label.<br>
self ssPushRegister: rcvrReg.<br>
^0].<br>
<br>
"Further since there is a following conditional jump bytecode, define<br>
non-merge fixups and leave the cond bytecode to set the mergeness."<br>
targetBytecodePC := nextPC<br>
+ branchDescriptor numBytes<br>
+ (self spanFor: branchDescriptor at: nextPC exts: nExts in: methodObj).<br>
postBranchPC := nextPC + branchDescriptor numBytes.<br>
(self fixupAt: nextPC - initialPC) targetInstruction = 0<br>
ifTrue: "The next instruction is dead. we can skip it."<br>
[deadCode := true.<br>
self ensureFixupAt: targetBytecodePC - initialPC.<br>
self ensureFixupAt: postBranchPC - initialPC]<br>
ifFalse:<br>
[self ssPushConstant: objectMemory trueObject]. "dummy value"<br>
self gen: (branchDescriptor isBranchTrue ifTrue: [JumpZero] ifFalse: [JumpNonZero])<br>
operand: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.<br>
deadCode ifFalse: [self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC)].<br>
^0!<br>
<br>
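The PC bookkeeping in both `==` generators follows the same pattern: skip extension bytecodes, then `postBranchPC = nextPC + numBytes` and `targetBytecodePC = postBranchPC + span`. A sketch of that scan on a toy bytecode stream (the stream format is invented for illustration):

```python
# Each toy bytecode is a (name, span) pair of size 1; 'ext' entries model
# extension bytecodes that prefix the branch.
def find_branch(bytecodes, pc):
    n_exts = 0
    while bytecodes[pc][0] == 'ext':   # branchDescriptor isExtension
        n_exts += 1
        pc += 1
    op, span = bytecodes[pc]           # span ~ spanFor:at:exts:in:
    post_branch_pc = pc + 1            # nextPC + branchDescriptor numBytes
    target_pc = post_branch_pc + span
    return op, target_pc, post_branch_pc
```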
Item was removed:<br>
- ----- Method: StackToRegisterMappingCogit>>ssAllocatePreferredReg: (in category 'simulation stack') -----<br>
- ssAllocatePreferredReg: preferredReg<br>
- | preferredMask lastPreferred liveRegs |<br>
- lastPreferred := -1.<br>
- "compute live regs while noting the last occurrence of preferredReg.<br>
- If there are none free we must spill from simSpillBase to last occurrence."<br>
- preferredMask := (self registerMaskFor: preferredReg).<br>
- liveRegs := self registerMaskFor: TempReg and: FPReg and: SPReg.<br>
- (simSpillBase max: 0) to: simStackPtr do:<br>
- [:i|<br>
- liveRegs := liveRegs bitOr: (self simStackAt: i) registerMask.<br>
- (liveRegs bitAnd: preferredMask) ~= 0 ifTrue:<br>
- [lastPreferred := i]].<br>
- "If preferredReg is not live we can allocate it."<br>
- (self register: preferredReg isInMask: liveRegs) ifFalse:<br>
- [^preferredReg].<br>
- "If any other is not live we can allocate it."<br>
- GPRegMin to: GPRegMax do:<br>
- [:reg|<br>
- (self register: reg isInMask: liveRegs) ifFalse:<br>
- [^reg]].<br>
- "All live, must spill"<br>
- self ssFlushTo: lastPreferred.<br>
- self assert: (self liveRegisters bitAnd: preferredMask) = 0.<br>
- ^preferredReg!<br>
<br>
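For reference, the removed `ssAllocatePreferredReg:` logic can be sketched as: build a live-register mask from the sim stack, hand out the preferred register if free, else any free GP register, else report how far to spill so the preferred register becomes free. A hedged Python model (register numbering and the per-entry masks are illustrative):

```python
# entry_masks: one register bitmask per sim-stack entry, oldest first.
# reserved_mask models TempReg/FPReg/SPReg, which are never allocatable.
def allocate_preferred(preferred, entry_masks, gp_regs, reserved_mask):
    live = reserved_mask
    last_preferred = -1
    for i, mask in enumerate(entry_masks):
        live |= mask
        if live & (1 << preferred):
            last_preferred = i            # last entry pinning preferred
    if not live & (1 << preferred):
        return preferred, None            # preferred register is free
    for reg in gp_regs:
        if not live & (1 << reg):
            return reg, None              # any other free register
    return preferred, last_preferred      # all live: ssFlushTo: this index
```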
</blockquote></div><br></div>