[Vm-dev] VM Maker: VMMaker.oscog-cb.1257.mcz

commits at source.squeak.org
Sat Apr 25 14:24:30 UTC 2015


Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-cb.1257.mcz

==================== Summary ====================

Name: VMMaker.oscog-cb.1257
Author: cb
Time: 25 April 2015, 4:23:00.043 pm
UUID: 2ad6cc71-391b-4b83-b2b6-ccb90ceb6d00
Ancestors: VMMaker.oscog-cb.1256

Extracted the count logic in SistaCogit into a single method, called by the three branch-generating methods, to avoid duplication.

I believe there's a bug in genJumpIf:to:. If counterReg is caller-saved, then I think it is not saved correctly. That was probably not an issue before I introduced the better register allocation, because Eliot most probably carefully picked a callee-saved register on x86 for counterReg.

Added a few comments and an indentation fix in StackToRegisterMappingCogit.
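
For context, a minimal sketch of the counter arithmetic the extracted logic emits, using the
32-bit layout documented in genJumpIf:to: below (upper 16 bits count executions, lower 16 bits
count untaken executions); the initial counter value here is made up for illustration:

	| counter |
	counter := 16r0002FFFF.		"hypothetical: 2 executions left in the upper half"
	counter := counter - 16r10000.	"branch executed (SubCq: 16r10000 R: counterReg)"
	counter := counter - 1.		"branch fell through untaken (SubCq: 1 R: counterReg)"
	counter := counter - 16r10000.	"executed again; the upper half is now zero"
	counter < 16r10000		"true: the next SubCq: 16r10000 would borrow, i.e. JumpCarry: trips"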

=============== Diff against VMMaker.oscog-cb.1256 ===============

Item was added:
+ ----- Method: SistaStackToRegisterMappingCogit>>genExecutionCountLogicInto:counterRegNotConflictingWith: (in category 'bytecode generator support') -----
+ genExecutionCountLogicInto: trinaryBlock counterRegNotConflictingWith: regMask
+ 	<var: #countTripped type: #'AbstractInstruction *'>
+ 	<inline: true>
+ 	| counterReg counterAddress countTripped |
+ 	counterReg := self allocateRegNotConflictingWith: regMask.
+ 	counterAddress := counters + ((self sizeof: #sqInt) * counterIndex).
+ 	self flag: 'will need to use MoveAw32:R: if 64 bits'.
+ 	self assert: objectMemory wordSize = CounterBytes.
+ 	self MoveAw: counterAddress R: counterReg.
+ 	self SubCq: 16r10000 R: counterReg. "Count executed"
+ 	"If counter trips simply abort the comparison continuing to the following
+ 	 branch *without* writing back.  A double decrement will not trip the second time."
+ 	countTripped := self JumpCarry: 0.
+ 	self MoveR: counterReg Aw: counterAddress. "write back"
+ 	trinaryBlock value: counterReg value: counterAddress value: countTripped!
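
A note on the "*without* writing back" behaviour above: when the counter is borrowed by an
inlined comparison (genSpecialSelectorComparison and genSpecialSelectorEqualsEqualsWithForwarders
below), writing back the tripping decrement would wrap the upper half to 16rFFFF, so the
decrement done later by the conditional branch itself would no longer trip and enter the abort.
A made-up value to illustrate the wrap:

	| counter |
	counter := 16r0000ABCD.					"upper half already zero: the next execution trips"
	counter := (counter - 16r10000) bitAnd: 16rFFFFFFFF.	"what a 32-bit write-back would store: 16rFFFFABCD"
	counter < 16r10000					"false: a further SubCq: 16r10000 would not borrow again"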

Item was added:
+ ----- Method: SistaStackToRegisterMappingCogit>>genFallsThroughCountLogicCounterReg:counterAddress: (in category 'bytecode generator support') -----
+ genFallsThroughCountLogicCounterReg: counterReg counterAddress: counterAddress
+ 	<inline: true>
+ 	"Gen this when the branch has not been taken and forwarders have been followed."
+ 	self SubCq: 1 R: counterReg. "Count untaken"
+ 	self MoveR: counterReg Aw: counterAddress. "write back"!

Item was changed:
  ----- Method: SistaStackToRegisterMappingCogit>>genJumpIf:to: (in category 'bytecode generator support') -----
  genJumpIf: boolean to: targetBytecodePC
  	"The heart of performance counting in Sista.  Conditional branches are 6 times less
  	 frequent than sends and can provide basic block frequencies (send counters can't).
  	 Each conditional has a 32-bit counter split into an upper 16 bits counting executions
  	 and a lower half counting untaken executions of the branch.  Executing the branch
  	 decrements the upper half, tripping if the count goes negative.  Not taking the branch
  	 decrements the lower half.  N.B. We *do not* eliminate dead branches (true ifTrue:/true ifFalse:)
  	 so that scanning for send and branch data is simplified and that branch data is correct."
  	<inline: false>
  	| desc ok counterAddress countTripped retry counterReg |
  	<var: #ok type: #'AbstractInstruction *'>
  	<var: #desc type: #'CogSimStackEntry *'>
  	<var: #retry type: #'AbstractInstruction *'>
  	<var: #countTripped type: #'AbstractInstruction *'>
  
  	(coInterpreter isOptimizedMethod: methodObj) ifTrue: [ ^ super genJumpIf: boolean to: targetBytecodePC ].
  
  	self ssFlushTo: simStackPtr - 1.
  	desc := self ssTop.
  	self ssPop: 1.
  	desc popToReg: TempReg.
+ 	
+ 	self 
+ 		genExecutionCountLogicInto: [ :cReg :cAddress :countTripBranch | 
+ 			counterReg := cReg. 
+ 			counterAddress := cAddress. 
+ 			countTripped := countTripBranch ] 
+ 		counterRegNotConflictingWith: 0.
- 
- 	counterReg := self allocateAnyReg.
- 	counterAddress := counters + ((self sizeof: #sqInt) * counterIndex).
  	counterIndex := counterIndex + 1.
+ 	
+ 	retry := self Label.
- 	self flag: 'will need to use MoveAw32:R: if 64 bits'.
- 	self assert: objectMemory wordSize = CounterBytes.
- 	retry := self MoveAw: counterAddress R: counterReg.
- 	self SubCq: 16r10000 R: counterReg. "Count executed"
- 	"Don't write back if we trip; avoids wrapping count back to initial value, and if we trip we don't execute."
- 	countTripped := self JumpCarry: 0.
- 	self MoveR: counterReg Aw: counterAddress. "write back"
  
  	"Cunning trick by LPD.  If true and false are contiguous subtract the smaller.
  	 Correct result is either 0 or the distance between them.  If result is not 0 or
  	 their distance send mustBeBoolean."
  	self assert: (objectMemory objectAfter: objectMemory falseObject) = objectMemory trueObject.
  	self annotate: (self SubCw: boolean R: TempReg) objRef: boolean.
  	self JumpZero: (self ensureFixupAt: targetBytecodePC - initialPC).
  
+ 	self genFallsThroughCountLogicCounterReg: counterReg counterAddress: counterAddress.
- 	self SubCq: 1 R: counterReg. "Count untaken"
- 	self MoveR: counterReg Aw: counterAddress. "write back"
  
  	self CmpCq: (boolean == objectMemory falseObject
  					ifTrue: [objectMemory trueObject - objectMemory falseObject]
  					ifFalse: [objectMemory falseObject - objectMemory trueObject])
  		R: TempReg.
  	ok := self JumpZero: 0.
  	self MoveCq: 0 R: counterReg. "if counterReg is 0 this is a mustBeBoolean, not a counter trip."
  	countTripped jmpTarget:
  		(self CallRT: (boolean == objectMemory falseObject
  						ifTrue: [ceSendMustBeBooleanAddFalseTrampoline]
  						ifFalse: [ceSendMustBeBooleanAddTrueTrampoline])).
  	"If we're in an image which hasn't got the Sista code loaded then the ceCounterTripped:
  	 trampoline will return directly to machine code, returning the boolean.  So the code should
  	 jump back to the retry point. The trampoline makes sure that TempReg has been reloaded."
  	self annotateBytecode: self Label.
  	self Jump: retry.
  	ok jmpTarget: self Label.
  	^0!
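
A worked example of the "cunning trick by LPD" above, with made-up oop values; the only
assumption is that trueObject directly follows falseObject in memory, which the assert in the
method checks. Shown for a jump-if-false bytecode, i.e. boolean = falseObject:

	| falseOop trueOop tempReg takesJump isOtherBoolean |
	falseOop := 16r1000.
	trueOop := 16r1010.					"hypothetical contiguous oops"
	tempReg := trueOop.					"the value being tested, popped into TempReg"
	tempReg := tempReg - falseOop.				"SubCw: boolean R: TempReg"
	takesJump := tempReg = 0.				"false; zero would mean the value is false, branch taken"
	isOtherBoolean := tempReg = (trueOop - falseOop).	"true: a genuine true, so fall through"
	"any other value fails both tests and reaches the mustBeBoolean trampoline"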

Item was changed:
  ----- Method: SistaStackToRegisterMappingCogit>>genSpecialSelectorComparison (in category 'bytecode generators') -----
  genSpecialSelectorComparison
  	"Override to count inlined branches if followed by a conditional branch.
  	 We borrow the following conditional branch's counter and when about to
  	 inline the comparison we decrement the counter (without writing it back)
  	 and if it trips simply abort the inlining, falling back to the normal send which
  	 will then continue to the conditional branch which will trip and enter the abort."
  	| nextPC postBranchPC targetBytecodePC primDescriptor branchDescriptor
  	  rcvrIsInt argIsInt rcvrInt argInt result jumpNotSmallInts inlineCAB annotateInst
  	  counterAddress countTripped counterReg |
  	<var: #countTripped type: #'AbstractInstruction *'>
  	<var: #primDescriptor type: #'BytecodeDescriptor *'>
  	<var: #jumpNotSmallInts type: #'AbstractInstruction *'>
  	<var: #branchDescriptor type: #'BytecodeDescriptor *'>
  
  	(coInterpreter isOptimizedMethod: methodObj) ifTrue: [ ^ super genSpecialSelectorComparison ].
  
  	self ssFlushTo: simStackPtr - 2.
  	primDescriptor := self generatorAt: byte0.
  	argIsInt := self ssTop type = SSConstant
  				 and: [objectMemory isIntegerObject: (argInt := self ssTop constant)].
  	rcvrIsInt := (self ssValue: 1) type = SSConstant
  				 and: [objectMemory isIntegerObject: (rcvrInt := (self ssValue: 1) constant)].
  
  	"short-cut the jump if operands are SmallInteger constants."
  	(argIsInt and: [rcvrIsInt]) ifTrue:
  		[self cCode: '' inSmalltalk: "In Simulator ints are unsigned..."
  				[rcvrInt := objectMemory integerValueOf: rcvrInt.
  				argInt := objectMemory integerValueOf: argInt].
  		 primDescriptor opcode caseOf: {
  			[JumpLess]				-> [result := rcvrInt < argInt].
  			[JumpLessOrEqual]		-> [result := rcvrInt <= argInt].
  			[JumpGreater]			-> [result := rcvrInt > argInt].
  			[JumpGreaterOrEqual]	-> [result := rcvrInt >= argInt].
  			[JumpZero]				-> [result := rcvrInt = argInt].
  			[JumpNonZero]			-> [result := rcvrInt ~= argInt] }.
  		 "Must enter any annotatedConstants into the map"
  		 self annotateBytecodeIfAnnotated: (self ssValue: 1).
  		 self annotateBytecodeIfAnnotated: self ssTop.
  		 "Must annotate the bytecode for correct pc mapping."
  		 self ssPop: 2.
  		 ^self ssPushAnnotatedConstant: (result
  											ifTrue: [objectMemory trueObject]
  											ifFalse: [objectMemory falseObject])].
  
  	self extractMaybeBranchDescriptorInto: [ :descr :next :postBranch :target | 
  		branchDescriptor := descr. nextPC := next. postBranchPC := postBranch. targetBytecodePC := target ].
  	
  	"Only interested in inlining if followed by a conditional branch."
  	inlineCAB := branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse].
  	"Further, only interested in inlining = and ~= if there's a SmallInteger constant involved.
  	 The relational operators successfully statically predict SmallIntegers; the equality operators do not."
  	(inlineCAB and: [primDescriptor opcode = JumpZero or: [primDescriptor opcode = JumpNonZero]]) ifTrue:
  		[inlineCAB := argIsInt or: [rcvrIsInt]].
  	inlineCAB ifFalse:
  		[^self genSpecialSelectorSend].
  
  	argIsInt
  		ifTrue:
  			[(self ssValue: 1) popToReg: ReceiverResultReg.
  			 annotateInst := self ssTop annotateUse.
  			 self ssPop: 2.
  			 self MoveR: ReceiverResultReg R: TempReg]
  		ifFalse:
  			[self marshallSendArguments: 1.
  			 self MoveR: Arg0Reg R: TempReg.
  			 rcvrIsInt ifFalse:
  				[objectRepresentation isSmallIntegerTagNonZero
  					ifTrue: [self AndR: ReceiverResultReg R: TempReg]
  					ifFalse: [self OrR: ReceiverResultReg R: TempReg]]].
  	jumpNotSmallInts := objectRepresentation genJumpNotSmallIntegerInScratchReg: TempReg.
  
+ 	self 
+ 		genExecutionCountLogicInto: [ :cReg :cAddress :countTripBranch | 
+ 			counterReg := cReg. 
+ 			counterAddress := cAddress. 
+ 			countTripped := countTripBranch ] 
+ 		counterRegNotConflictingWith: (self registerMaskFor: ReceiverResultReg and: Arg0Reg).
- 	counterReg := self allocateRegNotConflictingWith: (self registerMaskFor: ReceiverResultReg and: Arg0Reg). "Use this as the count reg, can't conflict with the registers for the arg and the receiver"
- 	counterAddress := counters + ((self sizeof: #sqInt) * counterIndex).
- 	self flag: 'will need to use MoveAw32:R: if 64 bits'.
- 	self assert: objectMemory wordSize = CounterBytes.
- 	self MoveAw: counterAddress R: counterReg.
- 	self SubCq: 16r10000 R: counterReg. "Count executed"
- 	"If counter trips simply abort the inlined comparison and send continuing to the following
- 	 branch *without* writing back.  A double decrement will not trip the second time."
- 	countTripped := self JumpCarry: 0.
- 	self MoveR: counterReg Aw: counterAddress. "write back"
  
  	argIsInt
  		ifTrue: [annotateInst
  					ifTrue: [self annotateBytecode: (self CmpCq: argInt R: ReceiverResultReg)]
  					ifFalse: [self CmpCq: argInt R: ReceiverResultReg]]
  		ifFalse: [self CmpR: Arg0Reg R: ReceiverResultReg].
  	"Cmp is weird/backwards so invert the comparison.  Further since there is a following conditional
  	 jump bytecode define non-merge fixups and leave the cond bytecode to set the mergeness."
  	self gen: (branchDescriptor isBranchTrue
  				ifTrue: [primDescriptor opcode]
  				ifFalse: [self inverseBranchFor: primDescriptor opcode])
  		operand: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.
+ 		
+ 	self genFallsThroughCountLogicCounterReg: counterReg counterAddress: counterAddress.
+ 	
- 	self SubCq: 1 R: counterReg. "Count untaken"
- 	self MoveR: counterReg Aw: counterAddress. "write back"
  	self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC).
  	countTripped jmpTarget: (jumpNotSmallInts jmpTarget: self Label).
+ 	
  	argIsInt ifTrue:
  		[self MoveCq: argInt R: Arg0Reg].
  	^self genMarshalledSend: (coInterpreter specialSelector: byte0 - self firstSpecialSelectorBytecodeOffset)
  		numArgs: 1
  		sendTable: ordinarySendTrampolines!
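
A small sketch of the register-mask argument introduced above, assuming the usual
one-bit-per-register encoding for registerMaskFor: (i.e. 1 << registerNumber); the register
numbers are made up. The counter register is allocated outside the mask so it cannot clobber
the receiver or the argument:

	| receiverResultReg arg0Reg regMask available |
	receiverResultReg := 2.
	arg0Reg := 3.							"hypothetical register numbers"
	regMask := (1 << receiverResultReg) bitOr: (1 << arg0Reg).	"assumed result of registerMaskFor:and:"
	available := (0 to: 7) reject: [:r | (regMask bitAnd: (1 << r)) ~= 0].
	available	"registers 0, 1, 4, 5, 6 and 7: any of these may hold the counter"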

Item was changed:
  ----- Method: SistaStackToRegisterMappingCogit>>genSpecialSelectorEqualsEqualsWithForwarders (in category 'bytecode generators') -----
  genSpecialSelectorEqualsEqualsWithForwarders
  	"Override to count inlined branches if followed by a conditional branch.
  	 We borrow the following conditional branch's counter and when about to
  	 inline the comparison we decrement the counter (without writing it back)
  	 and if it trips simply abort the inlining, falling back to the normal send which
  	 will then continue to the conditional branch which will trip and enter the abort."
  	| nextPC postBranchPC targetBytecodePC branchDescriptor label counterReg fixup
  	  counterAddress countTripped unforwardArg unforwardRcvr argReg rcvrReg regMask |
  	<var: #fixup type: #'BytecodeFixup *'>
  	<var: #countTripped type: #'AbstractInstruction *'>
  	<var: #primDescriptor type: #'BytecodeDescriptor *'>
  	<var: #branchDescriptor type: #'BytecodeDescriptor *'>
  
  	((coInterpreter isOptimizedMethod: methodObj) or: [needsFrame not]) ifTrue: [ ^ super genSpecialSelectorEqualsEquals ].
  
  	regMask := 0.
  	
  	self extractMaybeBranchDescriptorInto: [ :descr :next :postBranch :target | 
  		branchDescriptor := descr. nextPC := next. postBranchPC := postBranch. targetBytecodePC := target ].
  	
  	unforwardRcvr := (objectRepresentation isUnannotatableConstant: (self ssValue: 1)) not.
  	unforwardArg := (objectRepresentation isUnannotatableConstant: self ssTop) not.
  	
  	"If an operand is an annotatable constant, it may be forwarded, so we need to store it into a
  	register so the forwarder check can jump back to the comparison after unforwarding the constant.
  	However, if one of the operands is an unannotatable constant, we do not allocate a register for it
  	(machine code will use operations on constants)."
  	self 
  		allocateEqualsEqualsRegistersArgNeedsReg: unforwardArg 
  		rcvrNeedsReg: unforwardRcvr 
  		into: [ :rcvr :arg | rcvrReg:= rcvr. argReg := arg ].
  		
  	argReg ifNotNil: [ regMask := self registerMaskFor: argReg ].
  	rcvrReg ifNotNil: [ regMask := regMask bitOr: (self registerMaskFor: rcvrReg) ].
  	
  	"Only interested in inlining if followed by a conditional branch."
  	(branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:
  		[^ self genEqualsEqualsNoBranchArgIsConstant: unforwardArg not rcvrIsConstant: unforwardRcvr not argReg: argReg rcvrReg: rcvrReg].
- 
- 	counterReg := self allocateRegNotConflictingWith: regMask. "Use this as the count reg, can't conflict with the registers for the arg and the receiver of #==."
- 	counterAddress := counters + ((self sizeof: #sqInt) * counterIndex).
- 	self flag: 'will need to use MoveAw32:R: if 64 bits'.
- 	self assert: objectMemory wordSize = CounterBytes.
- 	self MoveAw: counterAddress R: counterReg.
- 	self SubCq: 16r10000 R: counterReg. "Count executed"
- 	"If counter trips simply abort the inlined comparison and send continuing to the following
- 	 branch *without* writing back.  A double decrement will not trip the second time."
- 	countTripped := self JumpCarry: 0.
- 	self MoveR: counterReg Aw: counterAddress. "write back"
  	
+ 	self 
+ 		genExecutionCountLogicInto: [ :cReg :cAddress :countTripBranch | 
+ 			counterReg := cReg. 
+ 			counterAddress := cAddress. 
+ 			countTripped := countTripBranch ] 
+ 		counterRegNotConflictingWith: regMask.
+ 	
  	self assert: (unforwardArg or: [ unforwardRcvr ]).
  	
  	"If branching the stack must be flushed for the merge"
  	self ssFlushTo: simStackPtr - 2.
  	
  	label := self Label.
  	
  	self genEqualsEqualsComparisonArgIsConstant: unforwardArg not rcvrIsConstant: unforwardRcvr not argReg: argReg rcvrReg: rcvrReg.
  	
  	self ssPop: 2. "pop by 2 temporarily  for the fixups"
  	branchDescriptor isBranchTrue 
  		ifTrue: 
  			[ fixup := self ensureNonMergeFixupAt: postBranchPC - initialPC.
  			self JumpZero: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger. ]
  		ifFalse: 
  			[ fixup := self ensureNonMergeFixupAt: targetBytecodePC - initialPC.
  			self JumpZero: (self ensureNonMergeFixupAt: postBranchPC - initialPC) asUnsignedInteger. ].
  	unforwardArg ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: label  ].
  	unforwardRcvr ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg jumpBackTo: label ].
  	self ssPop: -2. 
  	
+ 	self genFallsThroughCountLogicCounterReg: counterReg counterAddress: counterAddress.
- 	"the jump has not been taken and forwarders have been followed."
- 	self SubCq: 1 R: counterReg. "Count untaken"
- 	self MoveR: counterReg Aw: counterAddress. "write back"
  	self Jump: fixup.
- 	
  	countTripped jmpTarget: self Label.
  	
  	"inlined version of #== ignoring the branchDescriptor if the counter trips to have normal state for the optimizer"
  	self genEqualsEqualsNoBranchArgIsConstant: unforwardArg not rcvrIsConstant: unforwardRcvr not argReg: argReg rcvrReg: rcvrReg.
  	^ 0!
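
Finally, a toy model (not VM code) of why the #== comparison above sits behind a retry label:
the comparison must be made on unforwarded objects, so when a forwarder is detected it is
followed and the comparison redone. A Dictionary stands in for Spur forwarding pointers here,
and a single forwarding hop is assumed:

	| forwardingTable follow rcvr arg |
	forwardingTable := Dictionary new.
	forwardingTable at: #oldCopy put: #newCopy.	"pretend #oldCopy was become:-ed into #newCopy"
	follow := [:oop | forwardingTable at: oop ifAbsent: [oop]].
	rcvr := #newCopy.
	arg := #oldCopy.				"a raw identity comparison here would wrongly answer false"
	(follow value: rcvr) == (follow value: arg)	"true once the forwarder has been followed"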

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genSpecialSelectorEqualsEqualsWithForwarders (in category 'bytecode generators') -----
  genSpecialSelectorEqualsEqualsWithForwarders
  	| nextPC branchDescriptor unforwardRcvr argReg targetBytecodePC
  	unforwardArg  rcvrReg postBranchPC label fixup |
  	<var: #fixup type: #'BytecodeFixup *'>
  	<var: #primDescriptor type: #'BytecodeDescriptor *'>
  	<var: #branchDescriptor type: #'BytecodeDescriptor *'>
  	
  	self extractMaybeBranchDescriptorInto: [ :descr :next :postBranch :target | 
  		branchDescriptor := descr. nextPC := next. postBranchPC := postBranch. targetBytecodePC := target ].
  
  	"If an operand is an annotatable constant, it may be forwarded, so we need to store it into a
  	register so the forwarder check can jump back to the comparison after unforwarding the constant.
  	However, if one of the operands is an unannotatable constant, we do not allocate a register for it
  	(machine code will use operations on constants) and we do not generate forwarder checks."
  	unforwardRcvr := (objectRepresentation isUnannotatableConstant: (self ssValue: 1)) not.
  	unforwardArg := (objectRepresentation isUnannotatableConstant: self ssTop) not.
  
  	self 
  		allocateEqualsEqualsRegistersArgNeedsReg: unforwardArg 
  		rcvrNeedsReg: unforwardRcvr 
  		into: [ :rcvr :arg | rcvrReg:= rcvr. argReg := arg ].
  
  	"If not followed by a branch, resolve to true or false."
  	(branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:
  		[^ self genEqualsEqualsNoBranchArgIsConstant: unforwardArg not rcvrIsConstant: unforwardRcvr not argReg: argReg rcvrReg: rcvrReg].
  	
  	"If branching the stack must be flushed for the merge"
  	self ssFlushTo: simStackPtr - 2.
  	
  	label := self Label.
  	self genEqualsEqualsComparisonArgIsConstant: unforwardArg not rcvrIsConstant: unforwardRcvr not argReg: argReg rcvrReg: rcvrReg.
  	self ssPop: 2.
  
  	"Further since there is a following conditional jump bytecode, define
  	 non-merge fixups and leave the cond bytecode to set the mergeness."
  	(self fixupAt: nextPC - initialPC) targetInstruction = 0
  		ifTrue: "The next instruction is dead.  we can skip it."
  			[deadCode := true.
  		 	 self ensureFixupAt: targetBytecodePC - initialPC.
  			 self ensureFixupAt: postBranchPC - initialPC]
  		ifFalse:
  			[self ssPushConstant: objectMemory trueObject]. "dummy value"
  
  	self assert: (unforwardArg or: [ unforwardRcvr ]).
  	branchDescriptor isBranchTrue 
  		ifTrue: 
  			[ fixup := self ensureNonMergeFixupAt: postBranchPC - initialPC.
  			self JumpZero:  (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger ]
  		ifFalse: "branchDescriptor is branchFalse"
  			[ fixup := self ensureNonMergeFixupAt: targetBytecodePC - initialPC.
  			self JumpZero: (self ensureNonMergeFixupAt: postBranchPC - initialPC) asUnsignedInteger ].
+ 		
  	"The forwarders checks need to jump back to the comparison (label) if a forwarder is found, else 
  	jump forward either to the next forwarder check or to the postBranch or branch target (fixup)."
  	unforwardArg ifTrue: 
  		[ unforwardRcvr
  			ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: label ]
  			ifFalse: [ objectRepresentation 
  				genEnsureOopInRegNotForwarded: argReg 
  				scratchReg: TempReg 
  				ifForwarder: label
  				ifNotForwarder: fixup ] ].
  	unforwardRcvr ifTrue: 
  		[ objectRepresentation 
  			genEnsureOopInRegNotForwarded: rcvrReg 
  			scratchReg: TempReg 
  			ifForwarder: label
  			ifNotForwarder: fixup ].
+ 		
+ 	"Not reached; execution flow has jumped to the fixup"
+ 	
  	^0!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genVanillaSpecialSelectorEqualsEquals (in category 'bytecode generators') -----
  genVanillaSpecialSelectorEqualsEquals
  	| nextPC postBranchPC targetBytecodePC branchDescriptor
  	  rcvrReg argReg argIsConstant rcvrIsConstant  |
  	<var: #primDescriptor type: #'BytecodeDescriptor *'>
  	<var: #branchDescriptor type: #'BytecodeDescriptor *'>
  	
  	self extractMaybeBranchDescriptorInto: [ :descr :next :postBranch :target | 
  		branchDescriptor := descr. nextPC := next. postBranchPC := postBranch. targetBytecodePC := target ].
  	
  	argIsConstant := self ssTop type = SSConstant.
  	"We can't treat both as constants because we do not have instructions manipulating two constants.
  	If both operands are nevertheless constants (which can happen with annotatable constants: they can
  	be moved in memory by become: and therefore #== can't be resolved at compilation time), we still
  	write the rcvr into a register as if it were not a constant. It's uncommon anyway."
  	rcvrIsConstant := argIsConstant not and: [(self ssValue: 1) type = SSConstant]. 
  	
  	self 
  		allocateEqualsEqualsRegistersArgNeedsReg: argIsConstant not 
  		rcvrNeedsReg: rcvrIsConstant not 
  		into: [ :rcvr :arg | rcvrReg:= rcvr. argReg := arg ].
  	
  	"If not followed by a branch, resolve to true or false."
  	(branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:
  		[ ^ self genEqualsEqualsNoBranchArgIsConstant: argIsConstant rcvrIsConstant: rcvrIsConstant argReg: argReg rcvrReg: rcvrReg].
  	
  	"If branching the stack must be flushed for the merge"
  	self ssFlushTo: simStackPtr - 2.
  	
  	self genEqualsEqualsComparisonArgIsConstant: argIsConstant rcvrIsConstant: rcvrIsConstant argReg: argReg rcvrReg: rcvrReg.
  	self ssPop: 2.
  
  	"Further since there is a following conditional jump bytecode, define
  	 non-merge fixups and leave the cond bytecode to set the mergeness."
  	(self fixupAt: nextPC - initialPC) targetInstruction = 0
  		ifTrue: "The next instruction is dead.  we can skip it."
  			[deadCode := true.
  		 	 self ensureFixupAt: targetBytecodePC - initialPC.
  			 self ensureFixupAt: postBranchPC - initialPC]
  		ifFalse:
  			[self ssPushConstant: objectMemory trueObject]. "dummy value"
+ 		
  	self gen: (branchDescriptor isBranchTrue ifTrue: [JumpZero] ifFalse: [JumpNonZero])
  		operand: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.
+ 
+ 	"If the branch is dead we can simply fall through to postBranchPC (there is only a nop in between);
+ 	otherwise we need to jump over the code of the branch."
+ 	deadCode ifFalse: [self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC)].
- 		deadCode ifFalse: [self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC)].
  	^0!


