[Vm-dev] VM Maker: VMMaker.oscog-eem.755.mcz

commits at source.squeak.org
Wed Jun 4 23:59:45 UTC 2014


Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.755.mcz

==================== Summary ====================

Name: VMMaker.oscog-eem.755
Author: eem
Time: 4 June 2014, 4:56:52.333 pm
UUID: 3557bde2-30d3-482c-89bf-4b9907ffa49b
Ancestors: VMMaker.oscog-eem.754

Spur:
Implement unforwarding in inlined machine code #==;
hence rewrite the /horrible/ StackToRegisterMappingCogit
>>genSpecialSelectorEqualsEquals.

Unforward method literals in cog:selector: to ensure that all
examinations of constants in the Cogit are of the actual objects.

Have StackToRegisterMappingCogit>>genPushReceiverBytecode
use ReceiverResultReg if it contains self.

Speed up isForwarded: test by using the fact that the
isForwarded class pun is a power of two.

Fix a slip in an assert in unlinkSolitaryFreeTreeNode:.

Comment out the checkFreeSpace check in copyToOldSpace:bytes:format:.

Make the assert check in Spur32BitCoMemoryManager>>
freeStart: more lenient to avoid bogus assert fails from
machine code allocations.

Simulator:
Bring the in-image facade for Spur up to date.

=============== Diff against VMMaker.oscog-eem.754 ===============

Item was added:
+ ----- Method: CogObjectRepresentation>>ensureNoForwardedLiteralsIn: (in category 'compilation') -----
+ ensureNoForwardedLiteralsIn: aMethodObj
+ 	"Ensure there are no forwarded literals in the argument.
+ 	 By default this is a noop.  Subclasses redefine as necessary."
+ 	<inline: true>!

Item was added:
+ ----- Method: CogObjectRepresentationForSpur>>ensureNoForwardedLiteralsIn: (in category 'initialization') -----
+ ensureNoForwardedLiteralsIn: aMethodObj
+ 	"Ensure there are no forwarded literals in the argument."
+ 	<inline: true>
+ 	objectMemory
+ 		followForwardedObjectFields: aMethodObj
+ 		toDepth: 0!

Item was changed:
  ----- Method: CogObjectRepresentationForSpur>>genEnsureObjInRegNotForwarded:scratchReg: (in category 'compile abstract instructions') -----
  genEnsureObjInRegNotForwarded: reg scratchReg: scratch
+ 	"Make sure that the object in reg is not forwarded.  This routine assumes the object will
+ 	 never be forwarded to an immediate, as it is used to unforward literal variables (associations).
+ 	 Use the fact that isForwardedObjectClassIndexPun is a power of two to save an instruction."
- 	"Make sure that the object in reg is not forwarded."
  	| loop ok |
  	<var: #ok type: #'AbstractInstruction *'>
  	<var: #loop type: #'AbstractInstruction *'>
  	self assert: reg ~= scratch.
  	loop := cogit Label.
+ 	"notionally
+ 		self genGetClassIndexOfNonImm: reg into: scratch.
+ 		cogit CmpCq: objectMemory isForwardedObjectClassIndexPun R: TempReg.
+ 	 but the following is an instruction shorter:"
+ 	cogit MoveMw: 0 r: reg R: scratch.
+ 	cogit
+ 		AndCq: objectMemory classIndexMask - objectMemory isForwardedObjectClassIndexPun
+ 		R: scratch.
- 	self genGetClassIndexOfNonImm: reg into: scratch.
- 	cogit CmpCq: objectMemory isForwardedObjectClassIndexPun
- 		R: TempReg.
  	ok := cogit JumpNonZero:  0.
  	self genLoadSlot: 0 sourceReg: reg destReg: reg.
  	cogit Jump: loop.
  	ok jmpTarget: cogit Label.
  	^0!

Item was added:
+ ----- Method: CogObjectRepresentationForSpur>>genEnsureOopInRegNotForwarded:scratchReg: (in category 'compile abstract instructions') -----
+ genEnsureOopInRegNotForwarded: reg scratchReg: scratch
+ 	"Make sure that the oop in reg is not forwarded."
+ 	| loop okImm okObj |
+ 	<var: #okImm type: #'AbstractInstruction *'>
+ 	<var: #okObj type: #'AbstractInstruction *'>
+ 	<var: #loop type: #'AbstractInstruction *'>
+ 	self assert: reg ~= scratch.
+ 	loop := cogit MoveR: reg R: scratch.
+ 	okImm := self genJumpImmediateInScratchReg: scratch.
+ 	"notionally
+ 		self genGetClassIndexOfNonImm: reg into: scratch.
+ 		cogit CmpCq: objectMemory isForwardedObjectClassIndexPun R: TempReg.
+ 	 but the following is an instruction shorter:"
+ 	cogit MoveMw: 0 r: reg R: scratch.
+ 	cogit
+ 		AndCq: objectMemory classIndexMask - objectMemory isForwardedObjectClassIndexPun
+ 		R: scratch.
+ 	okObj := cogit JumpNonZero:  0.
+ 	self genLoadSlot: 0 sourceReg: reg destReg: reg.
+ 	cogit Jump: loop.
+ 	okImm jmpTarget: (okObj jmpTarget: cogit Label).
+ 	^0!
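For readers less fluent in Cogit abstract instructions, the loop these two generators emit can be sketched in Python. This is a model only: the heap dictionary, the tag-bit test for immediates, and the 22-bit classIndex width are illustrative assumptions; the pun value 8 is isForwardedObjectClassIndexPun.

```python
CLASS_INDEX_MASK = (1 << 22) - 1   # assumption: 22-bit classIndex field
FORWARDED_PUN = 8                  # isForwardedObjectClassIndexPun

def ensure_not_forwarded(heap, oop):
    """Model of the generated loop: AND the header with
    classIndexMask - pun; while the result is zero (the JumpNonZero
    is not taken) reload the oop from slot 0 of the forwarder and
    retest.  Live objects never carry classIndex 0, so only genuine
    forwarders keep the loop going."""
    if oop & 1:                    # immediate (tagged) oops are never forwarded
        return oop
    while (heap[oop]['header'] & (CLASS_INDEX_MASK - FORWARDED_PUN)) == 0:
        oop = heap[oop]['slots'][0]
    return oop
```

The single AND-and-branch replaces the mask-then-compare pair the "notionally" comment shows, which is where the saved instruction comes from.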

Item was changed:
  ----- Method: Cogit>>cog:selector: (in category 'jit - api') -----
  cog: aMethodObj selector: aSelectorOop
  	"Attempt to produce a machine code method for the bytecode method
  	 object aMethodObj.  N.B. If there is no code memory available do *NOT*
  	 attempt to reclaim the method zone.  Certain clients (e.g. ceSICMiss:)
  	 depend on the zone remaining constant across method generation."
  	<api>
  	<returnTypeC: #'CogMethod *'>
  	| cogMethod |
  	<var: #cogMethod type: #'CogMethod *'>
  	self assert: ((coInterpreter methodHasCogMethod: aMethodObj) not
  				or: [(self noAssertMethodClassAssociationOf: aMethodObj) = objectMemory nilObject]).
  	"coInterpreter stringOf: aSelectorOop"
  	coInterpreter
  		compilationBreak: aSelectorOop
  		point: (objectMemory lengthOf: aSelectorOop).
  	aMethodObj = breakMethod ifTrue: [self halt: 'Compilation of breakMethod'].
  	self cppIf: NewspeakVM
  		ifTrue: [cogMethod := methodZone findPreviouslyCompiledVersionOf: aMethodObj with: aSelectorOop.
  				cogMethod ifNotNil:
  					[(coInterpreter methodHasCogMethod: aMethodObj) not ifTrue:
  						[self assert: (coInterpreter rawHeaderOf: aMethodObj) = cogMethod methodHeader.
  						 cogMethod methodObject: aMethodObj.
  						 coInterpreter rawHeaderOf: aMethodObj put: cogMethod asInteger].
  					^cogMethod]].
  	"If the generators for the alternate bytecode set are missing then interpret."
  	(coInterpreter methodUsesAlternateBytecodeSet: aMethodObj)
  		ifTrue:
  			[(self numElementsIn: generatorTable) <= 256 ifTrue:
  				[^nil].
  			 bytecodeSetOffset := 256]
  		ifFalse:
  			[bytecodeSetOffset := 0].
  	extA := extB := 0.
+ 	objectRepresentation ensureNoForwardedLiteralsIn: aMethodObj.
  	methodObj := aMethodObj.
  	cogMethod := self compileCogMethod: aSelectorOop.
  	(cogMethod asInteger between: MaxNegativeErrorCode and: -1) ifTrue:
  		[cogMethod asInteger = InsufficientCodeSpace ifTrue:
  			[coInterpreter callForCogCompiledCodeCompaction].
  		"Right now no errors should be reported, so nothing more to do."
  		"self reportError: (self cCoerceSimple: cogMethod to: #sqInt)."
  		 ^nil].
  	"self cCode: ''
  		inSmalltalk:
  			[coInterpreter printCogMethod: cogMethod.
  			 ""coInterpreter symbolicMethod: aMethodObj.""
  			 self assertValidMethodMap: cogMethod."
  			 "self disassembleMethod: cogMethod."
  			 "printInstructions := clickConfirm := true""]."
  	^cogMethod!

Item was added:
+ ----- Method: CurrentImageCoInterpreterFacadeForSpurObjectRepresentation>>classTableMajorIndexShift (in category 'accessing') -----
+ classTableMajorIndexShift
+ 	^objectMemory classTableMajorIndexShift!

Item was added:
+ ----- Method: CurrentImageCoInterpreterFacadeForSpurObjectRepresentation>>classTableMinorIndexMask (in category 'accessing') -----
+ classTableMinorIndexMask
+ 	^objectMemory classTableMinorIndexMask!

Item was added:
+ ----- Method: CurrentImageCoInterpreterFacadeForSpurObjectRepresentation>>followForwardedObjectFields:toDepth: (in category 'forwarding') -----
+ followForwardedObjectFields: methodObj toDepth: depth
+ 	"This is a noop in the facade"!

Item was added:
+ ----- Method: CurrentImageCoInterpreterFacadeForSpurObjectRepresentation>>isForwardedObjectClassIndexPun (in category 'accessing') -----
+ isForwardedObjectClassIndexPun
+ 	^objectMemory isForwardedObjectClassIndexPun!

Item was added:
+ ----- Method: CurrentImageCoInterpreterFacadeForSpurObjectRepresentation>>shiftForWord (in category 'accessing') -----
+ shiftForWord
+ 	^objectMemory shiftForWord!

Item was changed:
  ----- Method: SistaStackToRegisterMappingCogit>>genSpecialSelectorEqualsEquals (in category 'bytecode generators') -----
  genSpecialSelectorEqualsEquals
  	"Override to count inlined branches if followed by a conditional branch.
  	 We borrow the following conditional branch's counter and when about to
  	 inline the comparison we decrement the counter (without writing it back)
  	 and if it trips simply abort the inlining, falling back to the normal send which
  	 will then continue to the conditional branch which will trip and enter the abort."
  	| nextPC postBranchPC targetBytecodePC primDescriptor branchDescriptor nExts
+ 	  counter countTripped unforwardArg unforwardRcvr |
- 	  counter countTripped |
  	<var: #primDescriptor type: #'BytecodeDescriptor *'>
  	<var: #branchDescriptor type: #'BytecodeDescriptor *'>
  	<var: #counter type: #'AbstractInstruction *'>
  	<var: #countTripped type: #'AbstractInstruction *'>
  	self ssFlushTo: simStackPtr - 2.
  	primDescriptor := self generatorAt: byte0.
  
  	nextPC := bytecodePC + primDescriptor numBytes.
  	nExts := 0.
  	[branchDescriptor := self generatorAt: (objectMemory fetchByte: nextPC ofObject: methodObj) + (byte0 bitAnd: 256).
  	 branchDescriptor isExtension] whileTrue:
  		[nExts := nExts + 1.
  		 nextPC := nextPC + branchDescriptor numBytes].
  	"Only interested in inlining if followed by a conditional branch."
  	(branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:
  		[^self genSpecialSelectorSend].
  
  	targetBytecodePC := nextPC
  							+ branchDescriptor numBytes
  							+ (self spanFor: branchDescriptor at: nextPC exts: nExts in: methodObj).
  	postBranchPC := nextPC + branchDescriptor numBytes.
+ 	unforwardRcvr := (self ssValue: 1) type ~= SSConstant
+ 						or: [objectRepresentation shouldAnnotateObjectReference: (self ssValue: 1) constant].
+ 	unforwardArg := self ssTop type ~= SSConstant
+ 						or: [objectRepresentation shouldAnnotateObjectReference: self ssTop constant].
  	self marshallSendArguments: 1.
  
  	self ssAllocateRequiredReg: SendNumArgsReg. "Use this as the count reg."
  	counter := self addressOf: (counters at: counterIndex).
  	self flag: 'will need to use MoveAw32:R: if 64 bits'.
  	self assert: BytesPerWord = CounterBytes.
  	counter addDependent: (self annotateAbsolutePCRef:
  		(self MoveAw: counter asUnsignedInteger R: SendNumArgsReg)).
  	self SubCq: 16r10000 R: SendNumArgsReg. "Count executed"
  	"If counter trips simply abort the inlined comparison and send continuing to the following
  	 branch *without* writing back.  A double decrement will not trip the second time."
  	countTripped := self JumpCarry: 0.
  	counter addDependent: (self annotateAbsolutePCRef:
  		(self MoveR: SendNumArgsReg Aw: counter asUnsignedInteger)). "write back"
+ 	unforwardRcvr ifTrue:
+ 		[objectRepresentation genEnsureOopInRegNotForwarded: ReceiverResultReg scratchReg: TempReg].
+ 	unforwardArg ifTrue:
+ 		[objectRepresentation genEnsureOopInRegNotForwarded: Arg0Reg scratchReg: TempReg].
- 
  	self CmpR: Arg0Reg R: ReceiverResultReg.
  	"Cmp is weird/backwards so invert the comparison.  Further since there is a following conditional
  	 jump bytecode define non-merge fixups and leave the cond bytecode to set the mergeness."
  	self gen: (branchDescriptor isBranchTrue ifTrue: [JumpZero] ifFalse: [JumpNonZero])
  		operand: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.
  	self SubCq: 1 R: SendNumArgsReg. "Count untaken"
  	counter addDependent: (self annotateAbsolutePCRef:
  		(self MoveR: SendNumArgsReg Aw: counter asUnsignedInteger)). "write back"
  	self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC).
  	countTripped jmpTarget: self Label.
  	^self genMarshalledSend: (coInterpreter specialSelector: byte0 - self firstSpecialSelectorBytecodeOffset)
  		numArgs: 1!
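The borrow-and-conditionally-write-back counter logic described in the method comment can be sketched in Python. The 16r10000 decrement unit mirrors the SubCq: above (the executed count lives in the counter's high 16 bits); the rest is an illustrative model, not the generated code.

```python
EXECUTED_UNIT = 0x10000   # executed count occupies the high 16 bits

def decrement_executed(counter):
    """Decrement the executed count.  On underflow (the JumpCarry case)
    answer the ORIGINAL value and False: the inlining is aborted without
    writing back, so the following branch's own decrement of the same
    value trips again and enters the normal abort path."""
    decremented = counter - EXECUTED_UNIT
    if decremented < 0:            # carry set: counter tripped
        return counter, False      # abort; do not write back
    return decremented, True       # write back; safe to inline
```

Because the tripped value is never written back, the "double decrement" the comment warns about cannot silently succeed the second time.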

Item was changed:
  ----- Method: Spur32BitCoMemoryManager>>freeStart: (in category 'cog jit support') -----
  freeStart: aValue
+ 	self assert: (aValue >= scavenger eden start and: [aValue < (scavengeThreshold + 1024)]).
+ 	self assert: (scavengeThreshold max: aValue) + coInterpreter interpreterAllocationReserveBytes <= scavenger eden limit.
- 	self assert: (aValue >= scavenger eden start and: [aValue < scavengeThreshold]).
- 	self assert: scavengeThreshold + coInterpreter interpreterAllocationReserveBytes <= scavenger eden limit.
  	^freeStart := aValue!

Item was changed:
  ----- Method: SpurGenerationScavenger>>copyToOldSpace:bytes:format: (in category 'scavenger') -----
  copyToOldSpace: survivor bytes: bytesInObject format: formatOfSurvivor
  	"Copy survivor to oldSpace.  Answer the new oop of the object."
  	<inline: true>
  	| nTenures startOfSurvivor newStart newOop |
  	self assert: (formatOfSurvivor = (manager formatOf: survivor)
  				and: [((manager isMarked: survivor) not or: [tenureCriterion = MarkOnTenure])
  				and: [(manager isPinned: survivor) not
  				and: [(manager isRemembered: survivor) not]]]).
  	nTenures := statTenures.
  	startOfSurvivor := manager startOfObject: survivor.
  	newStart := manager allocateOldSpaceChunkOfBytes: bytesInObject.
  	newStart ifNil:
  		[manager growOldSpaceByAtLeast: 0. "grow by growHeadroom"
  		 newStart := manager allocateOldSpaceChunkOfBytes: bytesInObject.
  		 newStart ifNil:
  			[self error: 'out of memory']].
+ 	"manager checkFreeSpace."
- 	manager checkFreeSpace.
  	manager mem: newStart asVoidPointer cp: startOfSurvivor asVoidPointer y: bytesInObject.
  	newOop := newStart + (survivor - startOfSurvivor).
  	(manager isAnyPointerFormat: formatOfSurvivor) ifTrue:
  		[self remember: newOop.
  		 manager setIsRememberedOf: newOop to: true].
  	tenureCriterion = MarkOnTenure ifTrue:
  		[manager setIsMarkedOf: newOop to: true].
  	statTenures := nTenures + 1.
  	^newOop!

Item was changed:
  ----- Method: SpurMemoryManager>>followForwardedObjectFields:toDepth: (in category 'forwarding') -----
  followForwardedObjectFields: objOop toDepth: depth
  	"Follow pointers in the object to depth.
  	 Answer if any forwarders were found.
  	 How to avoid cyclic structures?? A temporary mark bit?"
+ 	<api>
  	| oop found |
  	found := false.
  	self assert: ((self isPointers: objOop) or: [self isOopCompiledMethod: objOop]).
  	0 to: (self numPointerSlotsOf: objOop) - 1 do:
  		[:i|
  		 oop := self fetchPointer: i ofObject: objOop.
  		 (self isNonImmediate: oop) ifTrue:
  			[(self isForwarded: oop) ifTrue:
  				[found := true.
  				 oop := self followForwarded: oop.
  				 self storePointer: i ofObject: objOop withValue: oop].
  			(depth > 0
  			 and: [(self hasPointerFields: oop)
  			 and: [self followForwardedObjectFields: oop toDepth: depth - 1]]) ifTrue:
  				[found := true]]].
  	^found!
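A heap-level Python sketch of this traversal may help; the object model is hypothetical, immediates are modelled as oops absent from the heap dictionary, and an empty slots list stands in for hasPointerFields: being false.

```python
FORWARDED = 'forwarded'   # stand-in for the forwarder class-index pun

def follow(heap, oop):
    """Follow a chain of forwarders; slot 0 of a forwarder holds the target."""
    while oop in heap and heap[oop]['class'] == FORWARDED:
        oop = heap[oop]['slots'][0]
    return oop

def follow_fields_to_depth(heap, obj, depth):
    """Unforward obj's pointer slots; answer whether any forwarder was found.
    depth 0 touches only obj's own slots; depth n recurses n levels further."""
    found = False
    slots = heap[obj]['slots']
    for i, oop in enumerate(slots):
        if oop in heap and heap[oop]['class'] == FORWARDED:
            found = True
            slots[i] = oop = follow(heap, oop)
        if depth > 0 and oop in heap and heap[oop]['slots']:
            if follow_fields_to_depth(heap, oop, depth - 1):
                found = True
    return found
```

cog:selector: uses depth 0, which is exactly the "fix only the method's own literals" case.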

Item was changed:
  ----- Method: SpurMemoryManager>>isForwarded: (in category 'object testing') -----
  isForwarded: objOop
+ 	"Answer if objOop is that of a forwarder.  Take advantage of isForwardedObjectClassIndexPun
+ 	 being a power of two to generate a more efficient test than the straightforward
+ 		(self classIndexOf: objOop) = self isForwardedObjectClassIndexPun
+ 	"
  	<api>
+ 	^(self longAt: objOop) noMask: self classIndexMask - self isForwardedObjectClassIndexPun!
- 	^(self classIndexOf: objOop) = self isForwardedObjectClassIndexPun!
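The claim that the masked test agrees with the straightforward classIndex comparison can be checked exhaustively in Python (the 22-bit classIndex width is an illustrative assumption). Because 8 is a power of two, classIndexMask - 8 clears exactly the pun bit, so the only extra value the fast test accepts is classIndex 0, which marks free chunks and never a live object.

```python
CLASS_INDEX_MASK = (1 << 22) - 1   # assumption: 22-bit classIndex field
PUN = 8                            # isForwardedObjectClassIndexPun

def fast_is_forwarded(class_index):
    # noMask: answers whether none of the masked bits are set:
    #   (header & (classIndexMask - pun)) == 0
    return (class_index & (CLASS_INDEX_MASK - PUN)) == 0

# Check the low 16 bits exhaustively; higher bits behave identically.
mismatches = [ci for ci in range(1 << 16)
              if fast_is_forwarded(ci) != (ci == PUN)]
assert mismatches == [0]   # only the never-live classIndex 0 differs
```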

Item was changed:
  ----- Method: SpurMemoryManager>>isForwardedObjectClassIndexPun (in category 'class table puns') -----
  isForwardedObjectClassIndexPun
+ 	"Answer the class index of a forwarder.  We choose 8 so as not to
+ 	 be confused with any immediate class (whose classIndex matches
+ 	 its instances' tag pattern), and because it is a power of two, which
+ 	 allows us to generate a slightly slimmer test for isForwarded:."
  	<api>
+ 	^8!
- 	^8 "Not to be confused with that of any immediate class"!

Item was changed:
  ----- Method: SpurMemoryManager>>unlinkSolitaryFreeTreeNode: (in category 'free space') -----
  unlinkSolitaryFreeTreeNode: freeTreeNode
  	"Unlink a freeTreeNode.  Assumes the node has no list (null next link)."
  	| parent smaller larger |
+ 	self assert: (self fetchPointer: self freeChunkNextIndex ofFreeChunk: freeTreeNode) = 0.
- 	self assert: (self fetchPointer: self freeChunkNextIndex ofObject: freeTreeNode) = 0.
  
  	"case 1. interior node has one child, P = parent, N = node, S = subtree (mirrored for large vs small)
  			___				  ___
  			| P |				  | P |
  		    _/_				_/_
  		    | N |		=>		| S |
  		 _/_
  		 | S |
  
  	 case 2: interior node has two children, , P = parent, N = node, L = smaller, left subtree, R = larger, right subtree.
  	 add the left subtree to the bottom left of the right subtree (mirrored for large vs small) 
  			___				  ___
  			| P |				  | P |
  		    _/_				_/_
  		    | N |		=>		| R |
  		 _/_  _\_		    _/_
  		 | L | | R |		    | L |"
  
  	smaller := self fetchPointer: self freeChunkSmallerIndex ofFreeChunk: freeTreeNode.
  	larger := self fetchPointer: self freeChunkLargerIndex ofFreeChunk: freeTreeNode.
  	parent := self fetchPointer: self freeChunkParentIndex ofFreeChunk: freeTreeNode.
  	parent = 0
  		ifTrue: "no parent; stitch the subnodes back into the root"
  			[smaller = 0
  				ifTrue:
  					[larger ~= 0 ifTrue:
  						[self storePointer: self freeChunkParentIndex ofFreeChunk: larger withValue: 0].
  					 freeLists at: 0 put: larger]
  				ifFalse:
  					[self storePointer: self freeChunkParentIndex ofFreeChunk: smaller withValue: 0.
  					 freeLists at: 0 put: smaller.
  					 larger ~= 0 ifTrue:
  						[self addFreeSubTree: larger]]]
  		ifFalse: "parent; stitch back into appropriate side of parent."
  			[smaller = 0
  				ifTrue: [self storePointer: (freeTreeNode = (self fetchPointer: self freeChunkSmallerIndex ofFreeChunk: parent)
  											ifTrue: [self freeChunkSmallerIndex]
  											ifFalse: [self freeChunkLargerIndex])
  							ofFreeChunk: parent
  							withValue: larger.
  						larger ~= 0 ifTrue:
  							[self storePointer: self freeChunkParentIndex
  								ofFreeChunk: larger
  								withValue: parent]]
  				ifFalse:
  					[self storePointer: (freeTreeNode = (self fetchPointer: self freeChunkSmallerIndex ofFreeChunk: parent)
  											ifTrue: [self freeChunkSmallerIndex]
  											ifFalse: [self freeChunkLargerIndex])
  						ofFreeChunk: parent
  						withValue: smaller.
  					 self storePointer: self freeChunkParentIndex
  						ofFreeChunk: smaller
  						withValue: parent.
  					 larger ~= 0 ifTrue:
  						[self addFreeSubTree: larger]]]!
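A Python sketch of the two diagrammed cases follows. The node class is hypothetical, and where the real method re-inserts the displaced subtree via addFreeSubTree:, the sketch follows the case-2 diagram literally and hangs the smaller subtree at the bottom left of the larger one.

```python
class FreeNode:
    """Free-tree node keyed by chunk size, with a parent back-pointer."""
    def __init__(self, size):
        self.size = size
        self.parent = self.smaller = self.larger = None

def unlink_solitary(root, node):
    """Unlink node (whose same-size list is empty); answer the new root.
    Case 1: a lone child replaces node.  Case 2: the larger subtree
    replaces node and the smaller subtree hangs at its bottom left."""
    small, large, parent = node.smaller, node.larger, node.parent
    if small is None or large is None:
        repl = small if large is None else large      # case 1 (or leaf)
    else:                                             # case 2
        repl = large
        leftmost = large
        while leftmost.smaller is not None:
            leftmost = leftmost.smaller
        leftmost.smaller = small                      # hang L bottom-left of R
        small.parent = leftmost
    if repl is not None:
        repl.parent = parent
    if parent is None:                                # node was the root
        return repl
    if parent.smaller is node:                        # stitch into parent
        parent.smaller = repl
    else:
        parent.larger = repl
    return root
```

The assert the commit fixes corresponds to the "solitary" precondition: the node's next link (its same-size list) must be empty before unlinking.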

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genPushReceiverBytecode (in category 'bytecode generators') -----
  genPushReceiverBytecode
+ 	(optStatus isReceiverResultRegLive
+ 	  and: [optStatus ssEntry = (self addressOf: simSelf)]) ifTrue:
+ 		[^self ssPushRegister: ReceiverResultReg].
  	^self ssPushDesc: simSelf!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genSpecialSelectorEqualsEquals (in category 'bytecode generators') -----
  genSpecialSelectorEqualsEquals
+ 	| nextPC postBranchPC targetBytecodePC primDescriptor branchDescriptor nExts
+ 	  unforwardArg unforwardRcvr jumpEqual jumpNotEqual rcvrReg argReg |
- 	| argReg rcvrReg nextPC postBranchPC targetBytecodePC nExts
- 	  primDescriptor branchDescriptor jumpEqual jumpNotEqual resultReg |
- 	<var: #primDescriptor type: #'BytecodeDescriptor *'>
- 	<var: #branchDescriptor type: #'BytecodeDescriptor *'>
  	<var: #jumpEqual type: #'AbstractInstruction *'>
  	<var: #jumpNotEqual type: #'AbstractInstruction *'>
+ 	<var: #primDescriptor type: #'BytecodeDescriptor *'>
+ 	<var: #branchDescriptor type: #'BytecodeDescriptor *'>
- 	self flag: 'rewrite this crap.'.
- 	self ssPop: 2.
- 	resultReg := self availableRegisterOrNil.
- 	resultReg ifNil:
- 		[(self numRegArgs > 1 and: [needsFrame not and: [methodOrBlockNumArgs = 2]]) ifTrue:
- 			[self halt].
- 		self ssAllocateRequiredReg: (resultReg := Arg1Reg)].
- 	self ssPush: 2.
- 	(self ssTop type = SSConstant
- 	 and: [self ssTop spilled not]) "if spilled we must generate a real pop"
- 		ifTrue:
- 			[(self ssValue: 1) type = SSRegister
- 				ifTrue: [rcvrReg := (self ssValue: 1) register]
- 				ifFalse:
- 					[(self ssValue: 1) popToReg: (rcvrReg := resultReg)].
- 			(objectRepresentation shouldAnnotateObjectReference: self ssTop constant)
- 				ifTrue: [self annotate: (self CmpCw: self ssTop constant R: rcvrReg)
- 							objRef: self ssTop constant]
- 				ifFalse: [self CmpCq: self ssTop constant R: rcvrReg].
- 			self ssPop: 1]
- 		ifFalse:
- 			[argReg := self ssStorePop: true toPreferredReg: TempReg.
- 			 rcvrReg := argReg = resultReg
- 							ifTrue: [TempReg]
- 							ifFalse: [resultReg].
- 			self ssTop popToReg: rcvrReg.
- 			self CmpR: argReg R: rcvrReg].
- 	self ssPop: 1; ssPushRegister: resultReg.
  	primDescriptor := self generatorAt: byte0.
+ 	"forwarders have been followed in cog:selector:"
+ 	(self ssTop type = SSConstant
+ 	 and: [(self ssValue: 1) type = SSConstant]) ifTrue:
+ 		[self assert: primDescriptor isMapped not.
+ 		 ^self ssPushConstant: (self ssTop constant = (self ssValue: 1) constant
+ 									ifTrue: [objectMemory trueObject]
+ 									ifFalse: [objectMemory falseObject])].
+ 
  	nextPC := bytecodePC + primDescriptor numBytes.
  	nExts := 0.
  	[branchDescriptor := self generatorAt: (objectMemory fetchByte: nextPC ofObject: methodObj) + (byte0 bitAnd: 256).
  	 branchDescriptor isExtension] whileTrue:
  		[nExts := nExts + 1.
  		 nextPC := nextPC + branchDescriptor numBytes].
+ 	"If branching the stack must be flushed for the merge"
+ 	(branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifTrue:
+ 		[self ssFlushTo: simStackPtr - 2].
+ 
+ 	unforwardRcvr := (self ssValue: 1) type ~= SSConstant
+ 						or: [objectRepresentation shouldAnnotateObjectReference: (self ssValue: 1) constant].
+ 	unforwardArg := self ssTop type ~= SSConstant
+ 						or: [objectRepresentation shouldAnnotateObjectReference: self ssTop constant].
+ 
+ 	"Don't use ReceiverResultReg for receiver to keep ReceiverResultReg live.
+ 	 Optimize e.g. rcvr == nil, the common case for ifNil: et al."
+ 	needsFrame
+ 		ifTrue: 
+ 			[unforwardArg ifTrue:
+ 				[self ssAllocateRequiredReg: (argReg := Arg0Reg) upThrough: simStackPtr - 1].
+ 			 self ssAllocateRequiredReg: (rcvrReg := Arg1Reg) upThrough: simStackPtr - 2]
+ 		ifFalse:
+ 			[unforwardArg ifTrue:
+ 				[argReg := self ssAllocatePreferredReg: ClassReg].
+ 			 rcvrReg := self ssAllocatePreferredReg: SendNumArgsReg].
+ 	unforwardArg
- 	(branchDescriptor isBranchTrue
- 	 or: [branchDescriptor isBranchFalse])
  		ifTrue:
+ 			[self ssTop popToReg: argReg.
+ 			 objectRepresentation genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg.
+ 			 (self ssValue: 1) popToReg: rcvrReg.
+ 			 unforwardRcvr ifTrue:
+ 				[objectRepresentation genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg].
+ 			 self CmpR: argReg R: rcvrReg]
- 			[self ssFlushTo: simStackPtr - 1.
- 			 targetBytecodePC := nextPC
- 								+ branchDescriptor numBytes
- 								+ (self spanFor: branchDescriptor at: nextPC exts: nExts in: methodObj).
- 			 postBranchPC := nextPC + branchDescriptor numBytes.
- 			 (self fixupAt: nextPC - initialPC) targetInstruction = 0 ifTrue: "The next instruction is dead.  we can skip it."
- 				[deadCode := true.
- 				 self ssPop: 1. "the conditional branch bytecodes pop the item tested from the stack."
- 				 self ensureFixupAt: targetBytecodePC - initialPC.
- 				 self ensureFixupAt: postBranchPC - initialPC].
- 			 self gen: (branchDescriptor isBranchTrue
- 						ifTrue: [JumpZero]
- 						ifFalse: [JumpNonZero])
- 				operand: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.
- 			 self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC)]
  		ifFalse:
+ 			[(self ssValue: 1) popToReg: rcvrReg.
+ 			 unforwardRcvr ifTrue:
+ 				[objectRepresentation genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg].
+ 			 self CmpCq: self ssTop constant R: rcvrReg].
+ 	self ssPop: 2.
+ 
+ 	"If not followed by a branch, resolve to true or false."
+ 	(branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:
+ 		[jumpNotEqual := self JumpNonZero: 0.
+ 		 self annotate: (self MoveCw: objectMemory trueObject R: rcvrReg)
+ 			objRef: objectMemory trueObject.
+ 		 jumpEqual := self Jump: 0.
+ 		 jumpNotEqual jmpTarget: (self annotate: (self MoveCw: objectMemory falseObject R: rcvrReg)
+ 										objRef: objectMemory falseObject).
+ 		 jumpEqual jmpTarget: self Label.
+ 		 self ssPushRegister: rcvrReg.
+ 		 ^0].
+ 
+ 	"Further since there is a following conditional jump bytecode, define
+ 	 non-merge fixups and leave the cond bytecode to set the mergeness."
+ 	targetBytecodePC := nextPC
+ 							+ branchDescriptor numBytes
+ 							+ (self spanFor: branchDescriptor at: nextPC exts: nExts in: methodObj).
+ 	postBranchPC := nextPC + branchDescriptor numBytes.
+ 	(self fixupAt: nextPC - initialPC) targetInstruction = 0
+ 		ifTrue: "The next instruction is dead.  We can skip it."
+ 			[deadCode := true.
+ 		 	 self ensureFixupAt: targetBytecodePC - initialPC.
+ 			 self ensureFixupAt: postBranchPC - initialPC]
+ 		ifFalse:
+ 			[self ssPushConstant: objectMemory trueObject]. "dummy value"
+ 	self gen: (branchDescriptor isBranchTrue ifTrue: [JumpZero] ifFalse: [JumpNonZero])
+ 		operand: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.
+ 	self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC).
- 			[jumpNotEqual := self JumpNonZero: 0.
- 			 self annotate: (self MoveCw: objectMemory trueObject R: resultReg)
- 				objRef: objectMemory trueObject.
- 			 jumpEqual := self Jump: 0.
- 			 jumpNotEqual jmpTarget: (self annotate: (self MoveCw: objectMemory falseObject R: resultReg)
- 											objRef: objectMemory falseObject).
- 			 jumpEqual jmpTarget: self Label].
- 	resultReg == ReceiverResultReg ifTrue:
- 		[optStatus isReceiverResultRegLive: false].
  	^0!


