[Vm-dev] VM Maker: VMMaker.oscog-eem.208.mcz

commits at source.squeak.org
Fri Nov 2 17:55:46 UTC 2012


Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.208.mcz

==================== Summary ====================

Name: VMMaker.oscog-eem.208
Author: eem
Time: 2 November 2012, 10:53:23.296 am
UUID: 97c8f02b-309a-4c65-80eb-644a640ca60d
Ancestors: VMMaker.oscog-lw.207

Cogit: Call a spade a spade.  Rename bytecodePointer to bytecodePC.

=============== Diff against VMMaker.oscog-lw.207 ===============

Item was changed:
  CogClass subclass: #Cogit
+ 	instanceVariableNames: 'coInterpreter objectMemory objectRepresentation processor threadManager methodZone methodZoneBase codeBase minValidCallAddress lastNInstructions simulatedAddresses simulatedTrampolines simulatedVariableGetters simulatedVariableSetters printRegisters printInstructions compilationTrace clickConfirm breakPC breakBlock singleStep guardPageSize traceLinkedSends traceStores breakMethod methodObj initialPC endPC methodOrBlockNumArgs inBlock needsFrame hasYoungReferent primitiveIndex backEnd postCompileHook primInvokeLabel methodLabel stackCheckLabel blockEntryLabel blockEntryNoContextSwitch blockNoContextSwitchOffset stackOverflowCall sendMissCall missOffset entryPointMask checkedEntryAlignment cmEntryOffset entry cmNoCheckEntryOffset noCheckEntry dynSuperEntry dynSuperEntryAlignment cmDynSuperEntryOffset mnuCall interpretCall endCPICCase0 endCPICCase1 numPICCases firstCPICCaseOffset cPICCaseSize cPICEndSize closedPICSize openPICSize fixups abstractOpcodes annotations generatorTable primitiveGeneratorTable byte0 byte1 byte2 byte3 bytecodePC opcodeIndex numAbstractOpcodes annotationIndex blockStarts blockCount labelCounter cStackAlignment expectedSPAlignment expectedFPAlignment codeModified maxMethodBefore ceMethodAbortTrampoline cePICAbortTrampoline ceCheckForInterruptTrampoline ceCPICMissTrampoline ceStoreCheckTrampoline ceReturnToInterpreterTrampoline ceBaseFrameReturnTrampoline ceSendMustBeBooleanAddTrueTrampoline ceSendMustBeBooleanAddFalseTrampoline ceCannotResumeTrampoline ceClosureCopyTrampoline ceCreateNewArrayTrampoline ceEnterCogCodePopReceiverReg ceEnterCogCodePopReceiverAndClassRegs cePrimReturnEnterCogCode cePrimReturnEnterCogCodeProfiling ceNonLocalReturnTrampoline ceActiveContextTrampoline ceFetchContextInstVarTrampoline ceStoreContextInstVarTrampoline cePositive32BitIntegerTrampoline ceImplicitReceiverTrampoline ceExplicitReceiverTrampoline ceCaptureCStackPointers ceFlushICache ceCheckFeaturesFunction ceTraceLinkedSendTrampoline ceTraceBlockActivationTrampoline ceTraceStoreTrampoline ceGetSP sendTrampolines superSendTrampolines dynamicSuperSendTrampolines firstSend lastSend realCEEnterCogCodePopReceiverReg realCEEnterCogCodePopReceiverAndClassRegs trampolineTableIndex trampolineAddresses objectReferencesInRuntime runtimeObjectRefIndex cFramePointerInUse debugPrimCallStackOffset ceTryLockVMOwner ceUnlockVMOwner cogMethodSurrogateClass cogBlockMethodSurrogateClass'
- 	instanceVariableNames: 'coInterpreter objectMemory objectRepresentation processor threadManager methodZone methodZoneBase codeBase minValidCallAddress lastNInstructions simulatedAddresses simulatedTrampolines simulatedVariableGetters simulatedVariableSetters printRegisters printInstructions compilationTrace clickConfirm breakPC breakBlock singleStep guardPageSize traceLinkedSends traceStores breakMethod methodObj initialPC endPC methodOrBlockNumArgs inBlock needsFrame hasYoungReferent primitiveIndex backEnd postCompileHook primInvokeLabel methodLabel stackCheckLabel blockEntryLabel blockEntryNoContextSwitch blockNoContextSwitchOffset stackOverflowCall sendMissCall missOffset entryPointMask checkedEntryAlignment cmEntryOffset entry cmNoCheckEntryOffset noCheckEntry dynSuperEntry dynSuperEntryAlignment cmDynSuperEntryOffset mnuCall interpretCall endCPICCase0 endCPICCase1 numPICCases firstCPICCaseOffset cPICCaseSize cPICEndSize closedPICSize openPICSize fixups abstractOpcodes annotations generatorTable primitiveGeneratorTable byte0 byte1 byte2 byte3 bytecodePointer opcodeIndex numAbstractOpcodes annotationIndex blockStarts blockCount labelCounter cStackAlignment expectedSPAlignment expectedFPAlignment codeModified maxMethodBefore ceMethodAbortTrampoline cePICAbortTrampoline ceCheckForInterruptTrampoline ceCPICMissTrampoline ceStoreCheckTrampoline ceReturnToInterpreterTrampoline ceBaseFrameReturnTrampoline ceSendMustBeBooleanAddTrueTrampoline ceSendMustBeBooleanAddFalseTrampoline ceCannotResumeTrampoline ceClosureCopyTrampoline ceCreateNewArrayTrampoline ceEnterCogCodePopReceiverReg ceEnterCogCodePopReceiverAndClassRegs cePrimReturnEnterCogCode cePrimReturnEnterCogCodeProfiling ceNonLocalReturnTrampoline ceActiveContextTrampoline ceFetchContextInstVarTrampoline ceStoreContextInstVarTrampoline cePositive32BitIntegerTrampoline ceImplicitReceiverTrampoline ceExplicitReceiverTrampoline ceCaptureCStackPointers ceFlushICache ceCheckFeaturesFunction ceTraceLinkedSendTrampoline ceTraceBlockActivationTrampoline ceTraceStoreTrampoline ceGetSP sendTrampolines superSendTrampolines dynamicSuperSendTrampolines firstSend lastSend realCEEnterCogCodePopReceiverReg realCEEnterCogCodePopReceiverAndClassRegs trampolineTableIndex trampolineAddresses objectReferencesInRuntime runtimeObjectRefIndex cFramePointerInUse debugPrimCallStackOffset ceTryLockVMOwner ceUnlockVMOwner cogMethodSurrogateClass cogBlockMethodSurrogateClass'
  	classVariableNames: 'AnnotationConstantNames AnnotationShift BlockCreationBytecodeSize Debug DisplacementMask DisplacementX2N EagerInstructionDecoration EncounteredUnknownBytecode FirstAnnotation HasBytecodePC InsufficientCodeSpace IsAbsPCReference IsDisplacement IsDisplacementX2N IsNSSendCall IsObjectReference IsRelativeCall IsSendCall MapEnd MaxCompiledPrimitiveIndex MaxNegativeErrorCode MaxUnitDisplacement MaxUnreportableError MaxX2NDisplacement NotFullyInitialized NumObjRefsInRuntime NumSendTrampolines NumTrampolines ProcessorClass YoungSelectorInPIC'
  	poolDictionaries: 'CogMethodConstants CogRTLOpcodes VMBasicConstants VMObjectIndices VMStackFrameOffsets'
  	category: 'VMMaker-JIT'!
  Cogit class
  	instanceVariableNames: 'generatorTable primitiveTable'!
  
  !Cogit commentStamp: '<historical>' prior: 0!
  I am the code generator for the Cog VM.  My job is to produce machine code versions of methods for faster execution and to manage inline caches for faster send performance.
  
  coInterpreter <CoInterpreterSimulator>
  	the VM's interpreter with which I cooperate
  methodZoneManager <CogMethodZoneManager>
  	the manager of the machine code zone
  objectRepresentation <CogObjectRepresentation>
  	the object used to generate object accesses
  processor <BochsIA32Alien|?>
  	the simulator that executes the IA32/x86 machine code I generate when simulating execution in Smalltalk
  simulatedTrampolines <Dictionary of Integer -> MessageSend>
  	the dictionary mapping trap jump addresses to run-time routines used to warp from simulated machine code into the Smalltalk run-time.
  simulatedVariableGetters <Dictionary of Integer -> MessageSend>
  	the dictionary mapping trap read addresses to variables in run-time objects used to allow simulated machine code to read variables in the Smalltalk run-time.
  simulatedVariableSetters <Dictionary of Integer -> MessageSend>
  	the dictionary mapping trap write addresses to variables in run-time objects used to allow simulated machine code to write variables in the Smalltalk run-time.
  printRegisters printInstructions clickConfirm <Boolean>
  	flags controlling debug printing and code simulation
  breakPC <Integer>
  	machine code pc breakpoint
  cFramePointer cStackPointer <Integer>
  	the variables representing the C stack & frame pointers, which must change on FFI callback and return
  selectorOop <sqInt>
  	the oop of the methodObj being compiled
  methodObj <sqInt>
  	the bytecode method being compiled
  initialPC endPC <Integer>
  	the start and end pcs of the methodObj being compiled
  methodOrBlockNumArgs <Integer>
  	argument count of current method or block being compiled
  needsFrame <Boolean>
  	whether methodObj or block needs a frame to execute
  primitiveIndex <Integer>
  	primitive index of current method being compiled
  methodLabel <CogAbstractOpcode>
  	label for the method header
  blockEntryLabel <CogAbstractOpcode>
  	label for the start of the block dispatch code
  stackOverflowCall <CogAbstractOpcode>
  	label for the call of ceStackOverflow in the method prolog
  sendMissCall <CogAbstractOpcode>
  	label for the call of ceSICMiss in the method prolog
  entryOffset <Integer>
  	offset of method entry code from start (header) of method
  entry <CogAbstractOpcode>
  	label for the first instruction of the method entry code
  noCheckEntryOffset <Integer>
  	offset of the start of a method proper (after the method entry code) from start (header) of method
  noCheckEntry <CogAbstractOpcode>
  	label for the first instruction of start of a method proper
  fixups <Array of <AbstractOpcode Label | nil>>
  	the labels for forward jumps that will be fixed up when reaching the relevant bytecode.  fixups has one element per byte in methodObj's bytecode
  abstractOpcodes <Array of <AbstractOpcode>>
  	the code generated when compiling methodObj
  byte0 byte1 byte2 byte3 <Integer>
  	individual bytes of current bytecode being compiled in methodObj
  bytecodePointer <Integer>
  	bytecode pc (same as Smalltalk) of the current bytecode being compiled
  opcodeIndex <Integer>
  	the index of the next free entry in abstractOpcodes (this code is translated into C where OrderedCollection et al do not exist)
  numAbstractOpcodes <Integer>
  	the number of elements in abstractOpcodes
  blockStarts <Array of <BlockStart>>
  	the starts of blocks in the current method
  blockCount
  	the index into blockStarts as they are being noted, and hence eventually the total number of blocks in the current method
  labelCounter <Integer>
  	a nicety for numbering labels, not needed in the production system but probably not expensive enough to worry about
  ceStackOverflowTrampoline <Integer>
  ceSend0ArgsTrampoline <Integer>
  ceSend1ArgsTrampoline <Integer>
  ceSend2ArgsTrampoline <Integer>
  ceSendNArgsTrampoline <Integer>
  ceSendSuper0ArgsTrampoline <Integer>
  ceSendSuper1ArgsTrampoline <Integer>
  ceSendSuper2ArgsTrampoline <Integer>
  ceSendSuperNArgsTrampoline <Integer>
  ceSICMissTrampoline <Integer>
  ceCPICMissTrampoline <Integer>
  ceStoreCheckTrampoline <Integer>
  ceReturnToInterpreterTrampoline <Integer>
  ceBaseFrameReturnTrampoline <Integer>
  ceSendMustBeBooleanTrampoline <Integer>
  ceClosureCopyTrampoline <Integer>
  	the various trampolines (system-call-like jumps from machine code to the run-time).
  	See Cogit>>generateTrampolines for the mapping from trampoline to run-time
  	routine and then read the run-time routine for a functional description.
  ceEnterCogCodePopReceiverReg <Integer>
  	the enilopmart (jump from run-time to machine-code)
  methodZoneBase <Integer>
  !
  Cogit class
  	instanceVariableNames: 'generatorTable primitiveTable'!
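
As an aside, the fixups array described in the class comment above is indexed relative to initialPC, so a bytecode pc is converted to a fixup index with bytecodePC - initialPC throughout the methods below.  A minimal sketch of that convention follows; the selector fixupForPC: is hypothetical and not part of this commit, while fixupAt:, initialPC and endPC are used exactly as in the diff:

	fixupForPC: bcpc
		"Hypothetical convenience method: answer the BytecodeFixup for the bytecode at bcpc.
		 fixups has one slot per byte of methodObj's bytecode, indexed from initialPC,
		 hence the bcpc - initialPC conversion used in the compile loop."
		<returnTypeC: #'BytecodeFixup *'>
		self assert: (bcpc between: initialPC and: endPC).
		^self fixupAt: bcpc - initialPC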

Item was changed:
  ----- Method: Cogit>>Label (in category 'abstract instructions') -----
  Label
  	<inline: true>
  	<returnTypeC: #'AbstractInstruction *'>
+ 	^self gen: Label operand: (labelCounter := labelCounter + 1) operand: bytecodePC!
- 	^self gen: Label operand: (labelCounter := labelCounter + 1) operand: bytecodePointer!

Item was changed:
  ----- Method: Cogit>>addBlockStartAt:numArgs:numCopied:span: (in category 'compile abstract instructions') -----
+ addBlockStartAt: bcpc numArgs: numArgs numCopied: numCopied span: span
- addBlockStartAt: bytecodepc numArgs: numArgs numCopied: numCopied span: span
  	"Add a blockStart for an embedded block.  For a binary tree walk block dispatch
  	 blocks must be compiled in pc/depth-first order but are scanned in breadth-first
  	 order, so do an insertion sort (which of course is really a bubble sort because we
  	 have to move everything higher to make room)."
  	<returnTypeC: #'BlockStart *'>
  	| i blockStart |
  	<var: #blockStart type: #'BlockStart *'>
  	blockCount > 0
  		ifTrue:
  			[i := blockCount - 1.
  			 [blockStart := self addressOf: (blockStarts at: i).
+ 			   blockStart startpc > bcpc
- 			   blockStart startpc > bytecodepc
  			   and: [i > 0]] whileTrue:
  				[i := i - 1].
  			 blockCount to: i + 1 by: -1 do:
  				[:j|
  				blockStarts at: j put: (blockStarts at: j - 1)].
  			blockStart := self cCode: [self addressOf: (blockStarts at: i + 1)]
  								inSmalltalk: [blockStarts at: i + 1 put: CogBlockStart new]]
  		ifFalse:
  			[blockStart := self cCode: [self addressOf: (blockStarts at: blockCount)]
  								inSmalltalk: [blockStarts at: blockCount put: CogBlockStart new]].
  	blockCount := blockCount + 1.
  	blockStart
+ 		startpc: bcpc;
- 		startpc: bytecodepc;
  		numArgs: numArgs;
  		numCopied: numCopied;
  		stackCheckLabel: nil;
  		span: span.
  	^blockStart!

Item was changed:
  ----- Method: Cogit>>compileAbstractInstructionsFrom:through: (in category 'compile abstract instructions') -----
  compileAbstractInstructionsFrom: start through: end
  	"Loop over bytecodes, dispatching to the generator for each bytecode, handling fixups in due course."
  	| nextOpcodeIndex descriptor fixup result |
  	<var: #descriptor type: #'BytecodeDescriptor *'>
  	<var: #fixup type: #'BytecodeFixup *'>
+ 	bytecodePC := start.
+ 	[byte0 := objectMemory fetchByte: bytecodePC ofObject: methodObj.
- 	bytecodePointer := start.
- 	[byte0 := objectMemory fetchByte: bytecodePointer ofObject: methodObj.
  	 descriptor := self generatorAt: byte0.
  	 descriptor numBytes > 1 ifTrue:
+ 		[byte1 := objectMemory fetchByte: bytecodePC + 1 ofObject: methodObj.
- 		[byte1 := objectMemory fetchByte: bytecodePointer + 1 ofObject: methodObj.
  		 descriptor numBytes > 2 ifTrue:
+ 			[byte2 := objectMemory fetchByte: bytecodePC + 2 ofObject: methodObj.
- 			[byte2 := objectMemory fetchByte: bytecodePointer + 2 ofObject: methodObj.
  			 descriptor numBytes > 3 ifTrue:
+ 				[byte3 := objectMemory fetchByte: bytecodePC + 3 ofObject: methodObj.
- 				[byte3 := objectMemory fetchByte: bytecodePointer + 3 ofObject: methodObj.
  				 descriptor numBytes > 4 ifTrue:
  					[self notYetImplemented]]]].
  	 nextOpcodeIndex := opcodeIndex.
  	 result := self perform: descriptor generator.
+ 	 fixup := self fixupAt: bytecodePC - initialPC.
- 	 fixup := self fixupAt: bytecodePointer - initialPC.
  	 fixup targetInstruction ~= 0 ifTrue:
  		["There is a fixup for this bytecode.  It must point to the first generated
  		   instruction for this bytecode.  If there isn't one we need to add a label."
  		 opcodeIndex = nextOpcodeIndex ifTrue:
  			[self Label].
  		 fixup targetInstruction: (self abstractInstructionAt: nextOpcodeIndex)].
+ 	 bytecodePC := self nextBytecodePCFor: descriptor at: bytecodePC byte0: byte0 in: methodObj.
+ 	 result = 0 and: [bytecodePC <= end]] whileTrue.
- 	 bytecodePointer := self nextBytecodePCFor: descriptor at: bytecodePointer byte0: byte0 in: methodObj.
- 	 result = 0 and: [bytecodePointer <= end]] whileTrue.
  	self checkEnoughOpcodes.
  	^result!

Item was changed:
  ----- Method: Cogit>>gen: (in category 'compile abstract instructions') -----
  gen: opcode "<Integer>"
  	| abstractInstruction |
  	<inline: false>
  	<returnTypeC: #'AbstractInstruction *'>
  	<var: #abstractInstruction type: #'AbstractInstruction *'>
  	self assert: opcodeIndex < numAbstractOpcodes.
  	abstractInstruction := self abstractInstructionAt: opcodeIndex.
  	opcodeIndex := opcodeIndex + 1.
  	abstractInstruction opcode: opcode.
+ 	self cCode: '' inSmalltalk: [abstractInstruction bcpc: bytecodePC].
- 	self cCode: '' inSmalltalk: [abstractInstruction bcpc: bytecodePointer].
  	^abstractInstruction!

Item was changed:
  ----- Method: Cogit>>gen:operand: (in category 'compile abstract instructions') -----
  gen: opcode "<Integer>" operand: operand "<Integer|CogAbstractInstruction>"
  	| abstractInstruction |
  	<inline: false>
  	<returnTypeC: #'AbstractInstruction *'>
  	<var: #abstractInstruction type: #'AbstractInstruction *'>
  	self assert: opcodeIndex < numAbstractOpcodes.
  	abstractInstruction := self abstractInstructionAt: opcodeIndex.
  	opcodeIndex := opcodeIndex + 1.
  	abstractInstruction opcode: opcode.
  	abstractInstruction operands at: 0 put: operand.
+ 	self cCode: '' inSmalltalk: [abstractInstruction bcpc: bytecodePC].
- 	self cCode: '' inSmalltalk: [abstractInstruction bcpc: bytecodePointer].
  	^abstractInstruction!

Item was changed:
  ----- Method: Cogit>>gen:operand:operand: (in category 'compile abstract instructions') -----
  gen: opcode "<Integer>" operand: operandOne "<Integer|CogAbstractInstruction>"  operand: operandTwo "<Integer|CogAbstractInstruction>"
  	| abstractInstruction |
  	<inline: false>
  	<returnTypeC: #'AbstractInstruction *'>
  	<var: #abstractInstruction type: #'AbstractInstruction *'>
  	self assert: opcodeIndex < numAbstractOpcodes.
  	abstractInstruction := self abstractInstructionAt: opcodeIndex.
  	opcodeIndex := opcodeIndex + 1.
  	abstractInstruction opcode: opcode.
  	abstractInstruction operands at: 0 put: operandOne.
  	abstractInstruction operands at: 1 put: operandTwo.
+ 	self cCode: '' inSmalltalk: [abstractInstruction bcpc: bytecodePC].
- 	self cCode: '' inSmalltalk: [abstractInstruction bcpc: bytecodePointer].
  	^abstractInstruction!

Item was changed:
  ----- Method: Cogit>>gen:operand:operand:operand: (in category 'compile abstract instructions') -----
  gen: opcode "<Integer>" operand: operandOne "<Integer|CogAbstractInstruction>"  operand: operandTwo "<Integer|CogAbstractInstruction>" operand: operandThree "<Integer|CogAbstractInstruction>"
  	| abstractInstruction |
  	<inline: false>
  	<returnTypeC: #'AbstractInstruction *'>
  	<var: #abstractInstruction type: #'AbstractInstruction *'>
  	self assert: opcodeIndex < numAbstractOpcodes.
  	abstractInstruction := self abstractInstructionAt: opcodeIndex.
  	opcodeIndex := opcodeIndex + 1.
  	abstractInstruction opcode: opcode.
  	abstractInstruction operands at: 0 put: operandOne.
  	abstractInstruction operands at: 1 put: operandTwo.
  	abstractInstruction operands at: 2 put: operandThree.
+ 	self cCode: '' inSmalltalk: [abstractInstruction bcpc: bytecodePC].
- 	self cCode: '' inSmalltalk: [abstractInstruction bcpc: bytecodePointer].
  	^abstractInstruction!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>genLongJumpIfFalse (in category 'bytecode generators') -----
  genLongJumpIfFalse
  	| distance target |
  	distance := self longForwardBranchDist: byte0 ance: byte1.
+ 	target := distance + 2 + bytecodePC.
- 	target := distance + 2 + bytecodePointer.
  	^self genJumpIf: objectMemory falseObject to: target!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>genLongJumpIfTrue (in category 'bytecode generators') -----
  genLongJumpIfTrue
  	| distance target |
  	distance := self longForwardBranchDist: byte0 ance: byte1.
+ 	target := distance + 2 + bytecodePC.
- 	target := distance + 2 + bytecodePointer.
  	^self genJumpIf: objectMemory trueObject to: target!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>genLongUnconditionalBackwardJump (in category 'bytecode generators') -----
  genLongUnconditionalBackwardJump
  	| distance targetpc |
  	distance := self longBranchDist: byte0 ance: byte1.
  	self assert: distance < 0.
+ 	targetpc := distance + 2 + bytecodePC.
- 	targetpc := distance + 2 + bytecodePointer.
  	^self genJumpBackTo: targetpc!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>genLongUnconditionalForwardJump (in category 'bytecode generators') -----
  genLongUnconditionalForwardJump
  	| distance targetpc |
  	distance := self longBranchDist: byte0 ance: byte1.
  	self assert: distance >= 0.
+ 	targetpc := distance + 2 + bytecodePC.
- 	targetpc := distance + 2 + bytecodePointer.
  	^self genJumpTo: targetpc!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>genPushClosureCopyCopiedValuesBytecode (in category 'bytecode generators') -----
  genPushClosureCopyCopiedValuesBytecode
  	"Block compilation.  At this point in the method create the block.  Note its start
  	 and defer generating code for it until after the method and any other preceding
  	 blocks.  The block's actual code will be compiled later."
  	| numCopied |
  	self assert: needsFrame.
+ 	self addBlockStartAt: bytecodePC + 4
- 	self addBlockStartAt: bytecodePointer + 4
  		numArgs: (byte1 bitAnd: 16rF)
  		numCopied: (numCopied := byte1 >> 4)
  		span: (byte2 << 8) + byte3.
+ 	self MoveCq: (byte1 bitOr: bytecodePC + 5 << 8) R: SendNumArgsReg.
- 	self MoveCq: (byte1 bitOr: bytecodePointer + 5 << 8) R: SendNumArgsReg.
  	self CallRT: ceClosureCopyTrampoline.
  	numCopied > 0
  		ifTrue:
  			[numCopied > 1 ifTrue:
  				[self AddCq: (numCopied - 1) * BytesPerWord R: SPReg].
  			 self MoveR: ReceiverResultReg Mw: 0 r: SPReg]
  		ifFalse:
  			[self PushR: ReceiverResultReg].
  	^0!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>genShortJumpIfFalse (in category 'bytecode generators') -----
  genShortJumpIfFalse
  	^self
  		genJumpIf: objectMemory falseObject
+ 		to: (self shortForwardBranchDistance: byte0) + 1 + bytecodePC!
- 		to: (self shortForwardBranchDistance: byte0) + 1 + bytecodePointer!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>genShortUnconditionalJump (in category 'bytecode generators') -----
  genShortUnconditionalJump
  	| target |
+ 	target := (self shortForwardBranchDistance: byte0) + 1 + bytecodePC.
- 	target := (self shortForwardBranchDistance: byte0) + 1 + bytecodePointer.
  	^self genJumpTo: target!

Item was changed:
  ----- Method: SistaStackToRegisterMappingCogit>>genSpecialSelectorComparison (in category 'bytecode generators') -----
  genSpecialSelectorComparison
  	"Override to count inlined branches if followed by a conditional branch.
  	 We borrow the following conditional branch's counter and, when about to
  	 inline the comparison, decrement the counter (without writing it back);
  	 if it trips we simply abort the inlining, falling back to the normal send, which
  	 will then continue to the conditional branch, which will trip and enter the abort."
  	| branchPC postBranchPC targetBytecodePC branchBytecode primDescriptor branchDescriptor
  	  rcvrIsInt argIsInt rcvrInt argInt result jumpNotSmallInts inlineCAB annotateInst counter countTripped |
  	<var: #primDescriptor type: #'BytecodeDescriptor *'>
  	<var: #branchDescriptor type: #'BytecodeDescriptor *'>
  	<var: #jumpNotSmallInts type: #'AbstractInstruction *'>
  	<var: #counter type: #'AbstractInstruction *'>
  	<var: #countTripped type: #'AbstractInstruction *'>
  	self ssFlushTo: simStackPtr - 2.
  	primDescriptor := self generatorAt: byte0.
  	argIsInt := self ssTop type = SSConstant
  				 and: [objectMemory isIntegerObject: (argInt := self ssTop constant)].
  	rcvrIsInt := (self ssValue: 1) type = SSConstant
  				 and: [objectMemory isIntegerObject: (rcvrInt := (self ssValue: 1) constant)].
  
  	(argIsInt and: [rcvrIsInt]) ifTrue:
  		[self cCode: '' inSmalltalk: "In Simulator ints are unsigned..."
  				[rcvrInt := objectMemory integerValueOf: rcvrInt.
  				argInt := objectMemory integerValueOf: argInt].
  		 primDescriptor opcode caseOf: {
  			[JumpLess]				-> [result := rcvrInt < argInt].
  			[JumpLessOrEqual]		-> [result := rcvrInt <= argInt].
  			[JumpGreater]			-> [result := rcvrInt > argInt].
  			[JumpGreaterOrEqual]	-> [result := rcvrInt >= argInt].
  			[JumpZero]				-> [result := rcvrInt = argInt].
  			[JumpNonZero]			-> [result := rcvrInt ~= argInt] }.
  		 "Must enter any annotatedConstants into the map"
  		 (self ssValue: 1) annotateUse ifTrue:
  			[self annotateBytecode: (self prevInstIsPCAnnotated
  											ifTrue: [self Nop]
  											ifFalse: [self Label])].
  		 self ssTop annotateUse ifTrue:
  			[self annotateBytecode: (self prevInstIsPCAnnotated
  											ifTrue: [self Nop]
  											ifFalse: [self Label])].
  		 "Must annotate the bytecode for correct pc mapping."
  		 self ssPop: 2.
  		 ^self ssPushAnnotatedConstant: (result
  											ifTrue: [objectMemory trueObject]
  											ifFalse: [objectMemory falseObject])].
  
+ 	branchPC := bytecodePC + primDescriptor numBytes.
- 	branchPC := bytecodePointer + primDescriptor numBytes.
  	branchBytecode := objectMemory fetchByte: branchPC ofObject: methodObj.
  	branchDescriptor := self generatorAt: branchBytecode.
  	"Only interested in inlining if followed by a conditional branch."
  	inlineCAB := branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse].
  	"Further, only interested in inlining = and ~= if there's a SmallInteger constant involved.
  	 The relational operators successfully statically predict SmallIntegers; the equality operators do not."
  	(inlineCAB and: [primDescriptor opcode = JumpZero or: [primDescriptor opcode = JumpNonZero]]) ifTrue:
  		[inlineCAB := argIsInt or: [rcvrIsInt]].
  	inlineCAB ifFalse:
  		[^self genSpecialSelectorSend].
  
  	targetBytecodePC := branchPC
  							+ branchDescriptor numBytes
  							+ (self spanFor: branchDescriptor at: branchPC byte0: branchBytecode in: methodObj).
  	postBranchPC := branchPC + branchDescriptor numBytes.
  	argIsInt
  		ifTrue:
  			[(self ssValue: 1) popToReg: ReceiverResultReg.
  			 annotateInst := self ssTop annotateUse.
  			 self ssPop: 2.
  			 self MoveR: ReceiverResultReg R: TempReg]
  		ifFalse:
  			[self marshallSendArguments: 1.
  			 self MoveR: Arg0Reg R: TempReg.
  			 rcvrIsInt ifFalse:
  				[objectRepresentation isSmallIntegerTagNonZero
  					ifTrue: [self AndR: ReceiverResultReg R: TempReg]
  					ifFalse: [self OrR: ReceiverResultReg R: TempReg]]].
  	jumpNotSmallInts := objectRepresentation genJumpNotSmallIntegerInScratchReg: TempReg.
  
  	self ssAllocateRequiredReg: SendNumArgsReg. "Use this as the count reg."
  	counter := self addressOf: (counters at: counterIndex).
  	self flag: 'will need to use MoveAw32:R: if 64 bits'.
  	self assert: BytesPerWord = CounterBytes.
  	counter addDependent: (self annotateAbsolutePCRef:
  		(self MoveAw: counter asUnsignedInteger R: SendNumArgsReg)).
  	self SubCq: 16r10000 R: SendNumArgsReg. "Count executed"
  	"If the counter trips, simply abort the inlined comparison and send, continuing to the following
  	 branch *without* writing back.  A double decrement will not trip the second time."
  	countTripped := self JumpCarry: 0.
  	counter addDependent: (self annotateAbsolutePCRef:
  		(self MoveR: SendNumArgsReg Aw: counter asUnsignedInteger)). "write back"
  
  	argIsInt
  		ifTrue: [annotateInst
  					ifTrue: [self annotateBytecode: (self CmpCq: argInt R: ReceiverResultReg)]
  					ifFalse: [self CmpCq: argInt R: ReceiverResultReg]]
  		ifFalse: [self CmpR: Arg0Reg R: ReceiverResultReg].
  	"Cmp is weird/backwards so invert the comparison.  Further since there is a following conditional
  	 jump bytecode define non-merge fixups and leave the cond bytecode to set the mergeness."
  	self gen: (branchDescriptor isBranchTrue
  				ifTrue: [primDescriptor opcode]
  				ifFalse: [self inverseBranchFor: primDescriptor opcode])
  		operand: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.
  	self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC).
  	countTripped jmpTarget: (jumpNotSmallInts jmpTarget: self Label).
  	argIsInt ifTrue:
  		[self MoveCq: argInt R: Arg0Reg].
  	^self genMarshalledSend: (coInterpreter specialSelector: byte0 - 176) numArgs: 1!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>compileAbstractInstructionsFrom:through: (in category 'compile abstract instructions') -----
  compileAbstractInstructionsFrom: start through: end
  	"Loop over bytecodes, dispatching to the generator for each bytecode, handling fixups in due course."
  	| nextOpcodeIndex descriptor fixup result |
  	<var: #descriptor type: #'BytecodeDescriptor *'>
  	<var: #fixup type: #'BytecodeFixup *'>
  	self traceSimStack.
+ 	bytecodePC := start.
- 	bytecodePointer := start.
  	descriptor := nil.
  	deadCode := false.
  	[self cCode: '' inSmalltalk:
+ 		[(debugBytecodePointers includes: bytecodePC) ifTrue: [self halt]].
+ 	fixup := self fixupAt: bytecodePC - initialPC.
- 		[(debugBytecodePointers includes: bytecodePointer) ifTrue: [self halt]].
- 	fixup := self fixupAt: bytecodePointer - initialPC.
  	fixup targetInstruction asUnsignedInteger > 0
  		ifTrue:
  			[deadCode := false.
  	 		 fixup targetInstruction asUnsignedInteger >= 2 ifTrue:
  				[self merge: fixup
  					afterContinuation: (descriptor notNil
  										and: [descriptor isUnconditionalBranch
  											or: [descriptor isReturn]]) not]]
  		ifFalse: "If there's no fixup following a return there's no jump to that code and it is dead."
  			[(descriptor notNil and: [descriptor isReturn]) ifTrue:
  				[deadCode := true]].
  	 self cCode: '' inSmalltalk:
  		[deadCode ifFalse:
  			[self assert: simStackPtr + (needsFrame ifTrue: [0] ifFalse: [1])
+ 						= (self debugStackPointerFor: bytecodePC)]].
+ 	 byte0 := objectMemory fetchByte: bytecodePC ofObject: methodObj.
- 						= (self debugStackPointerFor: bytecodePointer)]].
- 	 byte0 := objectMemory fetchByte: bytecodePointer ofObject: methodObj.
  	 descriptor := self generatorAt: byte0.
  	 descriptor numBytes > 1 ifTrue:
+ 		[byte1 := objectMemory fetchByte: bytecodePC + 1 ofObject: methodObj.
- 		[byte1 := objectMemory fetchByte: bytecodePointer + 1 ofObject: methodObj.
  		 descriptor numBytes > 2 ifTrue:
+ 			[byte2 := objectMemory fetchByte: bytecodePC + 2 ofObject: methodObj.
- 			[byte2 := objectMemory fetchByte: bytecodePointer + 2 ofObject: methodObj.
  			 descriptor numBytes > 3 ifTrue:
+ 				[byte3 := objectMemory fetchByte: bytecodePC + 3 ofObject: methodObj.
- 				[byte3 := objectMemory fetchByte: bytecodePointer + 3 ofObject: methodObj.
  				 descriptor numBytes > 4 ifTrue:
  					[self notYetImplemented]]]].
  	 nextOpcodeIndex := opcodeIndex.
  	 result := deadCode
  				ifTrue: "insert nops for dead code that is mapped so that bc to mc mapping is not many to one"
  					[(descriptor isMapped
  					  or: [inBlock and: [descriptor isMappedInBlock]]) ifTrue:
  						[self annotateBytecode: self Nop].
  						0]
  				ifFalse:
  					[self perform: descriptor generator].
  	 self traceDescriptor: descriptor; traceSimStack.
  	 (fixup targetInstruction asUnsignedInteger between: 1 and: 2) ifTrue:
  		["There is a fixup for this bytecode.  It must point to the first generated
  		   instruction for this bytecode.  If there isn't one we need to add a label."
  		 opcodeIndex = nextOpcodeIndex ifTrue:
  			[self Label].
  		 fixup targetInstruction: (self abstractInstructionAt: nextOpcodeIndex)].
+ 	 bytecodePC := self nextBytecodePCFor: descriptor at: bytecodePC byte0: byte0 in: methodObj.
+ 	 result = 0 and: [bytecodePC <= end]] whileTrue.
- 	 bytecodePointer := self nextBytecodePCFor: descriptor at: bytecodePointer byte0: byte0 in: methodObj.
- 	 result = 0 and: [bytecodePointer <= end]] whileTrue.
  	self checkEnoughOpcodes.
  	^result!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genPushClosureCopyCopiedValuesBytecode (in category 'bytecode generators') -----
  genPushClosureCopyCopiedValuesBytecode
  	"Block compilation.  At this point in the method create the block.  Note its start
  	 and defer generating code for it until after the method and any other preceding
  	 blocks.  The block's actual code will be compiled later."
  	| numCopied |
  	self assert: needsFrame.
+ 	self addBlockStartAt: bytecodePC + 4
- 	self addBlockStartAt: bytecodePointer + 4
  		numArgs: (byte1 bitAnd: 16rF)
  		numCopied: (numCopied := byte1 >> 4)
  		span: (byte2 << 8) + byte3.
  	numCopied > 0 ifTrue:
  		[self ssFlushTo: simStackPtr].
  	optStatus isReceiverResultRegLive: false.
  	self ssAllocateCallReg: SendNumArgsReg and: ReceiverResultReg.
+ 	self MoveCq: (byte1 bitOr: bytecodePC + 5 << 8) R: SendNumArgsReg.
- 	self MoveCq: (byte1 bitOr: bytecodePointer + 5 << 8) R: SendNumArgsReg.
  	self CallRT: ceClosureCopyTrampoline.
  	numCopied > 0 ifTrue:
  		[self AddCq: numCopied * BytesPerWord R: SPReg.
  		 self ssPop: numCopied].
  	^self ssPushRegister: ReceiverResultReg!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genSpecialSelectorComparison (in category 'bytecode generators') -----
  genSpecialSelectorComparison
  	| branchPC postBranchPC targetBytecodePC branchBytecode primDescriptor branchDescriptor
  	  rcvrIsInt argIsInt rcvrInt argInt result jumpNotSmallInts inlineCAB annotateInst |
  	<var: #primDescriptor type: #'BytecodeDescriptor *'>
  	<var: #branchDescriptor type: #'BytecodeDescriptor *'>
  	<var: #jumpNotSmallInts type: #'AbstractInstruction *'>
  	self ssFlushTo: simStackPtr - 2.
  	primDescriptor := self generatorAt: byte0.
  	argIsInt := self ssTop type = SSConstant
  				 and: [objectMemory isIntegerObject: (argInt := self ssTop constant)].
  	rcvrIsInt := (self ssValue: 1) type = SSConstant
  				 and: [objectMemory isIntegerObject: (rcvrInt := (self ssValue: 1) constant)].
  
  	(argIsInt and: [rcvrIsInt]) ifTrue:
  		[self cCode: '' inSmalltalk: "In Simulator ints are unsigned..."
  				[rcvrInt := objectMemory integerValueOf: rcvrInt.
  				argInt := objectMemory integerValueOf: argInt].
  		 primDescriptor opcode caseOf: {
  			[JumpLess]				-> [result := rcvrInt < argInt].
  			[JumpLessOrEqual]		-> [result := rcvrInt <= argInt].
  			[JumpGreater]			-> [result := rcvrInt > argInt].
  			[JumpGreaterOrEqual]	-> [result := rcvrInt >= argInt].
  			[JumpZero]				-> [result := rcvrInt = argInt].
  			[JumpNonZero]			-> [result := rcvrInt ~= argInt] }.
  		 "Must enter any annotatedConstants into the map"
  		 (self ssValue: 1) annotateUse ifTrue:
  			[self annotateBytecode: (self prevInstIsPCAnnotated
  											ifTrue: [self Nop]
  											ifFalse: [self Label])].
  		 self ssTop annotateUse ifTrue:
  			[self annotateBytecode: (self prevInstIsPCAnnotated
  											ifTrue: [self Nop]
  											ifFalse: [self Label])].
  		 "Must annotate the bytecode for correct pc mapping."
  		 self ssPop: 2.
  		 ^self ssPushAnnotatedConstant: (result
  											ifTrue: [objectMemory trueObject]
  											ifFalse: [objectMemory falseObject])].
  
+ 	branchPC := bytecodePC + primDescriptor numBytes.
- 	branchPC := bytecodePointer + primDescriptor numBytes.
  	branchBytecode := objectMemory fetchByte: branchPC ofObject: methodObj.
  	branchDescriptor := self generatorAt: branchBytecode.
  	"Only interested in inlining if followed by a conditional branch."
  	inlineCAB := branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse].
  	"Further, only interested in inlining = and ~= if there's a SmallInteger constant involved.
  	 The relational operators successfully statically predict SmallIntegers; the equality operators do not."
  	(inlineCAB and: [primDescriptor opcode = JumpZero or: [primDescriptor opcode = JumpNonZero]]) ifTrue:
  		[inlineCAB := argIsInt or: [rcvrIsInt]].
  	inlineCAB ifFalse:
  		[^self genSpecialSelectorSend].
  
  	targetBytecodePC := branchPC
  							+ branchDescriptor numBytes
  							+ (self spanFor: branchDescriptor at: branchPC byte0: branchBytecode in: methodObj).
  	postBranchPC := branchPC + branchDescriptor numBytes.
  	argIsInt
  		ifTrue:
  			[(self ssValue: 1) popToReg: ReceiverResultReg.
  			 annotateInst := self ssTop annotateUse.
  			 self ssPop: 2.
  			 self MoveR: ReceiverResultReg R: TempReg]
  		ifFalse:
  			[self marshallSendArguments: 1.
  			 self MoveR: Arg0Reg R: TempReg.
  			 rcvrIsInt ifFalse:
  				[objectRepresentation isSmallIntegerTagNonZero
  					ifTrue: [self AndR: ReceiverResultReg R: TempReg]
  					ifFalse: [self OrR: ReceiverResultReg R: TempReg]]].
  	jumpNotSmallInts := objectRepresentation genJumpNotSmallIntegerInScratchReg: TempReg.
  	argIsInt
  		ifTrue: [annotateInst
  					ifTrue: [self annotateBytecode: (self CmpCq: argInt R: ReceiverResultReg)]
  					ifFalse: [self CmpCq: argInt R: ReceiverResultReg]]
  		ifFalse: [self CmpR: Arg0Reg R: ReceiverResultReg].
  	"Cmp is weird/backwards so invert the comparison.  Further since there is a following conditional
  	 jump bytecode define non-merge fixups and leave the cond bytecode to set the mergeness."
  	self gen: (branchDescriptor isBranchTrue
  				ifTrue: [primDescriptor opcode]
  				ifFalse: [self inverseBranchFor: primDescriptor opcode])
  		operand: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.
  	self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC).
  	jumpNotSmallInts jmpTarget: self Label.
  	argIsInt ifTrue:
  		[self MoveCq: argInt R: Arg0Reg].
  	^self genMarshalledSend: (coInterpreter specialSelector: byte0 - 176) numArgs: 1!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genSpecialSelectorEqualsEquals (in category 'bytecode generators') -----
  genSpecialSelectorEqualsEquals
  	| argReg rcvrReg nextPC postBranchPC targetBytecodePC branchBytecode primDescriptor branchDescriptor jumpEqual jumpNotEqual resultReg |
  	<var: #primDescriptor type: #'BytecodeDescriptor *'>
  	<var: #branchDescriptor type: #'BytecodeDescriptor *'>
  	<var: #jumpEqual type: #'AbstractInstruction *'>
  	<var: #jumpNotEqual type: #'AbstractInstruction *'>
  	self flag: 'rewrite this crap.'.
  	self ssPop: 2.
  	resultReg := self availableRegisterOrNil.
  	resultReg ifNil:
  		[(self numRegArgs > 1 and: [needsFrame not and: [methodOrBlockNumArgs = 2]]) ifTrue:
  			[self halt].
  		self ssAllocateRequiredReg: (resultReg := Arg1Reg)].
  	self ssPush: 2.
  	(self ssTop type = SSConstant
  	 and: [self ssTop spilled not]) "if spilled we must generate a real pop"
  		ifTrue:
  			[(self ssValue: 1) type = SSRegister
  				ifTrue: [rcvrReg := (self ssValue: 1) register]
  				ifFalse:
  					[(self ssValue: 1) popToReg: (rcvrReg := resultReg)].
  			(objectRepresentation shouldAnnotateObjectReference: self ssTop constant)
  				ifTrue: [self annotate: (self CmpCw: self ssTop constant R: rcvrReg)
  							objRef: self ssTop constant]
  				ifFalse: [self CmpCq: self ssTop constant R: rcvrReg].
  			self ssPop: 1]
  		ifFalse:
  			[argReg := self ssStorePop: true toPreferredReg: TempReg.
  			 rcvrReg := argReg = resultReg
  							ifTrue: [TempReg]
  							ifFalse: [resultReg].
  			self ssTop popToReg: rcvrReg.
  			self CmpR: argReg R: rcvrReg].
  	self ssPop: 1; ssPushRegister: resultReg.
  	primDescriptor := self generatorAt: byte0.
+ 	nextPC := bytecodePC + primDescriptor numBytes.
- 	nextPC := bytecodePointer + primDescriptor numBytes.
  	branchBytecode := objectMemory fetchByte: nextPC ofObject: methodObj.
  	branchDescriptor := self generatorAt: branchBytecode.
  	(branchDescriptor isBranchTrue
  	 or: [branchDescriptor isBranchFalse])
  		ifTrue:
  			[self ssFlushTo: simStackPtr - 1.
  			 targetBytecodePC := nextPC
  								+ branchDescriptor numBytes
  								+ (self spanFor: branchDescriptor at: nextPC byte0: branchBytecode in: methodObj).
  			 postBranchPC := nextPC + branchDescriptor numBytes.
  			 (self fixupAt: nextPC - initialPC) targetInstruction = 0 ifTrue: "The next instruction is dead.  We can skip it."
  				[deadCode := true.
  				 self ssPop: 1. "the conditional branch bytecodes pop the item tested from the stack."
  				 self ensureFixupAt: targetBytecodePC - initialPC.
  				 self ensureFixupAt: postBranchPC - initialPC].
  			 self gen: (branchDescriptor isBranchTrue
  						ifTrue: [JumpZero]
  						ifFalse: [JumpNonZero])
  				operand: (self ensureNonMergeFixupAt: targetBytecodePC - initialPC) asUnsignedInteger.
  			 self Jump: (self ensureNonMergeFixupAt: postBranchPC - initialPC)]
  		ifFalse:
  			[jumpNotEqual := self JumpNonZero: 0.
  			 self annotate: (self MoveCw: objectMemory trueObject R: resultReg)
  				objRef: objectMemory trueObject.
  			 jumpEqual := self Jump: 0.
  			 jumpNotEqual jmpTarget: (self annotate: (self MoveCw: objectMemory falseObject R: resultReg)
  											objRef: objectMemory falseObject).
  			 jumpEqual jmpTarget: self Label].
  	resultReg == ReceiverResultReg ifTrue:
  		[optStatus isReceiverResultRegLive: false].
  	^0!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>merge:afterContinuation: (in category 'simulation stack') -----
  merge: fixup afterContinuation: mergeWithContinuation
  	"Merge control flow at a fixup.  The fixup holds the simStackPtr at the jump to this target.
  	 See stackToRegisterMapping on the class side for a full description."
  	<var: #fixup type: #'BytecodeFixup *'>
  	self traceMerge: fixup.
  	"For now we don't try and preserve the optimization status through merges."
  	optStatus isReceiverResultRegLive: false.
  	"If this instruction follows a return or an unconditional branch then the
  	 current simStackPtr is irrelevant and we continue with that of the fixup."
  	mergeWithContinuation ifFalse:
  		[self assert: fixup targetInstruction asUnsignedInteger >= 2.  "Must have a valid simStackPtr"
  		 simStackPtr := fixup simStackPtr].
  	fixup targetInstruction asUnsignedInteger <= 2 ifTrue:
  		["This is either a forward or backward branch target.
  		  The stack must be flushed."
  		 self ssFlushTo: simStackPtr.
  		 fixup simStackPtr <= -2 ifTrue:
  			"This is the target of a backward branch.  It doesn't have a simStackPtr yet."
  			[fixup simStackPtr: simStackPtr].
  		 fixup targetInstruction: self Label].
  	self assert: simStackPtr >= fixup simStackPtr.
  	self cCode: '' inSmalltalk:
+ 		[self assert: fixup simStackPtr = (self debugStackPointerFor: bytecodePC)].
- 		[self assert: fixup simStackPtr = (self debugStackPointerFor: bytecodePointer)].
  	simStackPtr := fixup simStackPtr.
  	simSpillBase := methodOrBlockNumTemps.
  	"For now throw away all type information for values on the stack, but sometime consider
  	 the more sophisticated merge described in the class side stackToRegisterMapping."
  	methodOrBlockNumTemps to: simStackPtr do:
  		[:i|
  		(self simStackAt: i)
  			mergeAt: FoxMFReceiver - (i - methodOrBlockNumArgs + 1 * BytesPerOop)
  			from: FPReg]!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>ssPushAnnotatedConstant: (in category 'simulation stack') -----
  ssPushAnnotatedConstant: literal
  	self ssPush: 1.
  	simSpillBase > simStackPtr ifTrue:
  		[simSpillBase := simStackPtr max: 0].
  	self ssTop
  		type: SSConstant;
  		annotateUse: true;
  		spilled: false;
  		constant: literal;
+ 		bcptr: bytecodePC.
- 		bcptr: bytecodePointer.
  	^0!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>ssPushBase:offset: (in category 'simulation stack') -----
  ssPushBase: reg offset: offset
  	self ssPush: 1.
  	simSpillBase > simStackPtr ifTrue:
  		[simSpillBase := simStackPtr max: 0].
  	self ssTop
  		type: SSBaseOffset;
  		spilled: false;
  		annotateUse: false;
  		register: reg;
  		offset: offset;
+ 		bcptr: bytecodePC.
- 		bcptr: bytecodePointer.
  	^0!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>ssPushConstant: (in category 'simulation stack') -----
  ssPushConstant: literal
  	self ssPush: 1.
  	simSpillBase > simStackPtr ifTrue:
  		[simSpillBase := simStackPtr max: 0].
  	self ssTop
  		type: SSConstant;
  		spilled: false;
  		annotateUse: false;
  		constant: literal;
+ 		bcptr: bytecodePC.
- 		bcptr: bytecodePointer.
  	^0!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>ssPushDesc: (in category 'simulation stack') -----
  ssPushDesc: simStackEntry
  	<var: #simStackEntry type: #CogSimStackEntry>
  	self cCode:
  			[simStackEntry type = SSSpill ifTrue:
  				[simStackEntry type: SSBaseOffset].
  			simStackEntry
  				spilled: false;
  				annotateUse: false;
+ 				bcptr: bytecodePC.
- 				bcptr: bytecodePointer.
  			 simStack
  				at: (simStackPtr := simStackPtr + 1)
  				put: simStackEntry]
  		inSmalltalk:
  			[(simStack at: (simStackPtr := simStackPtr + 1))
  				copyFrom: simStackEntry;
  				type: (simStackEntry type = SSSpill
  						ifTrue: [SSBaseOffset]
  						ifFalse: [simStackEntry type]);
  				spilled: false;
  				annotateUse: false;
+ 				bcptr: bytecodePC].
- 				bcptr: bytecodePointer].
  	simSpillBase > simStackPtr ifTrue:
  		[simSpillBase := simStackPtr max: 0].
  	^0!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>ssPushRegister: (in category 'simulation stack') -----
  ssPushRegister: reg
  	self ssPush: 1.
  	simSpillBase > simStackPtr ifTrue:
  		[simSpillBase := simStackPtr max: 0].
  	self ssTop
  		type: SSRegister;
  		spilled: false;
  		annotateUse: false;
  		register: reg;
+ 		bcptr: bytecodePC.
- 		bcptr: bytecodePointer.
  	^0!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>traceDescriptor: (in category 'simulation only') -----
  traceDescriptor: descriptor
  	<cmacro: '(ign) 0'>
  	(compilationTrace anyMask: 2) ifTrue:
+ 		[coInterpreter transcript cr; print: bytecodePC; space; nextPutAll: descriptor generator; flush]!
- 		[coInterpreter transcript cr; print: bytecodePointer; space; nextPutAll: descriptor generator; flush]!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>traceFixup: (in category 'simulation only') -----
  traceFixup: fixup
  	<cmacro: '(ign) 0'>
  	| index |
  	(compilationTrace anyMask: 8) ifTrue:
  		[index := (fixups object identityIndexOf: fixup) - 1.
  		 coInterpreter transcript
  			ensureCr;
+ 			print: bytecodePC; nextPutAll: ' -> '; print: index; nextPut: $/; print: index + initialPC;
- 			print: bytecodePointer; nextPutAll: ' -> '; print: index; nextPut: $/; print: index + initialPC;
  			nextPut: $:; space.
  			fixup printStateOn: coInterpreter transcript.
  			coInterpreter transcript cr; flush]!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>traceSpill: (in category 'simulation only') -----
  traceSpill: simStackEntry
  	<cmacro: '(ign) 0'>
  	(compilationTrace anyMask: 2) ifTrue:
+ 		[coInterpreter transcript cr; print: bytecodePC; space; print: simStackEntry; flush]!
- 		[coInterpreter transcript cr; print: bytecodePointer; space; print: simStackEntry; flush]!


