[Vm-dev] VM Maker: VMMaker.oscog-eem.3089.mcz

commits at source.squeak.org
Fri Oct 15 00:29:45 UTC 2021


Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.3089.mcz

==================== Summary ====================

Name: VMMaker.oscog-eem.3089
Author: eem
Time: 14 October 2021, 5:29:32.790185 pm
UUID: 977cad5a-ad86-483c-81e5-2fcd9ef9bd33
Ancestors: VMMaker.oscog-eem.3088

Spur Cogit:
Fix a code generation bug in the primitive failure path of primitives run on the Smalltalk stack (FastCPrimitive) that have no accessor depth.  On CISCs the old code offset the stack pointer incorrectly, resulting in a hard crash soon after.  Also allow compileInterpreterPrimitive to invoke compileOnStackExternalPrimitive:flags:.
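
For reference, a sketch of the corrected failure sequence for the no-link-register (CISC) case, taken from compileOnStackExternalPrimitive:flags: below; it reloads stackPointer via a scratch register and steps back one word so the return pc stays on the stack:
			self MoveAw: coInterpreter stackPointerAddress R: TempReg.
			self SubCq: objectRepresentation wordSize R: TempReg.
			self MoveR: TempReg R: SPReg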

Run primitiveSpurStringReplace and primitiveSpurFloatArrayAt[Put] on the Smalltalk stack, the latter two with 2x stack alignment.
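
The 2x alignment simply masks the native stack pointer down to a double-word boundary before the call; the relevant line from compileOnStackExternalPrimitive:flags: below:
			(flags anyMask: PrimCallOnSmalltalkStackAlign2x)
				ifTrue: [self AndCq: (objectMemory wordSize * 2 - 1) bitInvert R: SPReg R: NativeSPReg]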

Fix a bad bug in computing the accessor depths of clichés such as primitiveStringReplace:
			primitiveFoo
				objectMemory hasSpurMemoryManagerAPI
					ifTrue: [self primitiveFooSpur]
					ifFalse: [self primitiveFooV3]
The old code missed the connection and computed an accessor depth of -1.
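
The fix has accessorDepthForMethod: spot the cliché and recurse on the Spur selector; in essence (the real code also compiles the target TMethod if it has not been generated yet, see the diff below):
			method extractSpurPrimitiveSelector ifNotNil:
				[:actualSelector|
				^self accessorDepthForMethod: (self methodNamed: actualSelector)]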

Fix a slip in printOopShortInner:.

Nuke the useless minValidCallAddress (and its accessor minCallAddress).

Slang: format the macrology for global interpreter variables, and the primitiveAccessorDepthTable, a little more nicely.

=============== Diff against VMMaker.oscog-eem.3088 ===============

Item was changed:
  ----- Method: CCodeGenerator>>accessorDepthForMethod: (in category 'spur primitive compilation') -----
  accessorDepthForMethod: method
  	"Compute the depth the method traverses object structure, assuming it is a primitive.
  	 This is in support of Spur's lazy become.  A primitive may fail because it may encounter
  	 a forwarder.  The primitive failure code needs to know to what depth it must follow
  	  arguments to follow forwarders and, if any are found and followed, retry the primitive.
  	 This method determines that depth. It starts by collecting references to the stack and
  	 then follows these through assignments to variables and use of accessor methods
  	 such as fetchPointer:ofObject:. For example
  		| obj field  |
  		obj := self stackTop.
  		field := objectMemory fetchPointer: 1 ofObject: obj.
  		self storePointer: 1 ofObject: field withValue: (self stackValue: 1)
  	has depth 2, since field is accessed, and field is an element of obj.
  
  	The information is cached since it needs to be computed *before* inlining"
  	^accessorDepthCache
  		at: method smalltalkSelector
  		ifAbsentPut:
  			[beganInlining
  				ifTrue:
  					[(method export
  					 or: [vmClass notNil or: [vmClass primitiveTable includes: method smalltalkSelector]])
  						ifTrue: [-1]
  						ifFalse: [self error: 'it is too late to compute accessor depths!!']]
  				ifFalse:
  					 [((method definingClass includesSelector: method smalltalkSelector) ifTrue:
  							[(method definingClass >> method smalltalkSelector) pragmaAt: #accessorDepth:])
  						ifNil:
+ 							["Deal with the
+ 									primitiveFoo
+ 										objectMemory hasSpurMemoryManagerAPI
+ 											ifTrue: [self primitiveFooSpur]
+ 											ifFalse: [self primitiveFooV3]
+ 							  cliché"
+ 							method extractSpurPrimitiveSelector ifNotNil:
+ 								[:actualSelector| | subMethod |
+ 								(subMethod := self methodNamed: actualSelector) ifNil:
+ 									[subMethod := self compileToTMethodSelector: actualSelector in: method definingClass].
+ 								^self accessorDepthForMethod: subMethod].
+ 							 ((self
- 							[((self
  									accessorChainsForMethod: method
  									interpreterClass: (vmClass ifNil: [StackInterpreter]))
  								inject: 0
  								into: [:length :chain| length max: (self accessorDepthForChain: chain)]) - 1]
  						ifNotNil: [:pragma| pragma arguments first]]]!

Item was changed:
  ----- Method: CCodeGenerator>>arrayInitializerCalled:for:sizeString:type: (in category 'utilities') -----
  arrayInitializerCalled: varName for: array sizeString: sizeStringOrNil type: cType
  	"array is a literal array or a CArray on some array."
  	^String streamContents:
+ 		[:s| | sequence lastLine index newLine atNewline |
- 		[:s| | sequence lastLine index newLine allIntegers |
  		sequence := array isCollection ifTrue: [array] ifFalse: [array object].
- 		"this is to align -ve and +ve integers nicely in the primitiveAccessorDepthTable"
- 		allIntegers := sequence allSatisfy: [:element| element isInteger].
  		lastLine := index := 0.
  		newLine := [sequence size >= 20
  						ifTrue: [s cr; nextPutAll: '/*'; print: index; nextPutAll: '*/'; tab]
  						ifFalse: [s crtab: 2].
+ 					 lastLine := s position.
+ 					 atNewline := true].
- 					 lastLine := s position].
  		s	nextPutAll: cType;
  			space;
  			nextPutAll: varName;
  			nextPut: $[.
  		sizeStringOrNil ifNotNil: [s nextPutAll: sizeStringOrNil].
  		s nextPutAll: '] = '.
  		sequence isString
  			ifTrue: [s nextPutAll: (self cLiteralFor: sequence)]
  			ifFalse:
+ 				[| mixedSigns |
+ 				"this is to align -ve and +ve integers nicely in the primitiveAccessorDepthTable"
+ 				mixedSigns := (sequence allSatisfy: [:element| element isInteger])
+ 								and: [(sequence anySatisfy: [:element| element < 0])
+ 								and: [sequence anySatisfy: [:element| element > 0]]].
+ 				 s nextPut: ${.
- 				[s nextPut: ${.
  				 newLine value.
  				 sequence
  					do: [:element|
+ 						(mixedSigns and: [atNewline and: [element >= 0]]) ifTrue:
+ 							[s space].
+ 						(mixedSigns
- 						(allIntegers
  						 and: [element < 0
  						 and: [s peekLast = Character space]]) ifTrue:
  							[s skip: -1].
  						s nextPutAll: (self cLiteralFor: element). index := index + 1]
  					separatedBy:
+ 						[atNewline := false.
+ 						 s nextPut: $,.
- 						[s nextPut: $,.
  						 ((s position - lastLine) >= 76
  						 or: [(index \\ 20) = 0])
  							ifTrue: [newLine value]
  							ifFalse: [s space]].
  				 s crtab; nextPut: $}]]!

Item was changed:
  ----- Method: CCodeGeneratorGlobalStructure>>emitCVariablesOn: (in category 'C code generator') -----
  emitCVariablesOn: aStream
  	"Store the global variable declarations on the given stream.
  	 Break logic into vars for structure and vars for non-structure."
  	| structure nonstruct |
  
  	structure := WriteStream on: (String new: 32768).
  	nonstruct := WriteStream on: (String new: 32768).
  	aStream nextPutAll: '/*** Variables ***/'; cr.
  	structure
  		nextPutAll: '#if SQ_USE_GLOBAL_STRUCT'; cr;
  		nextPutAll: '# define _iss /* define in-struct static as void */'; cr;
  		nextPutAll: 'static struct foo {'; cr;
  		nextPutAll: '#else'; cr;
  		nextPutAll: '# define _iss static'; cr;
  		nextPutAll: '#endif'; cr.
  	(self segregateByGroupingSizeAndVisibility: self buildSortedVariablesCollection) do:
  		[ :var | | varString inStruct target |
  		target := (inStruct := self placeInStructure: (varString := var asString)) 
  					ifTrue: [structure]
  					ifFalse: [nonstruct].
  		(self variableDeclarationStringsForVariable: varString) do:
  			[:decl|
  			 decl first == $#
  				ifTrue:
  					[target nextPutAll: decl; cr]
  				ifFalse:
  					[self isGeneratingPluginCode
  						ifTrue:
  							[varString = 'interpreterProxy'
  								ifTrue: "quite special..."
  									[self preDeclareInterpreterProxyOn: target]
  								ifFalse: [target nextPutAll: 'static ']]
  						ifFalse:
  							[(vmClass mustBeGlobal: varString) ifFalse:
  								[target nextPutAll: (inStruct ifTrue: ['_iss '] ifFalse: ['static '])]].
  					target nextPutAll: decl; nextPut: $;; cr]]].
  	structure
  		nextPutAll: '#undef _iss'; cr;
  		nextPutAll: '#if SQ_USE_GLOBAL_STRUCT'; cr;
  		nextPutAll: ' } fum;'; cr;
+ 		nextPutAll: '# if SQ_USE_GLOBAL_STRUCT_REG';cr;
+ 		nextPutAll: '#	define DECL_MAYBE_SQ_GLOBAL_STRUCT /* using a global reg pointer */'; cr;
+ 		nextPutAll: '#	define DECL_MAYBE_VOLATILE_SQ_GLOBAL_STRUCT /* using a global reg pointer */'; cr;
+ 		nextPutAll: '# else';cr;
+ 		nextPutAll: '#	define DECL_MAYBE_SQ_GLOBAL_STRUCT register struct foo * foo = &fum;'; cr;
+ 		nextPutAll: '#	define DECL_MAYBE_VOLATILE_SQ_GLOBAL_STRUCT volatile register struct foo * foo = &fum;'; cr;
+ 		nextPutAll: '# endif';cr;
- 		nextPutAll: ' #if SQ_USE_GLOBAL_STRUCT_REG';cr;
- 		nextPutAll: '# define DECL_MAYBE_SQ_GLOBAL_STRUCT /* using a global reg pointer */'; cr;
- 		nextPutAll: '# define DECL_MAYBE_VOLATILE_SQ_GLOBAL_STRUCT /* using a global reg pointer */'; cr;
- 		nextPutAll:'#else';cr;
- 		nextPutAll: '# define DECL_MAYBE_SQ_GLOBAL_STRUCT register struct foo * foo = &fum;'; cr;
- 		nextPutAll: '# define DECL_MAYBE_VOLATILE_SQ_GLOBAL_STRUCT volatile register struct foo * foo = &fum;'; cr;
- 		nextPutAll: '#endif';cr;
  		nextPutAll: '# define GIV(interpreterInstVar) (foo->interpreterInstVar)'; cr;
  		nextPutAll: '#else'; cr;
  		nextPutAll: '# define DECL_MAYBE_SQ_GLOBAL_STRUCT /* oh, no mr bill!! */'; cr;
  		nextPutAll: '# define DECL_MAYBE_VOLATILE_SQ_GLOBAL_STRUCT /* oh no, mr bill!! */'; cr;
  		nextPutAll: '# define GIV(interpreterInstVar) interpreterInstVar'; cr;
  		nextPutAll: '#endif'; cr.
  
  	"if the machine needs the fum structure defining locally, do it now; global register users don't need that, but DO need some batshit insane C macro fudging in order to convert the define of USE_GLOBAL_STRUCT_REG into a simple string to use in the asm clause below. Sigh."
  	structure
  		nextPutAll: '#if SQ_USE_GLOBAL_STRUCT'; cr;
+ 		nextPutAll: '# if SQ_USE_GLOBAL_STRUCT_REG';cr;
+ 		nextPutAll: '#	define fooxstr(s) foostr(s)'; cr;
+ 		nextPutAll: '#	define foostr(s)  #s'; cr;
- 		nextPutAll: '#if SQ_USE_GLOBAL_STRUCT_REG';cr;
- 		nextPutAll: '#define fooxstr(s) foostr(s)'; cr;
- 		nextPutAll: '#define foostr(s)  #s'; cr;
  		nextPutAll: 'register struct foo * foo asm(fooxstr(USE_GLOBAL_STRUCT_REG));'; cr;
+ 		nextPutAll: '# else'; cr;
- 		nextPutAll: '#else'; cr;
  		nextPutAll: 'static struct foo * foo = &fum;'; cr;
+ 		nextPutAll: '# endif'; cr;
- 		nextPutAll: '#endif'; cr;
  		nextPutAll: '#endif'; cr.
  
  	aStream
  		nextPutAll: structure contents;
  		nextPutAll: nonstruct contents;
  		cr!

Item was changed:
  ----- Method: CoInterpreter>>primitivePropertyFlagsForSpur: (in category 'cog jit support') -----
  primitivePropertyFlagsForSpur: primIndex
  	<inline: true>
  	"Answer any special requirements of the given primitive.  Spur always needs to set
  	 primitiveFunctionPointer and newMethod so primitives can retry on failure due to forwarders."
  	| baseFlags |
+ 	self cCode: [] inSmalltalk: [#(	primitiveObjectAtPut primitiveCopyObject primitiveSpurStringReplace
+ 									primitiveSpurFloatArrayAt primitiveSpurFloatArrayAtPut
+ 									primitiveExternalCall primitiveCalloutToFFI)]. "For senders..."
- 	self cCode: [] inSmalltalk: [#(primitiveObjectAtPut primitiveExternalCall primitiveCalloutToFFI)]. "For senders..."
  	baseFlags := profileSemaphore = objectMemory nilObject
  					ifTrue: [0]
  					ifFalse: [PrimCallCollectsProfileSamples].
+ 
  	(primIndex = PrimNumberObjectAtPut
+ 	 or: [primIndex = PrimNumberCopyObject
+ 	 or: [primIndex = PrimNumberStringReplace]]) ifTrue:
- 	 or: [primIndex = PrimNumberCopyObject]) ifTrue:
  		[^baseFlags + PrimCallOnSmalltalkStack].
+ 	(primIndex = PrimNumberFloatArrayAt
+ 	 or: [primIndex = PrimNumberFloatArrayAtPut]) ifTrue:
+ 		[^baseFlags + PrimCallOnSmalltalkStack + PrimCallOnSmalltalkStackAlign2x].
+ 
  	baseFlags := baseFlags + PrimCallNeedsPrimitiveFunction + PrimCallNeedsNewMethod.
  
  	(self isCalloutPrimitiveIndex: primIndex) ifTrue: "For callbacks & module unloading"
  		[^baseFlags + PrimCallMayEndureCodeCompaction + PrimCallIsExternalCall].
  	(self isCodeCompactingPrimitiveIndex: primIndex) ifTrue: "For code reclamations"
  		[^baseFlags bitOr: PrimCallMayEndureCodeCompaction].
  
  	^baseFlags!

Item was changed:
  ----- Method: CogARMCompiler>>rewriteTransferAt:target: (in category 'inline cacheing') -----
  rewriteTransferAt: callSiteReturnAddress target: callTargetAddress
  	"Rewrite a call/jump instruction to call a different target.  This variant is used to link PICs
  	 in ceSendMiss et al, and to rewrite call/jumps in CPICs.
  	Answer the extent of
  	 the code change which is used to compute the range of the icache to flush."
  	<var: #callSiteReturnAddress type: #usqInt>
  	<var: #callTargetAddress type: #usqInt>
  	| callDistance instr |
  	"for debug - [cogit disassembleFrom: callSiteReturnAddress - 10 to: callSiteReturnAddress - 1]."
- 	(cogit initialized and: [callTargetAddress < cogit minCallAddress]) ifTrue:
- 		[self error: 'linking callsite to invalid address'].
  
  	callDistance := (callTargetAddress - (callSiteReturnAddress + 8 "pc offset"- 4 "return offset")) signedIntToLong.
  	self assert: (self isInImmediateJumpRange: callDistance). "we don't support long call updates, yet"
  
  	instr := self instructionBeforeAddress: callSiteReturnAddress.
  	self assert: ((self instructionIsB: instr) or: [self instructionIsBL: instr]).
  	
  	objectMemory longAt:  (self instructionAddressBefore: callSiteReturnAddress) put: ((instr bitAnd: 16rFF000000) bitOr: (callDistance // 4 bitAnd: 16rFFFFFF)).
  
  	self assert: (self callTargetFromReturnAddress: callSiteReturnAddress) = callTargetAddress.
  
  	^4!

Item was changed:
  ----- Method: CogARMv8Compiler>>rewriteInlineCacheAt:tag:target: (in category 'inline cacheing') -----
  rewriteInlineCacheAt: callSiteReturnAddress tag: cacheTag target: callTargetAddress
  	"Rewrite an inline cache to call a different target for a new tag.  This variant is used
  	 to link unlinked sends in ceSend:to:numArgs: et al.  Answer the extent of the code
  	 change which is used to compute the range of the icache to flush.
  	 N.B.  On 64-bit platforms the inline cache tag is only 32-bits wide, hence this code
  	 is very similar to that for ARM32 CogOutOfLineLiteralsARMCompiler."
  	<var: #callSiteReturnAddress type: #usqInt>
  	<var: #callTargetAddress type: #usqInt>
  	| call callDistance |
- 	(cogit initialized and: [callTargetAddress < cogit minCallAddress]) ifTrue:
- 		[self error: 'linking callsite to invalid address'].
  	callDistance := (callTargetAddress - (callSiteReturnAddress - 4 "return offset")) signedIntToLong.
  	self assert: (self isInImmediateJumpRange: callDistance). "we don't support long call updates here"
  	call := self bl: callDistance.
  	cogit
  		codeLong32At: (self instructionAddressBefore: callSiteReturnAddress) put: call;
  		codeLong32At: (self pcRelativeAddressAt: callSiteReturnAddress - 8) put: cacheTag signedIntToLong.
  	self assert: (self inlineCacheTagAt: callSiteReturnAddress) signedIntFromLong = cacheTag.
  	"self cCode: ''
  		inSmalltalk: [cogit disassembleFrom: callSiteReturnAddress - 8 to: (self pcRelativeAddressAt: callSiteReturnAddress - 8)]."
  	^4!

Item was changed:
  ----- Method: CogIA32Compiler>>rewriteCallAt:target: (in category 'inline cacheing') -----
  rewriteCallAt: callSiteReturnAddress target: callTargetAddress
  	"Rewrite a call instruction to call a different target.  This variant is used to link PICs
  	 in ceSendMiss et al, and to rewrite cached primitive calls.   Answer the extent of
  	 the code change which is used to compute the range of the icache to flush."
  	<var: #callSiteReturnAddress type: #usqInt>
  	<var: #callTargetAddress type: #usqInt>
  	| callDistance |
  	"self cCode: ''
  		inSmalltalk: [cogit disassembleFrom: callSiteReturnAddress - 10 to: callSiteReturnAddress - 1]."
- 	(cogit initialized and: [callTargetAddress < cogit minCallAddress]) ifTrue:
- 		[self error: 'linking callsite to invalid address'].
  	callDistance := (callTargetAddress - callSiteReturnAddress) signedIntToLong.
  	objectMemory
  		byteAt: callSiteReturnAddress - 1 put: (callDistance >> 24 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 2 put: (callDistance >> 16 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 3 put: (callDistance >>   8 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 4 put: (callDistance            bitAnd: 16rFF).
  	self assert: (self callTargetFromReturnAddress: callSiteReturnAddress) = callTargetAddress.
  	"self cCode: ''
  		inSmalltalk: [cogit disassembleFrom: callSiteReturnAddress - 10 to: callSiteReturnAddress - 1]."
  	^5!

Item was changed:
  ----- Method: CogIA32Compiler>>rewriteInlineCacheAt:tag:target: (in category 'inline cacheing') -----
  rewriteInlineCacheAt: callSiteReturnAddress tag: cacheTag target: callTargetAddress
  	"Rewrite an inline cache to call a different target for a new tag.  This variant is used
  	 to link unlinked sends in ceSend:to:numArgs: et al.  Answer the extent of the code
  	 change which is used to compute the range of the icache to flush."
  	<var: #callSiteReturnAddress type: #usqInt>
  	<var: #callTargetAddress type: #usqInt>
  	| callDistance |
  	"self cCode: ''
  		inSmalltalk: [cogit disassembleFrom: callSiteReturnAddress - 10 to: callSiteReturnAddress - 1]."
- 	(cogit initialized and: [callTargetAddress < cogit minCallAddress]) ifTrue:
- 		[self error: 'linking callsite to invalid address'].
  	callDistance := (callTargetAddress - callSiteReturnAddress) signedIntToLong.
  	objectMemory
  		byteAt: callSiteReturnAddress - 1 put: (callDistance >> 24 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 2 put: (callDistance >> 16 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 3 put: (callDistance >>   8 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 4 put: (callDistance            bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 6 put: (cacheTag >> 24 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 7 put: (cacheTag >> 16 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 8 put: (cacheTag >>   8 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 9 put: (cacheTag            bitAnd: 16rFF).
  	self assert: (self callTargetFromReturnAddress: callSiteReturnAddress) = callTargetAddress.
  	"self cCode: ''
  		inSmalltalk: [cogit disassembleFrom: callSiteReturnAddress - 10 to: callSiteReturnAddress - 1]."
  	^10!

Item was changed:
  ----- Method: CogInLineLiteralsARMCompiler>>rewriteInlineCacheAt:tag:target: (in category 'inline cacheing') -----
  rewriteInlineCacheAt: callSiteReturnAddress tag: cacheTag target: callTargetAddress
  	"Rewrite an inline cache to call a different target for a new tag.  This variant is used
  	 to link unlinked sends in ceSend:to:numArgs: et al.  Answer the extent of the code
  	 change which is used to compute the range of the icache to flush."
  	<var: #callSiteReturnAddress type: #usqInt>
  	<var: #callTargetAddress type: #usqInt>
  	| call callDistance |
- 	(cogit initialized and: [callTargetAddress < cogit minCallAddress]) ifTrue:
- 		[self error: 'linking callsite to invalid address'].
  	callDistance := (callTargetAddress - (callSiteReturnAddress + 8 "pc offset"- 4 "return offset")) signedIntToLong.
  	self assert: (self isInImmediateJumpRange: callDistance). "we don't support long call updates here"
  	call := self bl: callDistance.
  	objectMemory longAt: (self instructionAddressBefore: callSiteReturnAddress ) put: call.
  	self insert32BitOperand: cacheTag into4InstructionsPreceding: (self instructionAddressBefore: callSiteReturnAddress ).
  	self assert: (self callTargetFromReturnAddress: callSiteReturnAddress) = callTargetAddress.
  	self assert: (self extract32BitOperandFrom4InstructionsPreceding: (self instructionAddressBefore: callSiteReturnAddress )) = cacheTag.
  	"self cCode: ''
  		inSmalltalk: [cogit disassembleFrom: callSiteReturnAddress - 20 to: callSiteReturnAddress - 1]."
  	^20!

Item was changed:
  ----- Method: CogOutOfLineLiteralsARMCompiler>>rewriteInlineCacheAt:tag:target: (in category 'inline cacheing') -----
  rewriteInlineCacheAt: callSiteReturnAddress tag: cacheTag target: callTargetAddress
  	"Rewrite an inline cache to call a different target for a new tag.  This variant is used
  	 to link unlinked sends in ceSend:to:numArgs: et al.  Answer the extent of the code
  	 change which is used to compute the range of the icache to flush."
  	<var: #callSiteReturnAddress type: #usqInt>
  	<var: #callTargetAddress type: #usqInt>
  	| call callDistance |
- 	(cogit initialized and: [callTargetAddress < cogit minCallAddress]) ifTrue:
- 		[self error: 'linking callsite to invalid address'].
  	callDistance := (callTargetAddress - (callSiteReturnAddress + 8 "pc offset"- 4 "return offset")) signedIntToLong.
  	self assert: (self isInImmediateJumpRange: callDistance). "we don't support long call updates here"
  	call := self bl: callDistance.
  	objectMemory
  		longAt: (self instructionAddressBefore: callSiteReturnAddress ) put: call;
  		longAt: (self pcRelativeAddressAt: callSiteReturnAddress - 8) put: cacheTag.
  	self assert: (self inlineCacheTagAt: callSiteReturnAddress) = cacheTag.
  	"self cCode: ''
  		inSmalltalk: [cogit disassembleFrom: callSiteReturnAddress - 8 to: (self pcRelativeAddressAt: callSiteReturnAddress - 8)]."
  	^4!

Item was changed:
  ----- Method: CogX64Compiler>>rewriteCallAt:target: (in category 'inline cacheing') -----
  rewriteCallAt: callSiteReturnAddress target: callTargetAddress
  	"Rewrite a call instruction to call a different target.  This variant is used to link PICs
  	 in ceSendMiss et al, and to rewrite cached primitive calls.   Answer the extent of
  	 the code change which is used to compute the range of the icache to flush."
  	<var: #callSiteReturnAddress type: #usqInt>
  	<var: #callTargetAddress type: #usqInt>
  	| callDistance |
  	"self cCode: ''
  		inSmalltalk: [cogit disassembleFrom: callSiteReturnAddress - 10 to: callSiteReturnAddress - 1]."
- 	(cogit initialized and: [callTargetAddress < cogit minCallAddress]) ifTrue:
- 		[self error: 'linking callsite to invalid address'].
  	callDistance := (callTargetAddress - callSiteReturnAddress) signedIntToLong.
  	objectMemory
  		byteAt: callSiteReturnAddress - 1 put: (callDistance >> 24 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 2 put: (callDistance >> 16 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 3 put: (callDistance >>   8 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 4 put: (callDistance            bitAnd: 16rFF).
  	self assert: (self callTargetFromReturnAddress: callSiteReturnAddress) = callTargetAddress.
  	"self cCode: ''
  		inSmalltalk: [cogit disassembleFrom: callSiteReturnAddress - 10 to: callSiteReturnAddress - 1]."
  	^5!

Item was changed:
  ----- Method: CogX64Compiler>>rewriteInlineCacheAt:tag:target: (in category 'inline cacheing') -----
  rewriteInlineCacheAt: callSiteReturnAddress tag: cacheTag target: callTargetAddress
  	"Rewrite an inline cache to call a different target for a new tag.  This variant is used
  	 to link unlinked sends in ceSend:to:numArgs: et al.  Answer the extent of the code
  	 change which is used to compute the range of the icache to flush.
  	 N.B.  On 64-bit platforms the inline cache tag is only 32-bits wide, hence this code
  	 is identical to that for the IA32."
  	<var: #callSiteReturnAddress type: #usqInt>
  	<var: #callTargetAddress type: #usqInt>
  	| callDistance |
  	"self cCode: ''
  		inSmalltalk: [cogit disassembleFrom: callSiteReturnAddress - 12 to: callSiteReturnAddress - 1]."
- 	(cogit initialized and: [callTargetAddress < cogit minCallAddress]) ifTrue:
- 		[self error: 'linking callsite to invalid address'].
  	callDistance := (callTargetAddress - callSiteReturnAddress) signedIntToLong.
  	objectMemory
  		byteAt: callSiteReturnAddress - 1 put: (callDistance >> 24 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 2 put: (callDistance >> 16 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 3 put: (callDistance >>   8 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 4 put: (callDistance            bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 6 put: (cacheTag >> 24 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 7 put: (cacheTag >> 16 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 8 put: (cacheTag >>   8 bitAnd: 16rFF);
  		byteAt: callSiteReturnAddress - 9 put: (cacheTag            bitAnd: 16rFF).
  	self assert: (self callTargetFromReturnAddress: callSiteReturnAddress) = callTargetAddress.
  	"self cCode: ''
  		inSmalltalk: [cogit disassembleFrom: callSiteReturnAddress - 12 to: callSiteReturnAddress - 1]."
  	^12!

Item was changed:
  CogClass subclass: #Cogit
(excessive size, no diff calculated)

Item was changed:
  ----- Method: Cogit>>initializeCodeZoneFrom:upTo: (in category 'initialization') -----
  initializeCodeZoneFrom: startAddress upTo: endAddress
  	<api>
  	self initializeBackend.
  	self sqMakeMemoryExecutableFrom: startAddress
  		To: endAddress
  		CodeToDataDelta: (self cppIf: #DUAL_MAPPED_CODE_ZONE
  								ifTrue: [self addressOf: codeToDataDelta put: [:v| codeToDataDelta := v]]
  								ifFalse: [nil]).
  	codeBase := methodZoneBase := startAddress.
  	backEnd stopsFrom: startAddress to: endAddress - 1.
  	self cCode: '' inSmalltalk:
  		[self initializeProcessor.
  		 backEnd stopsFrom: 0 to: guardPageSize - 1.
  		 backEnd has64BitPerformanceCounter ifTrue:
  			[self initializeSimulationIOHighResClockForProfiling]].
- 	minValidCallAddress := (codeBase min: coInterpreter interpretAddress)
- 								min: coInterpreter primitiveFailAddress.
  	methodZone manageFrom: methodZoneBase to: endAddress.
  	self assertValidDualZone.
  	backEnd detectFeatures.
  	self maybeGenerateCacheFlush.
  	self generateVMOwnerLockFunctions.
  	self genGetLeafCallStackPointers.
  	self generateStackPointerCapture.
  	self generateTrampolines.
  	self computeEntryOffsets.
  	self computeFullBlockEntryOffsets.
  	self generateClosedPICPrototype.
  	self alignMethodZoneBase.
  
  	"None of the above is executed beyond ceCheckFeatures, so a bulk flush now is the leanest thing to do."
  	backEnd flushICacheFrom: startAddress to: methodZoneBase asUnsignedInteger.
  	self maybeFlushWritableZoneFrom: startAddress to: methodZoneBase asUnsignedInteger.
  	"Repeat so that now the methodZone ignores the generated run-time."
  	methodZone manageFrom: methodZoneBase to: endAddress.
  	"N.B. this is assumed to be the last thing done in initialization; see Cogit>>initialized.
  	 This is done only to compute openPICSize; the generated code is discarded."
  	self generateOpenPICPrototype!

Item was removed:
- ----- Method: Cogit>>minCallAddress (in category 'accessing') -----
- minCallAddress
- 	<cmacro: '() minValidCallAddress'>
- 	^minValidCallAddress!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>compileInterpreterPrimitive (in category 'primitive generators') -----
  compileInterpreterPrimitive
  	<inline: true>
  	| primitiveRoutine flags |
  	<var: #primitiveRoutine declareC: 'void (*primitiveRoutine)(void)'>
  	primitiveRoutine := coInterpreter
  							functionPointerForCompiledMethod: methodObj
  							primitiveIndex: primitiveIndex
  							primitivePropertyFlagsInto: (self addressOf: flags put: [:val| flags := val]).
+ 	(objectRepresentation hasSpurMemoryManagerAPI
+ 	 and: [flags anyMask: PrimCallOnSmalltalkStack]) ifTrue:
+ 		[^self compileOnStackExternalPrimitive: primitiveRoutine flags: flags].
- 	self deny: (flags anyMask: PrimCallOnSmalltalkStack).
  	^self compileInterpreterPrimitive: primitiveRoutine flags: flags!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>compileOnStackExternalPrimitive:flags: (in category 'primitive generators') -----
  compileOnStackExternalPrimitive: primitiveRoutine flags: flags
  	"Compile a fast call of a C primitive using the current stack page, avoiding the stack switch except on failure.
  	 This convention still uses stackPointer and argumentCount to access operands.  Push all operands to the stack,
  	 assign stackPointer, argumentCount, and zero primFailCode.  Make the call (saving a LinkReg if required).
  	 Test for failure and return.  On failure on Spur, if there is an accessor depth, assign framePointer and newMethod,
  	 do the stack switch, call checkForAndFollowForwardedPrimitiveState, and loop back if forwarders are found.
  	 Fall through to frame build."
  	<option: #SpurObjectMemory>
  	<var: #primitiveRoutine declareC: 'void (*primitiveRoutine)(void)'>
+ 	| calleeSavedRegisterMask linkRegSaveRegister spRegSaveRegister jmpFail retry continueAfterProfileSample jumpToTakeSample |
- 	| calleeSavedRegisterMask linkRegSaveRegister spRegSaveRegister jmp retry continueAfterProfileSample jumpToTakeSample |
  	self assert: (objectRepresentation hasSpurMemoryManagerAPI and: [flags anyMask: PrimCallOnSmalltalkStack]).
  	self deny: (backEnd hasVarBaseRegister
  				and: [self register: VarBaseReg isInMask: ABICallerSavedRegisterMask]).
  
  	(coInterpreter recordFastCCallPrimTraceForMethod: methodObj) ifTrue:
  		[self genFastPrimTraceUsing: ClassReg and: SendNumArgsReg].
  
  	"Clear the primFailCode and set argumentCount"
  	self MoveCq: 0 R: TempReg.
  	self MoveR: TempReg Aw: coInterpreter primFailCodeAddress.
  	methodOrBlockNumArgs ~= 0 ifTrue:
  		[self AddCq: methodOrBlockNumArgs R: TempReg]. "As small or smaller than move on most archs"
  	self MoveR: TempReg Aw: coInterpreter argumentCountAddress.
  	self genExternalizeStackPointerForFastPrimitiveCall.
  	"We may need to save LinkReg and/or SPReg, and given the stack machinations
  	  it is much easier to save them in callee saved registers than on the stack itself."
  	calleeSavedRegisterMask := ABICalleeSavedRegisterMask bitClear: (self registerMaskFor: ClassReg).
  	backEnd hasLinkRegister ifTrue:
  		[linkRegSaveRegister := self availableRegisterOrNoneIn: calleeSavedRegisterMask.
  		 self deny: linkRegSaveRegister = NoReg.
  		 self MoveR: LinkReg R: linkRegSaveRegister.
  		 calleeSavedRegisterMask := calleeSavedRegisterMask bitClear: (self registerMaskFor: linkRegSaveRegister)].
  	spRegSaveRegister := NoReg.
  	(SPReg ~= NativeSPReg
  	 and: [(self isCalleeSavedReg: SPReg) not]) ifTrue:
  		[spRegSaveRegister := self availableRegisterOrNoneIn: calleeSavedRegisterMask.
  		 self deny: spRegSaveRegister = NoReg.
  		 self MoveR: SPReg R: spRegSaveRegister].
  	retry := self Label.
  	(flags anyMask: PrimCallOnSmalltalkStackAlign2x)
  		ifTrue: [self AndCq: (objectMemory wordSize * 2 - 1) bitInvert R: SPReg R: NativeSPReg]
  		ifFalse:
  			[SPReg ~= NativeSPReg ifTrue:
  				[backEnd genLoadNativeSPRegWithAlignedSPReg]].
  	backEnd genMarshallNArgs: 0 arg: 0 arg: 0 arg: 0 arg: 0.
  	self CallFullRT: primitiveRoutine asInteger.
  	backEnd genRemoveNArgsFromStack: 0.
+ 	"test primFailCode and jump to failure sequence if non-zero"
  	self MoveAw: coInterpreter primFailCodeAddress R: TempReg.
  	spRegSaveRegister ~= NoReg ifTrue:
  		[self MoveR: spRegSaveRegister R: SPReg].
  	self CmpCq: 0 R: TempReg.
+ 	jmpFail := self JumpNonZero: 0.
- 	jmp := self JumpNonZero: 0.
  	"Remember to restore the native stack pointer to point to the C stack,
  	 otherwise the Smalltalk frames will get overwritten on an interrupt."
  	SPReg ~= NativeSPReg ifTrue:
  		[backEnd genLoadCStackPointer].
  	"placing the test here attributes the tick to the primitive plus any checkForAndFollowForwardedPrimitiveState
  	 scanning, but attributes all of a failing primitive to the current method (in ceStackOverflow: on frame build)."
  	(backEnd has64BitPerformanceCounter
  	 and: [flags anyMask: PrimCallCollectsProfileSamples]) ifTrue:
  		[jumpToTakeSample := self genCheckForProfileTimerTick: (self registerMaskFor: NoReg)].
  	"At this point the primitive has cut back stackPointer to point to the result."
  	continueAfterProfileSample :=
  	self MoveAw: coInterpreter stackPointerAddress R: TempReg.
  	"get result and restore retpc"
  	backEnd hasLinkRegister
  		ifTrue:
  			[self MoveMw: 0 r: TempReg R: ReceiverResultReg;
  				AddCq: objectMemory wordSize R: TempReg R: SPReg;
  				MoveR: linkRegSaveRegister R: LinkReg]
  		ifFalse:
  			[| retpcOffset |
  			"The original retpc is (argumentCount + 1) words below stackPointer."
  			 retpcOffset := (methodOrBlockNumArgs + 1 * objectMemory wordSize) negated.
  			 self MoveMw: retpcOffset r: TempReg R: ClassReg; "get retpc"
  				MoveR: TempReg R: SPReg;
  			 	MoveMw: 0 r: TempReg R: ReceiverResultReg;
  				MoveR: ClassReg Mw: 0 r: TempReg "put it back on stack for the return..."].
  	self RetN: 0.
  
  	(backEnd has64BitPerformanceCounter
  	 and: [flags anyMask: PrimCallCollectsProfileSamples]) ifTrue:
  		[jumpToTakeSample jmpTarget: self Label.
  		 self genTakeProfileSample.
  		 self Jump: continueAfterProfileSample].
  
+ 	"Primitive failure. If there is an accessor depth, scan and retry on failure (but what if failing due to out of memory?)"
+ 	jmpFail jmpTarget: self Label.
- 	jmp jmpTarget: self Label.
  	(coInterpreter accessorDepthForPrimitiveMethod: methodObj) >= 0
  		ifTrue:
  			[| skip |
  			 "Given that following primitive state to the accessor depth is recursive, we're asking for
  			  trouble if we run the fixup on the Smalltalk stack page.  Run it on the full C stack instead.
  			 This won't be a performance issue since primitive failure should be very rare."
  			self MoveR: FPReg Aw: coInterpreter framePointerAddress.
  			self MoveCw: primitiveRoutine asInteger R: TempReg.
  			self MoveR: TempReg Aw: coInterpreter primitiveFunctionPointerAddress.
  			methodLabel addDependent:
  				(self annotateAbsolutePCRef:
  					(self MoveCw: methodLabel asInteger R: ClassReg)).
  			self MoveMw: (self offset: CogMethod of: #methodObject) r: ClassReg R: TempReg.
  			self MoveR: TempReg Aw: coInterpreter newMethodAddress.
  			self genLoadCStackPointersForPrimCall.
  			backEnd genMarshallNArgs: 0 arg: 0 arg: 0 arg: 0 arg: 0.
  			self CallFullRT: (self cCode: [#checkForAndFollowForwardedPrimitiveState asUnsignedIntegerPtr]
  								   inSmalltalk: [self simulatedTrampolineFor: #checkForAndFollowForwardedPrimitiveState]).
  			backEnd genLoadStackPointersForPrimCall: ClassReg.
  			self CmpCq: 0 R: ABIResultReg.
  			skip := self JumpZero: 0.
  			self MoveCq: 0 R: TempReg.
  			self MoveR: TempReg Aw: coInterpreter primFailCodeAddress.
  			self Jump: retry.
  			skip jmpTarget: self Label]
  		ifFalse: "must reload SPReg to undo any alignment change,"
  			[(flags anyMask: PrimCallOnSmalltalkStackAlign2x) ifTrue:
+ 				[backEnd hasLinkRegister
+ 					ifTrue:
+ 						[self MoveAw: coInterpreter stackPointerAddress R: SPReg]
+ 					ifFalse: "remember to include return address; use scratch to avoid an interrupt overwriting retpc"
+ 						[self MoveAw: coInterpreter stackPointerAddress R: TempReg.
+ 						 self SubCq: objectRepresentation wordSize R: TempReg.
+ 						 self MoveR: TempReg R: SPReg]]].
- 				[backEnd genLoadStackPointersForPrimCall: ClassReg]].
  	"Remember to restore the native stack pointer to point to the C stack,
  	 otherwise the Smalltalk frames will get overwritten on an interrupt."
  	SPReg ~= NativeSPReg ifTrue:
  		[backEnd genLoadCStackPointer].
  	"The LinkRegister now contains the return address either of the primitive call or of checkForAndFollowForwardedPrimitiveState.
  	 It must be restored to the return address of the send invoking this primitive method."
  	backEnd hasLinkRegister ifTrue:
  		[self MoveR: linkRegSaveRegister R: LinkReg].
  	"Finally remember to reload ReceiverResultReg if required.  Even if
  	 arguments have been pushed, the prolog sequence assumes it is live."
  	(self register: ReceiverResultReg isInMask: ABICallerSavedRegisterMask) ifTrue:
  		[self MoveMw: (methodOrBlockNumArgs + (backEnd hasLinkRegister ifTrue: [0] ifFalse: [1])) * objectMemory wordSize
  			r: SPReg
  			R: ReceiverResultReg].
  	"continue to frame build..."
  	^0!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>compilePrimitive (in category 'primitive generators') -----
  compilePrimitive
  	"Compile a primitive.  If possible, performance-critical primitives will
  	 be generated by their own routines (primitiveGenerator).  Otherwise,
  	 if there is a primitive at all, we call the C routine with the usual
  	 stack-switching dance, test the primFailCode and then either return
  	 on success or continue to the method body."
  	<inline: false>
  	| primitiveDescriptor primitiveRoutine flags |
  	<var: #primitiveDescriptor type: #'PrimitiveDescriptor *'>
  	<var: #primitiveRoutine declareC: 'void (*primitiveRoutine)(void)'>
  	primitiveIndex = 0 ifTrue: [^0].
  	"If a descriptor specifies an argument count (by numArgs >= 0) then it must match
  	 for the generated code to be correct.  For example for speed many primitives use
  	 ResultReceiverReg instead of accessing the stack, so the receiver better be at
  	 numArgs down the stack.  Use the interpreter version if not."
  	((primitiveDescriptor := self primitiveGeneratorOrNil) notNil
  	 and: [primitiveDescriptor primitiveGenerator notNil
  	 and: [(primitiveDescriptor primNumArgs < 0 "means generator doesn't care"
  		   or: [primitiveDescriptor primNumArgs = (coInterpreter argumentCountOf: methodObj)])]]) ifTrue:
  		[| opcodeIndexAtPrimitive code |
  		"Note opcodeIndex so that any arg load instructions
  		 for unimplemented primitives can be discarded."
  		 opcodeIndexAtPrimitive := opcodeIndex.
  		 code := objectRepresentation perform: primitiveDescriptor primitiveGenerator.
  
  		(code < 0 and: [code ~= UnimplementedPrimitive]) ifTrue: "Generator failed, so no point continuing..."
  			[^code].
  		"If the primitive can never fail then there is nothing more that needs to be done."
  		code = UnfailingPrimitive ifTrue:
  			[^0].
  		"If the machine code version handles all cases the only reason to call the interpreter
  		 primitive is to reap the primitive error code.  Don't bother if it isn't used."
  		(code = CompletePrimitive
  		 and: [(self methodUsesPrimitiveErrorCode: methodObj header: methodHeader) not]) ifTrue:
  			[^0].
  		"Discard any arg load code generated by the primitive generator."
  		code = UnimplementedPrimitive ifTrue:
  			[opcodeIndex := opcodeIndexAtPrimitive]].
  
  	primitiveRoutine := coInterpreter
  							functionPointerForCompiledMethod: methodObj
  							primitiveIndex: primitiveIndex
  							primitivePropertyFlagsInto: (self addressOf: flags put: [:val| flags := val]).
  	(flags anyMask: PrimCallDoNotJIT) ifTrue:
  		[^ShouldNotJIT].
  
  	(primitiveRoutine = 0 "no primitive"
  	or: [primitiveRoutine = (self cCoerceSimple: #primitiveFail to: 'void (*)(void)')]) ifTrue:
  		[^self genFastPrimFail].
  
  	(objectRepresentation hasSpurMemoryManagerAPI
  	 and: [flags anyMask: PrimCallOnSmalltalkStack]) ifTrue:
  		[^self compileOnStackExternalPrimitive: primitiveRoutine flags: flags].
- 	minValidCallAddress := minValidCallAddress min: primitiveRoutine asUnsignedInteger.
  	^self compileInterpreterPrimitive: primitiveRoutine flags: flags!

Item was changed:
  ----- Method: StackInterpreter class>>initializePrimitiveTable (in category 'initialization') -----
(excessive size, no diff calculated)

Item was changed:
  ----- Method: StackInterpreter>>printOopShortInner: (in category 'debug printing') -----
  printOopShortInner: oop
  	| classOop name nameLen |
  	<var: #name type: #'char *'>
  	<inline: false>
  	(objectMemory isImmediate: oop) ifTrue:
  		[(objectMemory isImmediateCharacter: oop) ifTrue:
  			[^'$%c(%x)' f: transcript printf: { objectMemory characterValueOf: oop. objectMemory characterValueOf: oop }].
  		 (objectMemory isIntegerObject: oop) ifTrue:
  			[^'%ld(16r%lx)' f: transcript printf: { objectMemory integerValueOf: oop. objectMemory integerValueOf: oop }].
  		 (objectMemory isImmediateFloat: oop) ifTrue:
+ 			[^'%g(16r%lx)' f: transcript printf: {objectMemory dbgFloatValueOf: oop. oop}].
- 			['%g(16r%lx)' f: transcript printf: {objectMemory dbgFloatValueOf: oop. oop}].
  		 ^'unknown immediate 16r%lx' f: transcript printf: oop].
  	(objectMemory addressCouldBeObj: oop) ifFalse:
  		[^self print: ((oop bitAnd: objectMemory allocationUnit - 1) ~= 0
  						ifTrue: [' is misaligned']
  						ifFalse: [self whereIs: oop])].
  	(objectMemory isFreeObject: oop) ifTrue:
  		[^self print: ' is a free chunk'].
  	(objectMemory isForwarded: oop) ifTrue:
  		[| target |
  		 target := objectMemory followForwarded: oop.
  		 ^' is a forwarder to 16r%lx' f: transcript printf: target].
  	(self isFloatObject: oop) ifTrue:
  		[^self printFloat: (objectMemory dbgFloatValueOf: oop)].
  	classOop := objectMemory fetchClassOfNonImm: oop.
  	(objectMemory addressCouldBeObj: classOop) ifFalse:
  		[^self print: 'a ??'].
  	(objectMemory numSlotsOf: classOop) = metaclassNumSlots ifTrue:
  		[^self printNameOfClass: oop count: 5].
  	oop = objectMemory nilObject ifTrue: [^self print: 'nil'].
  	oop = objectMemory trueObject ifTrue: [^self print: 'true'].
  	oop = objectMemory falseObject ifTrue: [^self print: 'false'].
  	nameLen := self lengthOfNameOfClass: classOop.
  	nameLen = 0 ifTrue: [^self print: 'a ??'].
  	name := self nameOfClass: classOop.
  	nameLen = 10 ifTrue:
  		[(self strncmp: name _: 'ByteString' _: 10) = 0 "strncmp is weird" ifTrue:
  			[^self printChar: $'; printStringOf: oop; printChar: $'].
  		 (self strncmp: name _: 'ByteSymbol' _: 10) = 0 "strncmp is weird" ifTrue:
  			[self printChar: $#; printStringOf: oop. ^self]].
  	(nameLen = 9 and: [(self strncmp: name _: 'Character' _: 9) = 0]) ifTrue:
  		[^self printChar: $$; printChar: (objectMemory integerValueOf: (objectMemory fetchPointer: 0 ofObject: oop))].
  	'a(n) %.*s' f: transcript printf: { nameLen. name }.
  	"Try to spot association-like things; they're all subclasses of LookupKey"
  	((objectMemory isPointersNonImm: oop)
  	 and: [(objectMemory instanceSizeOf: classOop) = (ValueIndex + 1)
  	 and: [(objectMemory isBytes: (objectMemory fetchPointer: KeyIndex ofObject: oop))]]) ifTrue:
  		[| classLookupKey |
  		 classLookupKey := objectMemory fetchClassOfNonImm: (objectMemory splObj: SchedulerAssociation).
  		 [classLookupKey = objectMemory nilObject ifTrue:
  			[^self].
  		  (objectMemory instanceSizeOf: classLookupKey) = (KeyIndex + 1)] whileFalse:
  			[classLookupKey := self superclassOf: classLookupKey].
  		 (self includesBehavior: classOop ThatOf: classLookupKey) ifTrue:
  			[self space;
  				printOopShortInner: (objectMemory fetchPointer: KeyIndex ofObject: oop);
  				print: ' -> ';
  				printHexnp: (objectMemory fetchPointer: ValueIndex ofObject: oop)]]!

Item was added:
+ ----- Method: TMethod>>extractSpurPrimitiveSelector (in category 'primitive compilation') -----
+ extractSpurPrimitiveSelector
+ 	"If the receiver is a cliché of the form
+ 			primitiveFoo
+ 				objectMemory hasSpurMemoryManagerAPI
+ 					ifTrue: [self primitiveFooSpur]
+ 					ifFalse: [self primitiveFooV3]
+ 	 then answer the Spur primitive selector, otherwise answer nil."
+ 	"self sn browseAllSelect:
+ 		[:m|
+ 		(m selector beginsWith: #primitive)
+ 		and: [(m sendsSelector: #hasSpurMemoryManagerAPI)
+ 		and: [(m asTranslationMethodOfClass: TMethod) extractSpurPrimitiveSelector notNil]]]"
+ 	| firstStmt |
+ 	((firstStmt := parseTree statements first) isSend
+ 	 and: [firstStmt receiver isSend
+ 	 and: [firstStmt receiver selector == #hasSpurMemoryManagerAPI
+ 	 and: [firstStmt numArgs = 2]]]) ifFalse:
+ 		[^nil].
+ 	firstStmt args
+ 		with: firstStmt selector keywords
+ 		do: [:block :kwd| | send |
+ 			(kwd = #ifTrue:
+ 			 and: [block isStmtList
+ 			 and: [block statements size = 1
+ 			 and: [(send := block statements first) isSend
+ 			 and: [send numArgs = 0 and: [send selector beginsWith: #primitive]]]]]) ifTrue:
+ 				[^send selector]].
+ 	^nil!

Item was changed:
  SharedPool subclass: #VMBytecodeConstants
  	instanceVariableNames: ''
+ 	classVariableNames: 'BytecodeSetHasDirectedSuperSend CtxtTempFrameStart LargeContextBit LargeContextSize LargeContextSlots NewsqueakV4BytecodeSet PrimNumberDoExternalCall PrimNumberDoPrimitive PrimNumberExternalCall PrimNumberFFICall PrimNumberFloatArrayAt PrimNumberFloatArrayAtPut PrimNumberFlushExternalPrimitives PrimNumberInstVarAt PrimNumberShallowCopy PrimNumberSlotAt PrimNumberStringReplace PrimNumberUnloadModule SistaV1BytecodeSet SmallContextSize SmallContextSlots SqueakV3PlusClosuresBytecodeSet'
- 	classVariableNames: 'BytecodeSetHasDirectedSuperSend CtxtTempFrameStart LargeContextBit LargeContextSize LargeContextSlots NewsqueakV4BytecodeSet PrimNumberDoExternalCall PrimNumberDoPrimitive PrimNumberExternalCall PrimNumberFFICall PrimNumberFlushExternalPrimitives PrimNumberInstVarAt PrimNumberShallowCopy PrimNumberSlotAt PrimNumberUnloadModule SistaV1BytecodeSet SmallContextSize SmallContextSlots SqueakV3PlusClosuresBytecodeSet'
  	poolDictionaries: 'VMBasicConstants'
  	category: 'VMMaker-Interpreter'!
  
  !VMBytecodeConstants commentStamp: '<historical>' prior: 0!
  self ensureClassPool.
  #(CtxtTempFrameStart LargeContextBit LargeContextSize SmallContextSize) do:
  	[:k|
  	self classPool declare: k from: ObjectMemory classPool]!



