[Vm-dev] VM Maker: VMMaker.oscog-eem.2807.mcz

commits at source.squeak.org
Tue Sep 15 20:47:24 UTC 2020


Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.2807.mcz

==================== Summary ====================

Name: VMMaker.oscog-eem.2807
Author: eem
Time: 15 September 2020, 1:47:10.947101 pm
UUID: 697ee836-b47e-4392-a5e8-3a43b3d5dbf8
Ancestors: VMMaker.oscog-eem.2806

Cogit:
Fix a slip in CogMIPSELCompiler>>computeMaximumSize and nuke an unused concretizer.  Simplify maybeBreakGeneratingFrom:to: senders and save epsilon.

=============== Diff against VMMaker.oscog-eem.2806 ===============

Item was changed:
  ----- Method: CogMIPSELCompiler>>computeMaximumSize (in category 'generate machine code') -----
  computeMaximumSize
  	"Each MIPS instruction has 4 bytes. Many abstract opcodes need more than one
  	 instruction. Instructions that refer to constants and/or literals depend on literals
  	 being stored in-line or out-of-line.
  
  	 N.B.  The ^N forms are to get around the bytecode compiler's long branch
  	 limits which are exceeded when each case jumps around the otherwise."
  
  	opcode
  		caseOf: {
+ 		[BrEqualRR]					-> [^8].
- 		[BrEqualRR]						-> [^8].
  		[BrNotEqualRR]					-> [^8].
  		[BrUnsignedLessRR]			-> [^12].
  		[BrUnsignedLessEqualRR]		-> [^12].
  		[BrUnsignedGreaterRR]			-> [^12].
  		[BrUnsignedGreaterEqualRR]	-> [^12].
  		[BrSignedLessRR]				-> [^12].
  		[BrSignedLessEqualRR]			-> [^12].
  		[BrSignedGreaterRR]			-> [^12].
  		[BrSignedGreaterEqualRR]		-> [^12].
  		[BrLongEqualRR]				-> [^16].
+ 		[BrLongNotEqualRR]			-> [^16].
- 		[BrLongNotEqualRR]				-> [^16].
  		[MulRR]					-> [^4].
  		[DivRR]					-> [^4].
  		[MoveLowR]			-> [^4].
  		[MoveHighR]			-> [^4].
  
  		"Noops & Pseudo Ops"
  		[Label]					-> [^0].
  		[Literal]					-> [^4].
  		[AlignmentNops]		-> [^(operands at: 0) - 4].
  		[Fill32]					-> [^4].
  		[Nop]					-> [^4].
  		"Control"
  		[Call]					-> [^self literalLoadInstructionBytes + 8].
  		[CallFull]				-> [^self literalLoadInstructionBytes + 8].
+ 		[JumpR]				-> [^8].
- 		[JumpR]					-> [^8].
  		[Jump]					-> [^8].
  		[JumpFull]				-> [^self literalLoadInstructionBytes + 8].
+ 		[JumpLong]			-> [^self literalLoadInstructionBytes + 8].
- 		[JumpLong]				-> [^self literalLoadInstructionBytes + 8].
  		[JumpZero]				-> [^8].
  		[JumpNonZero]			-> [^8].
  		[JumpNegative]			-> [^8].
  		[JumpNonNegative]		-> [^8].
  		[JumpOverflow]			-> [^8].
  		[JumpNoOverflow]		-> [^8].
  		[JumpCarry]			-> [^8].
  		[JumpNoCarry]			-> [^8].
  		[JumpLess]				-> [^8].
  		[JumpGreaterOrEqual]	-> [^8].
  		[JumpGreater]			-> [^8].
  		[JumpLessOrEqual]		-> [^8].
  		[JumpBelow]			-> [^8].
  		[JumpAboveOrEqual]	-> [^8].
  		[JumpAbove]			-> [^8].
  		[JumpBelowOrEqual]	-> [^8].
  		[JumpLongZero]		-> [^self literalLoadInstructionBytes + 8].
  		[JumpLongNonZero]	-> [^self literalLoadInstructionBytes + 8].
  		[JumpFPEqual]			-> [^8].
  		[JumpFPNotEqual]		-> [^8].
  		[JumpFPLess]			-> [^8].
  		[JumpFPGreaterOrEqual]-> [^8].
  		[JumpFPGreater]		-> [^8].
  		[JumpFPLessOrEqual]	-> [^8].
  		[JumpFPOrdered]		-> [^8].
+ 		[JumpFPUnordered]	-> [^8].
- 		[JumpFPUnordered]		-> [^8].
  		[RetN]					-> [^8].
  		[Stop]					-> [^4].
  
  		"Arithmetic"
  		[AddCqR]				-> [^(self isShortOffset: (operands at: 0)) ifTrue: [4] ifFalse: [12]].
+ 		[AndCqR]				-> [^((operands at: 0) between: 0 and: 16rFFFF) ifTrue: [4] ifFalse: [12]].
+ 		[AndCqRR]				-> [^((operands at: 0) between: 0 and: 16rFFFF) ifTrue: [4] ifFalse: [12]].
- 		[AndCqR]				-> [^(self isShortOffset: (operands at: 0)) ifTrue: [4] ifFalse: [12]].
- 		[AndCqRR]				-> [^(self isShortOffset: (operands at: 0)) ifTrue: [4] ifFalse: [12]].
  		[OrCqR]				-> [^(self isShortOffset: (operands at: 0)) ifTrue: [4] ifFalse: [12]].
  		[OrCqRR]				-> [^(self isShortOffset: (operands at: 0)) ifTrue: [4] ifFalse: [12]].
  		[CmpCqR]				-> [^28].
  		[SubCqR]				-> [^(self isShortOffset: (operands at: 0)) ifTrue: [4] ifFalse: [12]].
  		[TstCqR]				-> [^(self isShortOffset: (operands at: 0)) ifTrue: [4] ifFalse: [12]].
  		[XorCqR]				-> [^12].
  		[AddCwR]				-> [^12].
  		[AndCwR]				-> [^12].
  		[CmpCwR]				-> [^28].
  		[OrCwR]				-> [^12].
  		[SubCwR]				-> [^12].
  		[XorCwR]				-> [^12].
  		[AddRR]				-> [^4].
  		[AndRR]				-> [^4].
  		[CmpRR]				-> [^20].
  		[OrRR]					-> [^4].
  		[XorRR]					-> [^4].
  		[SubRR]				-> [^4].
  		[NegateR]				-> [^4].
  		[AddRRR]				-> [^4].
  		[SubRRR]				-> [^4].
  		[LoadEffectiveAddressMwrR] -> [^(self isShortOffset: (operands at: 0)) ifTrue: [4] ifFalse: [12]].
  		[LogicalShiftLeftCqR]		-> [^4].
  		[LogicalShiftRightCqR]		-> [^4].
  		[ArithmeticShiftRightCqR]	-> [^4].
  		[LogicalShiftLeftRR]			-> [^4].
  		[LogicalShiftRightRR]		-> [^4].
  		[ArithmeticShiftRightRR]		-> [^4].
  		[AddRdRd]					-> [^4].
  		[CmpRdRd]					-> [^4].
  		[SubRdRd]					-> [^4].
  		[MulRdRd]					-> [^4].
  		[DivRdRd]					-> [^4].
  		[SqrtRd]					-> [^4].
  		[AddCheckOverflowCqR]	-> [^28].
  		[AddCheckOverflowRR]		-> [^20].
  		[SubCheckOverflowCqR]	-> [^28].
  		[SubCheckOverflowRR]		-> [^20].
  		[MulCheckOverflowRR]		-> [^20].
  		[ClzRR]					-> [^4].
  		"Data Movement"						
  		[MoveCqR]				-> [^(self isShortOffset: (operands at: 0)) ifTrue: [4] ifFalse: [8]].
  		[MoveCwR]				-> [^8].
  		[MoveRR]				-> [^4].
  		[MoveRdRd]				-> [^4].
  		[MoveAwR]				-> [^(self isAddressRelativeToVarBase: (operands at: 0))
  													ifTrue: [4]
  													ifFalse: [self literalLoadInstructionBytes + 4]].
  		[MoveRAw]				-> [^(self isAddressRelativeToVarBase: (operands at: 1))
  													ifTrue: [4]
  													ifFalse: [self literalLoadInstructionBytes + 4]].
  		[MoveAbR]				-> [^(self isAddressRelativeToVarBase: (operands at: 0))
  													ifTrue: [4]
  													ifFalse: [self literalLoadInstructionBytes + 4]].
  		[MoveRAb]				-> [^(self isAddressRelativeToVarBase: (operands at: 1))
  													ifTrue: [4]
  													ifFalse: [self literalLoadInstructionBytes + 4]].
  		[MoveRMwr]			-> [^4].
  		[MoveRdM64r]			-> [^self literalLoadInstructionBytes + 4]. 
  		[MoveMbrR]				-> [^4].
  		[MoveRMbr]				-> [^4].
  		[MoveM16rR]			-> [^4].
  		[MoveRM16r]			-> [^4].
  		[MoveM64rRd]			-> [^self literalLoadInstructionBytes + 4].
  		[MoveMwrR]			-> [^(self isShortOffset: (operands at: 0)) ifTrue: [4] ifFalse: [16]].
  		[MoveXbrRR]			-> [^8].
  		[MoveRXbrR]			-> [^8].
  		[MoveXwrRR]			-> [^12].
  		[MoveRXwrR]			-> [^12].
  		[PopR]					-> [^8].
  		[PushR]					-> [^8].
  		[PushCw]				-> [^16].
  		[PushCq]				-> [^16].
  		[PrefetchAw] 			-> [^12].
  		"Conversion"
  		[ConvertRRd]			-> [^8].
  		}.
  	^0 "to keep C compiler quiet"
  !

Item was removed:
- ----- Method: CogMIPSELCompiler>>concretizeAndCqRR (in category 'generate machine code - concretize') -----
- concretizeAndCqRR
- 	| value srcReg dstReg |
- 	value := operands at: 0.
- 	srcReg := operands at: 1.
- 	dstReg := operands at: 2.
- 	self machineCodeAt: 0 put: (self luiR: AT C: (self high16BitsOf: value)).
- 	self machineCodeAt: 4 put: (self oriR: AT R: AT C: (self low16BitsOf: value)).
- 	self machineCodeAt: 8 put: (self andR: dstReg R: srcReg R: AT).
- 	^12!
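
For context on the computeMaximumSize fix above: MIPS andi zero-extends its 16-bit
immediate, so the single-instruction (4-byte) form of AndCqR/AndCqRR is only correct for
constants in 0..16rFFFF; a signed-range test such as isShortOffset: (appropriate for addiu
and load/store displacements) would wrongly admit small negative values. Larger constants
are built in AT and combined with a register-register and, as in the removed concretizer
above. A minimal sketch of the two-operand AndCqR case, assuming an andiR:R:C: encoder
analogous to the oriR:R:C: one (hypothetical; the real concretizer lives in
CogMIPSELCompiler):

	concretizeAndCqRSketch
		| value reg |
		value := operands at: 0.
		reg := operands at: 1.
		(value between: 0 and: 16rFFFF) ifTrue:
			["the constant fits andi's zero-extended immediate: one 4-byte instruction"
			 self machineCodeAt: 0 put: (self andiR: reg R: reg C: value).
			 ^4].
		"otherwise build the constant in AT with lui/ori, then and it in: 12 bytes"
		self machineCodeAt: 0 put: (self luiR: AT C: (self high16BitsOf: value)).
		self machineCodeAt: 4 put: (self oriR: AT R: AT C: (self low16BitsOf: value)).
		self machineCodeAt: 8 put: (self andR: reg R: reg R: AT).
		^12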

Item was added:
+ ----- Method: CogMIPSELCompiler>>generalPurposeRegisterMap (in category 'disassembly') -----
+ generalPurposeRegisterMap
+ 	<doNotGenerate>
+ 	"Answer a Dictionary from register getter to register index."
+ 	^Dictionary newFromPairs:
+ 		{	#at. AT.
+ 			#v0. V0.
+ 			#v1. V1.
+ 			#a0. A0.
+ 			#a1. A1.
+ 			#a2. A2.
+ 			#a3. A3.
+ 			#t0. T0.
+ 			#t1. T1.
+ 			#t2. T2.
+ 			#t3. T3.
+ 			#t4. T4.
+ 			#t5. T5.
+ 			#t6. T6.
+ 			#t7. T7.
+ 			#s0. S0.
+ 			#s1. S1.
+ 			#s2. S2.
+ 			#s3. S3.
+ 			#s4. S4.
+ 			#s5. S5.
+ 			#s6. S6.
+ 			#s7. S7.
+ 			#t8. T8.
+ 			#t9. T9.
+ 			#k0. K0.
+ 			#k1. K1.
+ 			#gp. GP	}!
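
The added generalPurposeRegisterMap is a disassembly helper: a Dictionary from register
getter selector to hardware register index, built with Dictionary class>>newFromPairs:
(elements are taken alternately as key and value). A minimal usage sketch; instantiating
the back end directly is for illustration only, since in practice the Cogit/simulator holds
the compiler instance:

	| map |
	map := CogMIPSELCompiler new generalPurposeRegisterMap.
	map at: #t0	"answers the machine register index the compiler binds to T0"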

Item was changed:
  ----- Method: Cogit>>cogMNUPICSelector:receiver:methodOperand:numArgs: (in category 'in-line cacheing') -----
  cogMNUPICSelector: selector receiver: rcvr methodOperand: methodOperand numArgs: numArgs
  	<api>
  	"Attempt to create a one-case PIC for an MNU.
  	 The tag for the case is at the send site and so doesn't need to be generated."
  	<returnTypeC: #'CogMethod *'>
  	| startAddress writablePIC actualPIC |
  	((objectMemory isYoung: selector)
  	 or: [(objectRepresentation inlineCacheTagForInstance: rcvr) = self picAbortDiscriminatorValue]) ifTrue:
  		[^0].
  	coInterpreter compilationBreakpoint: selector classTag: (objectMemory fetchClassTagOf: rcvr) isMNUCase: true.
  	self assert: endCPICCase0 notNil.
  
  	"get memory in the code zone for the CPIC; if that fails we return an error code for the sender to use to work out how to blow up"
  	startAddress := methodZone allocate: closedPICSize.
  	startAddress = 0 ifTrue:
  		[coInterpreter callForCogCompiledCodeCompaction.
  		 ^0].
+ 	self maybeBreakGeneratingFrom: startAddress to: startAddress + closedPICSize.
- 	self maybeBreakGeneratingFrom: startAddress to: startAddress + closedPICSize - 1.
  
  	writablePIC := self writableMethodFor: startAddress.
  	"memcpy the prototype across to our allocated space; because anything else would be silly"
  	self codeMemcpy: writablePIC _: cPICPrototype _: closedPICSize.
  
  	self
  		fillInCPICHeader: writablePIC
  		numArgs: numArgs
  		numCases: 1
  		hasMNUCase: true
  		selector: selector.
  
  	self configureMNUCPIC: (actualPIC := self cCoerceSimple: startAddress to: #'CogMethod *')
  		methodOperand: methodOperand
  		numArgs: numArgs
  		delta: startAddress - cPICPrototype asUnsignedInteger.
  
  	"This also implicitly flushes the read/write mapped dual zone to the read/execute zone."
  	backEnd flushICacheFrom: startAddress to: startAddress + closedPICSize.
  
  	self assert: (backEnd callTargetFromReturnAddress: startAddress + missOffset) = (self picAbortTrampolineFor: numArgs).
  	self assertValidDualZoneFrom: startAddress to: startAddress + closedPICSize.
  
  	^actualPIC!

Item was changed:
  ----- Method: Cogit>>cogPICSelector:numArgs:Case0Method:Case1Method:tag:isMNUCase: (in category 'in-line cacheing') -----
  cogPICSelector: selector numArgs: numArgs Case0Method: case0CogMethod Case1Method: case1MethodOrNil tag: case1Tag isMNUCase: isMNUCase
  	"Attempt to create a two-case PIC for case0CogMethod and  case1Method,case1Tag.
  	 The tag for case0CogMethod is at the send site and so doesn't need to be generated.
  	 case1Method may be any of
  		- a Cog method; link to its unchecked entry-point
  		- a CompiledMethod; link to ceInterpretMethodFromPIC:
  		- a CompiledMethod; link to ceMNUFromPICMNUMethod:receiver:"
  	<var: #case0CogMethod type: #'CogMethod *'>
  	<returnTypeC: #'CogMethod *'>
  	| startAddress writablePIC actualPIC |
  	(objectMemory isYoung: selector) ifTrue:
  		[^self cCoerceSimple: YoungSelectorInPIC to: #'CogMethod *'].
  	coInterpreter compilationBreakpoint: selector classTag: case1Tag isMNUCase: isMNUCase.
  	
  	"get memory in the code zone for the CPIC; if that fails we return an error code for the sender to use to work out how to blow up"
  	startAddress := methodZone allocate: closedPICSize.
  	startAddress = 0 ifTrue:
  		[^self cCoerceSimple: InsufficientCodeSpace to: #'CogMethod *'].
+ 	self maybeBreakGeneratingFrom: startAddress to: startAddress + closedPICSize.
- 	self maybeBreakGeneratingFrom: startAddress to: startAddress + closedPICSize - 1.
  
  	writablePIC := self writableMethodFor: startAddress.
  	"memcpy the prototype across to our allocated space; because anything else would be silly"
  	self codeMemcpy: writablePIC _: cPICPrototype _: closedPICSize.
  
  	self
  		fillInCPICHeader: writablePIC
  		numArgs: numArgs
  		numCases: 2
  		hasMNUCase: isMNUCase
  		selector: selector.
  
  	self configureCPIC: (actualPIC := self cCoerceSimple: startAddress to: #'CogMethod *')
  		Case0: case0CogMethod
  		Case1Method: case1MethodOrNil
  		tag: case1Tag
  		isMNUCase: isMNUCase
  		numArgs: numArgs
  		delta: startAddress - cPICPrototype asUnsignedInteger.
  
  	"This also implicitly flushes the read/write mapped dual zone to the read/execute zone."
  	backEnd flushICacheFrom: startAddress to: startAddress + closedPICSize.
  
  	self assert: (backEnd callTargetFromReturnAddress: startAddress + missOffset) = (self picAbortTrampolineFor: numArgs).
  	self assertValidDualZoneFrom: startAddress to: startAddress + closedPICSize.
  
  	^actualPIC!

Item was changed:
  ----- Method: Cogit>>generateInstructionsAt: (in category 'generate machine code') -----
  generateInstructionsAt: eventualAbsoluteAddress
  	"Size pc-dependent instructions and assign eventual addresses to all instructions.
  	 Answer the size of the code.
  	 Compute forward branches based on virtual address (abstract code starts at 0),
  	 assuming that any branches branched over are long.
  	 Compute backward branches based on actual address.
  	 Reuse the fixups array to record the pc-dependent instructions that need to have
  	 their code generation postponed until after the others."
  	| absoluteAddress pcDependentIndex abstractInstruction fixup |
  	<var: #abstractInstruction type: #'AbstractInstruction *'>
  	<var: #fixup type: #'BytecodeFixup *'>
  	absoluteAddress := eventualAbsoluteAddress.
  	pcDependentIndex := 0.
  	0 to: opcodeIndex - 1 do:
  		[:i|
  		abstractInstruction := self abstractInstructionAt: i.
+ 		"N.B. if you want to break in resizing, break here, note the instruction index, back up to the
+ 		 sender, restart, and step into computeMaximumSizes, breaking at this instruction's index."
+ 		self maybeBreakGeneratingFrom: absoluteAddress to: absoluteAddress + abstractInstruction maxSize.
- 		self maybeBreakGeneratingFrom: absoluteAddress to: absoluteAddress + abstractInstruction maxSize - 1.
  		abstractInstruction isPCDependent
  			ifTrue:
  				[abstractInstruction sizePCDependentInstructionAt: absoluteAddress.
  				 fixup := self fixupAtIndex: pcDependentIndex.
  				 pcDependentIndex := pcDependentIndex + 1.
  				 fixup instructionIndex: i.
  				 absoluteAddress := absoluteAddress + abstractInstruction machineCodeSize]
  			ifFalse:
  				[absoluteAddress := abstractInstruction concretizeAt: absoluteAddress]].
  	0 to: pcDependentIndex - 1 do:
  		[:j|
  		fixup := self fixupAtIndex: j.
  		abstractInstruction := self abstractInstructionAt: fixup instructionIndex.
+ 		"N.B. if you want to break in resizing, break here, note the instruction index, back up to the
+ 		 sender, restart, and step into computeMaximumSizes, breaking at this instruction's index."
  		self maybeBreakGeneratingFrom: abstractInstruction address to: abstractInstruction address + abstractInstruction maxSize - 1.
  		abstractInstruction concretizeAt: abstractInstruction address].
  	^absoluteAddress - eventualAbsoluteAddress!
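
Two passes are needed because a forward branch cannot know its final reach until everything
between it and its target has been sized; the first pass sizes the pc-dependent instructions
(pessimistically for forward targets, exactly for backward ones, per the comment above) and
concretizes everything else, while the second pass emits the deferred instructions at their
now-fixed addresses. Roughly, for a hypothetical sequence:

	"pass 1:  [0] MoveCqR ...      concretized at its absolute address
	          [1] JumpZero -> [2]  sized only (target not yet placed); index recorded in fixups
	          [2] AddRR ...        concretized at the following address
	 pass 2:  [1] JumpZero -> [2]  concretized; its target's address is now known"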

Item was changed:
  ----- Method: Cogit>>maybeBreakGeneratingFrom:to: (in category 'simulation only') -----
  maybeBreakGeneratingFrom: address to: end
  	"Variation on maybeBreakAt: that only works for integer breakPCs,
  	 so we can have break blocks that stop at any pc, except when generating."
  	<cmacro: '(address,end) 0'> "Simulation only; void in C"
  	(breakPC isInteger
+ 	 and: [breakPC >= address
+ 	 and: [breakPC < end
+ 	 and: [breakBlock shouldStopIfAtPC: address]]]) ifTrue:
- 	 and: [(breakPC between: address and: end)
- 	 and: [breakBlock shouldStopIfAtPC: address]]) ifTrue:
  		[coInterpreter changed: #byteCountText.
  		 self halt: 'machine code generation at ', address hex, ' in ', thisContext sender selector]!
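
The test is now half-open ([address, end)), which is why the senders above pass
startAddress + closedPICSize, and the instruction generators pass address + maxSize,
without subtracting one. A worked example, assuming a 16-byte region starting at 16r1000:

	"before:  breakPC between: 16r1000 and: 16r100F	(inclusive end; callers passed start + size - 1)
	 after:   breakPC >= 16r1000 and: [breakPC < 16r1010]	(half-open end; callers pass start + size)"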

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>generateInstructionsAt: (in category 'generate machine code') -----
  generateInstructionsAt: eventualAbsoluteAddress
  	"Size pc-dependent instructions and assign eventual addresses to all instructions.
  	 Answer the size of the code.
  	 Compute forward branches based on virtual address (abstract code starts at 0),
  	 assuming that any branches branched over are long.
  	 Compute backward branches based on actual address.
  	 Reuse the fixups array to record the pc-dependent instructions that need to have
  	 their code generation postponed until after the others.
  
+ 	 Override to add handling for null branches (branches to the immediately following
- 	 Override to andd handling for null branches (branches to the immediately following
  	 instruction) occasioned by StackToRegisterMapping's following of jumps."
  	| absoluteAddress pcDependentIndex abstractInstruction fixup |
  	<var: #abstractInstruction type: #'AbstractInstruction *'>
  	<var: #fixup type: #'BytecodeFixup *'>
  	absoluteAddress := eventualAbsoluteAddress.
  	pcDependentIndex := 0.
  	0 to: opcodeIndex - 1 do:
  		[:i|
  		abstractInstruction := self abstractInstructionAt: i.
+ 		"N.B. if you want to break in resizing, break here, note the instruction index, back up to the
+ 		 sender, restart, and step into computeMaximumSizes, breaking at this instruction's index."
+ 		self maybeBreakGeneratingFrom: absoluteAddress to: absoluteAddress + abstractInstruction maxSize.
- 		self maybeBreakGeneratingFrom: absoluteAddress to: absoluteAddress + abstractInstruction maxSize - 1.
  		abstractInstruction isPCDependent
  			ifTrue:
  				[abstractInstruction sizePCDependentInstructionAt: absoluteAddress.
  				 (abstractInstruction isJump
  				  and: [(i + 1 < opcodeIndex
  				  		and: [abstractInstruction getJmpTarget == (self abstractInstructionAt: i + 1)])
  					or: [i + 2 < opcodeIndex
  						and: [abstractInstruction getJmpTarget == (self abstractInstructionAt: i + 2)
  						and: [(self abstractInstructionAt: i + 1) opcode = Nop]]]])
  					ifTrue:
  						[abstractInstruction
  							opcode: Nop;
  							concretizeAt: absoluteAddress]
  					ifFalse:
  						[fixup := self fixupAtIndex: pcDependentIndex.
  						 pcDependentIndex := pcDependentIndex + 1.
  						 fixup instructionIndex: i].
  				 absoluteAddress := absoluteAddress + abstractInstruction machineCodeSize]
  			ifFalse:
  				[absoluteAddress := abstractInstruction concretizeAt: absoluteAddress.
+ 				"N.B. if you want to break in resizing, break here, note the instruction index, back up to the
+ 				 sender, restart, and step into computeMaximumSizes, breaking at this instruction's index."
  				 self assert: abstractInstruction machineCodeSize = abstractInstruction maxSize]].
  	0 to: pcDependentIndex - 1 do:
  		[:j|
  		fixup := self fixupAtIndex: j.
  		abstractInstruction := self abstractInstructionAt: fixup instructionIndex.
  		self maybeBreakGeneratingFrom: abstractInstruction address to: abstractInstruction address + abstractInstruction maxSize - 1.
  		abstractInstruction concretizeAt: abstractInstruction address].
  	^absoluteAddress - eventualAbsoluteAddress!
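
The null-branch handling above rewrites a jump as a Nop when its target is the immediately
following instruction (or the instruction after an intervening Nop), a situation produced by
StackToRegisterMapping's following of jumps. A small illustration in abstract opcodes:

	"before elision:            after elision:
	     JumpZero L1                Nop
	 L1: MoveCqR ...            L1: MoveCqR ..."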


