[Vm-dev] VM Maker: VMMaker.oscog-rsf.2095.mcz

commits at source.squeak.org
Tue Jan 17 08:08:41 UTC 2017


Ronie Salgado Faila uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-rsf.2095.mcz

==================== Summary ====================

Name: VMMaker.oscog-rsf.2095
Author: rsf
Time: 17 January 2017, 5:04:54.019809 am
UUID: 0c6b970d-5840-4b69-8c09-986a7941b6f4
Ancestors: VMMaker.oscog-eem.2094

I keep working on the 64-bit version of Lowcode. I have now started writing the actual ABI compiler, and most of the UFFI tests pass (except for an occasional crash in the qsort test) with both the Cogit and the StackInterpreter. I still have to implement and test the splitting of small structures into registers, but that should be easy to do.
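For context, the "splitting of small structures into registers" follows the System V AMD64 calling convention, where aggregates of at most 16 bytes are classified per eightbyte and passed in integer or SSE registers. A much-simplified Python sketch of that classification (ignoring alignment, padding, and the full merge rules; `classify_struct` is a hypothetical helper, not part of VMMaker):

```python
def classify_struct(field_types, field_sizes):
    """Simplified SysV AMD64 classification. field_types holds 'int' or
    'float' per field; returns one class per eightbyte, or ['MEMORY']."""
    if sum(field_sizes) > 16:
        return ['MEMORY']          # large structs are passed on the stack
    classes = []
    offset = 0
    for ftype, fsize in zip(field_types, field_sizes):
        idx = offset // 8          # which eightbyte this field falls into
        while len(classes) <= idx:
            classes.append('SSE')  # start optimistic; any int demotes it
        if ftype == 'int':
            classes[idx] = 'INTEGER'
        offset += fsize
    return classes
```

Each resulting `INTEGER` eightbyte would go in the next free integer argument register and each `SSE` eightbyte in the next XMM register, which is the "splitting" the summary refers to.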


=============== Diff against VMMaker.oscog-eem.2094 ===============

Item was changed:
  ----- Method: CogObjectRepresentationFor64BitSpur>>genLcInt32ToOop: (in category 'inline primitive support') -----
  genLcInt32ToOop: value
  	<option: #LowcodeVM>
- 	self genConvertIntegerToSmallIntegerInReg: value.
  	cogit SignExtend32R: value R: value.
+ 	self genConvertIntegerToSmallIntegerInReg: value.
  	cogit ssPushRegister: value.
  	^ 0!
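The reordering above matters because of how 64-bit Spur tags a SmallInteger (shift left by 3, set the tag bit to 1): the 32-bit sign extension must happen while the register still holds the raw integer, before tagging. A rough Python model of the two orderings (helper names are hypothetical, not VMMaker code):

```python
MASK64 = (1 << 64) - 1

def sign_extend_32(reg):
    """Model of SignExtend32R:R: applied to a 64-bit register value."""
    reg &= 0xFFFFFFFF
    return reg - (1 << 32) if reg & 0x80000000 else reg

def tag_small_integer(value):
    """Model of 64-bit Spur SmallInteger tagging: shift by 3, tag = 1."""
    return ((value << 3) | 1) & MASK64

raw = 0x80000000                                  # 32-bit pattern for -2^31
good = tag_small_integer(sign_extend_32(raw))     # new order: extend, then tag
bad = sign_extend_32(tag_small_integer(raw))      # old order: tag, then extend
```

In the old order the tag shift pushes the sign bit out of the low 32 bits, so the subsequent sign extension produces a completely wrong oop (`bad` collapses to 1 here), while the new order yields the correct tagged value for -2^31.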

Item was changed:
  ----- Method: CogObjectRepresentationFor64BitSpur>>genLcOopToInt32: (in category 'inline primitive support') -----
  genLcOopToInt32: value
  	<option: #LowcodeVM>
  	self genConvertSmallIntegerToIntegerInReg: value.
+ 	cogit ssPushNativeRegister: value.
- 	cogit ssPushRegister: value.
  	^ 0!

Item was changed:
  ----- Method: CogObjectRepresentationFor64BitSpur>>genLcOopToUInt32: (in category 'inline primitive support') -----
  genLcOopToUInt32: value
  	<option: #LowcodeVM>
  	self genConvertSmallIntegerToIntegerInReg: value.
+ 	cogit ssPushNativeRegister: value.
- 	cogit ssPushRegister: value.
  	^ 0!

Item was changed:
  ----- Method: CogObjectRepresentationFor64BitSpur>>genLcUInt32ToOop: (in category 'inline primitive support') -----
  genLcUInt32ToOop: value
  	<option: #LowcodeVM>
- 	self genConvertIntegerToSmallIntegerInReg: value.
  	cogit ZeroExtend32R: value R: value.
+ 	self genConvertIntegerToSmallIntegerInReg: value.
  	cogit ssPushRegister: value.
  	^ 0!

Item was changed:
  ----- Method: CogSimStackNativeEntry>>registerSecondOrNil (in category 'accessing') -----
  registerSecondOrNil
+ 	^ type = SSRegisterPair ifTrue: [registerSecond]!
- 	^ type SSRegisterPair ifTrue: [registerSecond]!

Item was changed:
  ----- Method: CogX64Compiler>>computeMaximumSize (in category 'generate machine code') -----
(excessive size, no diff calculated)

Item was changed:
  ----- Method: CogX64Compiler>>concretizeCallR (in category 'generate machine code') -----
  concretizeCallR
  	"Will get inlined into concretizeAt: switch."
  	<inline: true>
+ 	| reg skip |
- 	| reg |
  	reg := operands at: 0.
+ 	(reg <= 7)
+ 		ifTrue: [skip := 0]
+ 		ifFalse: [skip := 1. machineCode at: 0 put: (self rexw: false r: 0 x: 0 b: reg)].
  	machineCode
+ 		at: skip + 0 put: 16rFF;
+ 		at: skip + 1 put: (self mod: ModReg RM: reg RO: 2).
+ 	^machineCodeSize := skip + 2!
- 		at: 1 put: (self rexR: 0 x: 0 b: reg);
- 		at: 1 put: 16rFF;
- 		at: 2 put: (self mod: ModReg RM: reg RO: 2).
- 	^machineCodeSize := 3!
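For reference, `CALL r/m64` is opcode FF /2, and a REX prefix is only needed to reach R8-R15 (its B bit carries the high bit of the register number; no REX.W is needed, matching the `rexw: false` above). A sketch of that encoding, assuming register numbers 0-15:

```python
def encode_call_r(reg):
    """Encode x86-64 CALL through a register: FF /2, REX.B for r8-r15."""
    out = []
    if reg > 7:
        out.append(0x40 | (reg >> 3))             # REX prefix with B set
    out.append(0xFF)                              # CALL r/m opcode
    out.append(0xC0 | (2 << 3) | (reg & 7))       # ModRM: mod=11, /2, rm
    return bytes(out)
```

So `call rax` is two bytes (FF D0) while `call r8` needs three (41 FF D0), which is why the method's size depends on `skip`.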

Item was changed:
  ----- Method: CogX64Compiler>>concretizeMoveM32rR (in category 'generate machine code') -----
  concretizeMoveM32rR
  	"Will get inlined into concretizeAt: switch."
  	<inline: true>
  	| offset srcReg destReg skip |
  	offset := operands at: 0.
  	srcReg := operands at: 1.
  	destReg := operands at: 2.
  	(srcReg <= 7 and: [destReg <= 7])
  		ifTrue: [skip := 0]
+ 		ifFalse: [skip := 1. machineCode at: 0 put: (self rexw: false r: destReg x: 0 b: srcReg)].
- 		ifFalse: [machineCode at: (skip := 1) put: (self rexw: false r: destReg x: 0 b: srcReg)].
  	machineCode
  		at: skip + 0 put: 16r8b.
  	offset = 0 ifTrue:
  		[(srcReg bitAnd: 6) ~= RSP ifTrue:
  			[machineCode at: skip + 1 put: (self mod: ModRegInd RM: srcReg RO: destReg).
  			 ^machineCodeSize := skip + 2].
  		 (srcReg bitAnd: 7) = RSP ifTrue: "RBP & R13 fall through"
  			[machineCode
  				at: skip + 1 put: (self mod: ModRegInd RM: srcReg RO: destReg);
  				at: skip + 2 put: (self s: SIB1 i: 4 b: srcReg).
  			 ^machineCodeSize := skip + 3]].
  	(self isQuick: offset) ifTrue:
  		[(srcReg bitAnd: 7) ~= RSP ifTrue:
  			[machineCode
  				at: skip + 1 put: (self mod: ModRegRegDisp8 RM: srcReg RO: destReg);
  				at: skip + 2 put: (offset bitAnd: 16rFF).
  			 ^machineCodeSize := skip + 3].
  		 machineCode
  			at: skip + 1 put: (self mod: ModRegRegDisp8 RM: srcReg RO: destReg);
  			at: skip + 2 put: (self s: SIB1 i: 4 b: srcReg);
  			at: skip + 3 put: (offset bitAnd: 16rFF).
  		 ^machineCodeSize := skip + 4].
  	machineCode at: skip + 1 put: (self mod: ModRegRegDisp32 RM: srcReg RO: destReg).
  	(srcReg bitAnd: 7) = RSP ifTrue:
  		[machineCode at: skip + 2 put: (self s: SIB1 i: 4 b: srcReg).
  		 skip := skip + 1].
  	machineCode
  		at: skip + 2 put: (offset bitAnd: 16rFF);
  		at: skip + 3 put: (offset >> 8 bitAnd: 16rFF);
  		at: skip + 4 put: (offset >> 16 bitAnd: 16rFF);
  		at: skip + 5 put: (offset >> 24 bitAnd: 16rFF).
  	^machineCodeSize := skip + 6!
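The cascade of cases above follows standard x86-64 ModRM addressing: no displacement (mod=00), 8-bit displacement (mod=01), 32-bit displacement (mod=10), an extra SIB byte whenever the base register encodes as rm=100 (RSP/R12), and no mod=00 form for RBP/R13. A simplified Python sketch of the load side (`MOV r32, r/m32`, opcode 8B; `encode_mov_r32_m` is a hypothetical helper):

```python
def encode_mov_r32_m(dst, base, disp):
    """Encode MOV r32, [base+disp] (opcode 8B), covering the disp0/disp8/
    disp32 and RSP-family SIB cases. dst/base are register numbers 0-15."""
    out = []
    if dst > 7 or base > 7:
        out.append(0x40 | ((dst >> 3) << 2) | (base >> 3))  # REX.R / REX.B
    out.append(0x8B)
    reg, rm = dst & 7, base & 7
    needs_sib = rm == 4                    # RSP/R12 force a SIB byte
    if disp == 0 and rm != 5:              # RBP/R13 cannot use mod=00
        out.append((0 << 6) | (reg << 3) | rm)
        if needs_sib:
            out.append(0x24)               # SIB: scale=1, no index, base
    elif -128 <= disp <= 127:
        out.append((1 << 6) | (reg << 3) | rm)
        if needs_sib:
            out.append(0x24)
        out.append(disp & 0xFF)
    else:
        out.append((2 << 6) | (reg << 3) | rm)
        if needs_sib:
            out.append(0x24)
        out += list((disp & 0xFFFFFFFF).to_bytes(4, 'little'))
    return bytes(out)
```

The store form handled by `concretizeMoveRM32r` is identical except the opcode is 89 and the roles of the ModRM reg/rm fields swap between source and destination.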

Item was changed:
  ----- Method: CogX64Compiler>>concretizeMoveRM32r (in category 'generate machine code') -----
  concretizeMoveRM32r
  	"Will get inlined into concretizeAt: switch."
  	<inline: true>
  	| offset srcReg destReg skip |
+ 	srcReg := operands at: 0.
+ 	offset := operands at: 1.
- 	offset := operands at: 0.
- 	srcReg := operands at: 1.
  	destReg := operands at: 2.
  	(srcReg <= 7 and: [destReg <= 7])
  		ifTrue: [skip := 0]
+ 		ifFalse: [skip := 1. machineCode at: 0 put: (self rexw: false r: srcReg x: 0 b: destReg)].
- 		ifFalse: [machineCode at: (skip := 1) put: (self rexw: false r: srcReg x: 0 b: destReg)].
  	machineCode
  		at: skip + 0 put: 16r89.
  	offset = 0 ifTrue:
  		[(destReg bitAnd: 6) ~= RSP ifTrue:
  			[machineCode at: skip + 1 put: (self mod: ModRegInd RM: destReg RO: srcReg).
  			 ^machineCodeSize := skip + 2].
  		 (destReg bitAnd: 7) = RSP ifTrue: "RBP & R13 fall through"
  			[machineCode
  				at: skip + 1 put: (self mod: ModRegInd RM: destReg RO: srcReg);
  				at: skip + 2 put: (self s: SIB1 i: 4 b: destReg).
  			 ^machineCodeSize := skip + 3]].
  	(self isQuick: offset) ifTrue:
  		[(destReg bitAnd: 7) ~= RSP ifTrue:
  			[machineCode
  				at: skip + 1 put: (self mod: ModRegRegDisp8 RM: destReg RO: srcReg);
  				at: skip + 2 put: (offset bitAnd: 16rFF).
  			 ^machineCodeSize := skip + 3].
  		 machineCode
  			at: skip + 1 put: (self mod: ModRegRegDisp8 RM: destReg RO: srcReg);
  			at: skip + 2 put: (self s: SIB1 i: 4 b: destReg);
  			at: skip + 3 put: (offset bitAnd: 16rFF).
  		 ^machineCodeSize := skip + 4].
  	machineCode at: skip + 1 put: (self mod: ModRegRegDisp32  RM: destReg RO: srcReg).
  	(destReg bitAnd: 7) = RSP ifTrue:
  		[machineCode at: skip + 2 put: (self s: SIB1 i: 4 b: destReg).
  		 skip := skip + 1].
  	machineCode
  		at: skip + 2 put: (offset bitAnd: 16rFF);
  		at: skip + 3 put: (offset >> 8 bitAnd: 16rFF);
  		at: skip + 4 put: (offset >> 16 bitAnd: 16rFF);
  		at: skip + 5 put: (offset >> 24 bitAnd: 16rFF).
  	^machineCodeSize := skip + 6!

Item was changed:
  ----- Method: CogX64Compiler>>concretizeZeroExtend32RR (in category 'generate machine code') -----
  concretizeZeroExtend32RR
  	"Will get inlined into concretizeAt: switch."
  	"movl: a plain 32-bit move, which implicitly zero-extends to 64 bits on x86-64"
  	<inline: true>
  	| srcReg destReg skip |
  	srcReg := operands at: 0.
  	destReg := operands at: 1.
  	(srcReg <= 7 and: [destReg <= 7])
  		ifTrue: [skip := 0]
+ 		ifFalse: [skip := 1. machineCode at: 0 put: (self rexw: false r: destReg x: 0 b: srcReg)].
- 		ifFalse: [machineCode at: (skip := 1) put: (self rexw: false r: destReg x: 0 b: srcReg)].
  		
  	machineCode
  		at: skip + 0 put: 16r8b;
  		at: skip + 1 put: (self mod: ModReg RM: srcReg RO: destReg).
  	^ machineCodeSize := skip + 2!

Item was changed:
  ----- Method: CogX64Compiler>>dispatchConcretize (in category 'generate machine code') -----
  dispatchConcretize
  	"Attempt to generate concrete machine code for the instruction at address.
  	 This is the inner dispatch of concretizeAt: actualAddress which exists only
  	 to get around the branch size limits in the SqueakV3 (blue book derived)
  	 bytecode set."
  	<returnTypeC: #void>
  	opcode caseOf: {
  		"Noops & Pseudo Ops"
  		[Label]				-> [^self concretizeLabel].
  		[AlignmentNops]	-> [^self concretizeAlignmentNops].
  		[Fill32]				-> [^self concretizeFill32].
  		[Nop]				-> [^self concretizeNop].
  		"Specific Control/Data Movement"
  		[CDQ]					-> [^self concretizeCDQ].
  		[IDIVR]					-> [^self concretizeIDIVR].
  		[IMULRR]				-> [^self concretizeMulRR].
  		"[CPUID]					-> [^self concretizeCPUID]."
  		"[CMPXCHGAwR]			-> [^self concretizeCMPXCHGAwR]."
  		"[CMPXCHGMwrR]		-> [^self concretizeCMPXCHGMwrR]."
  		"[LFENCE]				-> [^self concretizeFENCE: 5]."
  		"[MFENCE]				-> [^self concretizeFENCE: 6].
  		[SFENCE]				-> [^self concretizeFENCE: 7]."
  		"[LOCK]					-> [^self concretizeLOCK]."
  		"[XCHGAwR]				-> [^self concretizeXCHGAwR]."
  		"[XCHGMwrR]			-> [^self concretizeXCHGMwrR]."
  		[XCHGRR]				-> [^self concretizeXCHGRR].
  		[REP]					-> [^self concretizeREP].
  		[CLD]					-> [^self concretizeCLD].
  		[MOVSB]				-> [^self concretizeMOVSB].
  		[MOVSQ]				-> [^self concretizeMOVSQ].
  		"Control"
  		[Call]					-> [^self concretizeCall].
  		[CallR]					-> [^self concretizeCallR].
  		[CallFull]				-> [^self concretizeCallFull].
  		[JumpR]					-> [^self concretizeJumpR].
  		[JumpFull]				-> [^self concretizeJumpFull].
  		[JumpLong]				-> [^self concretizeJumpLong].
  		[JumpLongZero]		-> [^self concretizeConditionalJump: 16r4].
  		[JumpLongNonZero]	-> [^self concretizeConditionalJump: 16r5].
  		[Jump]					-> [^self concretizeJump].
  		"Table B-1 Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 1: Basic Architecture"
  		[JumpZero]				-> [^self concretizeConditionalJump: 16r4].
  		[JumpNonZero]			-> [^self concretizeConditionalJump: 16r5].
  		[JumpNegative]			-> [^self concretizeConditionalJump: 16r8].
  		[JumpNonNegative]		-> [^self concretizeConditionalJump: 16r9].
  		[JumpOverflow]			-> [^self concretizeConditionalJump: 16r0].
  		[JumpNoOverflow]		-> [^self concretizeConditionalJump: 16r1].
  		[JumpCarry]			-> [^self concretizeConditionalJump: 16r2].
  		[JumpNoCarry]			-> [^self concretizeConditionalJump: 16r3].
  		[JumpLess]				-> [^self concretizeConditionalJump: 16rC].
  		[JumpGreaterOrEqual]	-> [^self concretizeConditionalJump: 16rD].
  		[JumpGreater]			-> [^self concretizeConditionalJump: 16rF].
  		[JumpLessOrEqual]		-> [^self concretizeConditionalJump: 16rE].
  		[JumpBelow]			-> [^self concretizeConditionalJump: 16r2].
  		[JumpAboveOrEqual]	-> [^self concretizeConditionalJump: 16r3].
  		[JumpAbove]			-> [^self concretizeConditionalJump: 16r7].
  		[JumpBelowOrEqual]	-> [^self concretizeConditionalJump: 16r6].
  		[JumpFPEqual]				-> [^self concretizeConditionalJump: 16r4].
  		[JumpFPNotEqual]			-> [^self concretizeConditionalJump: 16r5].
  		[JumpFPLess]				-> [^self concretizeConditionalJump: 16r2].
  		[JumpFPGreaterOrEqual]	-> [^self concretizeConditionalJump: 16r3].
  		[JumpFPGreater]			-> [^self concretizeConditionalJump: 16r7].
  		[JumpFPLessOrEqual]		-> [^self concretizeConditionalJump: 16r6].
  		[JumpFPOrdered]			-> [^self concretizeConditionalJump: 16rB].
  		[JumpFPUnordered]			-> [^self concretizeConditionalJump: 16rA].
  		[RetN]						-> [^self concretizeRetN].
  		[Stop]						-> [^self concretizeStop].
  		"Arithmetic"
  		[AddCqR]					-> [^self concretizeArithCqRWithRO: 0 raxOpcode: 16r05].
  		[AddCwR]					-> [^self concretizeArithCwR: 16r03].
  		[AddRR]						-> [^self concretizeOpRR: 16r03].
  		[AddRsRs]					-> [^self concretizeSEEOpRsRs: 16r58].
  		[AddRdRd]					-> [^self concretizeSEE2OpRdRd: 16r58].
  		[AndCqR]					-> [^self concretizeArithCqRWithRO: 4 raxOpcode: 16r25].
  		[AndCwR]					-> [^self concretizeArithCwR: 16r23].
  		[AndRR]						-> [^self concretizeOpRR: 16r23].
  		[TstCqR]					-> [^self concretizeTstCqR].
  		[CmpCqR]					-> [^self concretizeArithCqRWithRO: 7 raxOpcode: 16r3D].
  		[CmpCwR]					-> [^self concretizeArithCwR: 16r39].
  		[CmpC32R]					-> [^self concretizeCmpC32R].
  		[CmpRR]					-> [^self concretizeReverseOpRR: 16r39].
  		[CmpRdRd]					-> [^self concretizeCmpRdRd].
  		[CmpRsRs]					-> [^self concretizeCmpRsRs].
  		[DivRdRd]					-> [^self concretizeSEE2OpRdRd: 16r5E].
  		[DivRsRs]					-> [^self concretizeSEEOpRsRs: 16r5E].
  		[MulRdRd]					-> [^self concretizeSEE2OpRdRd: 16r59].
  		[MulRsRs]					-> [^self concretizeSEEOpRsRs: 16r59].
  		[OrCqR]						-> [^self concretizeArithCqRWithRO: 1 raxOpcode: 16r0D].
  		[OrCwR]					-> [^self concretizeArithCwR: 16r0B].
  		[OrRR]						-> [^self concretizeOpRR: 16r0B].
  		[SubCqR]					-> [^self concretizeArithCqRWithRO: 5 raxOpcode: 16r2D].
  		[SubCwR]					-> [^self concretizeArithCwR: 16r2B].
  		[SubRR]						-> [^self concretizeOpRR: 16r2B].
  		[SubRdRd]					-> [^self concretizeSEE2OpRdRd: 16r5C].
  		[SubRsRs]					-> [^self concretizeSEEOpRsRs: 16r5C].
  		[SqrtRd]					-> [^self concretizeSqrtRd].
  		[SqrtRs]					-> [^self concretizeSqrtRs].
  		[XorCwR]					-> [^self concretizeArithCwR: 16r33].
  		[XorRR]						-> [^self concretizeOpRR: 16r33].
  		[XorRdRd]						-> [^self concretizeXorRdRd].
  		[XorRsRs]						-> [^self concretizeXorRsRs].
  		[NegateR]					-> [^self concretizeNegateR].
  		[LoadEffectiveAddressMwrR]	-> [^self concretizeLoadEffectiveAddressMwrR].
  		[RotateLeftCqR]				-> [^self concretizeShiftCqRegOpcode: 0].
  		[RotateRightCqR]				-> [^self concretizeShiftCqRegOpcode: 1].
  		[ArithmeticShiftRightCqR]		-> [^self concretizeShiftCqRegOpcode: 7].
  		[LogicalShiftRightCqR]			-> [^self concretizeShiftCqRegOpcode: 5].
  		[LogicalShiftLeftCqR]			-> [^self concretizeShiftCqRegOpcode: 4].
  		[ArithmeticShiftRightRR]			-> [^self concretizeShiftRegRegOpcode: 7].
  		[LogicalShiftLeftRR]				-> [^self concretizeShiftRegRegOpcode: 4].
  		"Data Movement"
  		[MoveCqR]			-> [^self concretizeMoveCqR].
  		[MoveCwR]			-> [^self concretizeMoveCwR].
  		[MoveC32R]		-> [^self concretizeMoveC32R].
  		[MoveRR]			-> [^self concretizeReverseOpRR: 16r89].
  		[MoveAwR]			-> [^self concretizeMoveAwR].
  		[MoveA32R]		-> [^self concretizeMoveA32R].
  		[MoveRAw]			-> [^self concretizeMoveRAw].
  		[MoveRA32]		-> [^self concretizeMoveRA32].
  		[MoveAbR]			-> [^self concretizeMoveAbR].
  		[MoveRAb]			-> [^self concretizeMoveRAb].
  		[MoveMbrR]			-> [^self concretizeMoveMbrR].
  		[MoveRMbr]			-> [^self concretizeMoveRMbr].
+ 		[MoveM8rR]		-> [^self concretizeMoveMbrR].
+ 		[MoveRM8r]		-> [^self concretizeMoveRMbr].
  		[MoveM16rR]		-> [^self concretizeMoveM16rR].
  		[MoveRM16r]		-> [^self concretizeMoveRM16r].
  		[MoveM32rR]		-> [^self concretizeMoveM32rR].
  		[MoveM32rRs]		-> [^self concretizeMoveM32rRs].
  		[MoveM64rRd]		-> [^self concretizeMoveM64rRd].
  		[MoveMwrR]		-> [^self concretizeMoveMwrR].
  		[MoveXbrRR]		-> [^self concretizeMoveXbrRR].
  		[MoveRXbrR]		-> [^self concretizeMoveRXbrR].
  		[MoveXwrRR]		-> [^self concretizeMoveXwrRR].
  		[MoveRXwrR]		-> [^self concretizeMoveRXwrR].
  		[MoveX32rRR]		-> [^self concretizeMoveX32rRR].
  		[MoveRX32rR]		-> [^self concretizeMoveRX32rR].
  		[MoveRMwr]		-> [^self concretizeMoveRMwr].
  		[MoveRM32r]		-> [^self concretizeMoveRM32r].
  		[MoveRsM32r]		-> [^self concretizeMoveRsM32r].
  		[MoveRdM64r]		-> [^self concretizeMoveRdM64r].
  		[MoveRdR]			-> [^self concretizeMoveRdR].
  		[MoveRRd]			-> [^self concretizeMoveRRd].
  		[MoveRdRd]		-> [^self concretizeMoveRdRd].
  		[MoveRsRs]		-> [^self concretizeMoveRsRs].
  		[PopR]				-> [^self concretizePopR].
  		[PushR]				-> [^self concretizePushR].
  		[PushCq]			-> [^self concretizePushCq].
  		[PushCw]			-> [^self concretizePushCw].
  		[PrefetchAw]		-> [^self concretizePrefetchAw].
  		"Conversion"
  		[ConvertRRd]		-> [^self concretizeConvertRRd].
  		[ConvertRdR]		-> [^self concretizeConvertRdR].
  		[ConvertRRs]		-> [^self concretizeConvertRRs].
  		[ConvertRsR]		-> [^self concretizeConvertRsR].
  		[ConvertRsRd]	-> [^self concretizeConvertRsRd].
  		[ConvertRdRs]	-> [^self concretizeConvertRdRs].
  			
  		[SignExtend8RR]		-> [^self concretizeSignExtend8RR].
  		[SignExtend16RR]	-> [^self concretizeSignExtend16RR].
  		[SignExtend32RR]	-> [^self concretizeSignExtend32RR].
  		
  		[ZeroExtend8RR]		-> [^self concretizeZeroExtend8RR].
  		[ZeroExtend16RR]	-> [^self concretizeZeroExtend16RR].
  		[ZeroExtend32RR]	-> [^self concretizeZeroExtend32RR].
  		}!
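The nibbles passed to `concretizeConditionalJump:` are the Intel condition codes from Table B-1; a near conditional jump is encoded as 0F (80+cc) followed by a 32-bit relative displacement. A minimal sketch of that near form (the short 70+cc rel8 form is omitted):

```python
def encode_jcc_near(cc, rel32):
    """Near conditional jump: 0F (80+cc) rel32. cc is the condition nibble
    (4 = zero/equal, 5 = not zero, 2 = below/carry, 8 = sign, ...)."""
    return bytes([0x0F, 0x80 + cc]) + (rel32 & 0xFFFFFFFF).to_bytes(4, 'little')
```

This is why, e.g., `JumpZero` and `JumpFPEqual` both dispatch with 16r4 (JZ is 0F 84) while `JumpBelow` and `JumpCarry` share 16r2.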

Item was changed:
  ----- Method: StackInterpreter>>lowcodePrimitiveLockRegisters (in category 'inline primitive generated code') -----
  lowcodePrimitiveLockRegisters
  	<option: #LowcodeVM>	"Lowcode instruction generator"
  
+ 	"Nop in the interpreter"
- 	self abort.
  
  
  !

Item was changed:
  ----- Method: StackInterpreter>>lowcodePrimitiveUnlockRegisters (in category 'inline primitive generated code') -----
  lowcodePrimitiveUnlockRegisters
  	<option: #LowcodeVM>	"Lowcode instruction generator"
  
+ 	"Nop in the interpreter"
- 	self abort.
  
  
  !

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>callSwitchToSmalltalkStack (in category 'inline ffi') -----
  callSwitchToSmalltalkStack
  	<option: #LowcodeVM>
  	"Restore the link register"
+ 	backEnd hasVarBaseRegister ifTrue:
+ 		[self MoveCq: self varBaseAddress R: VarBaseReg].
  	backEnd hasLinkRegister ifTrue: [
  		self MoveAw: coInterpreter instructionPointerAddress R: LinkReg
  	].
  	backEnd genLoadStackPointers.!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>eventualTargetOf: (in category 'peephole optimizations') -----
  eventualTargetOf: targetBytecodePC
  	"Attempt to follow a branch to a pc.  Handle branches to unconditional jumps
  	 and branches to push: aBoolean; conditional branch pairs.  If the branch cannot
  	 be followed answer targetBytecodePC."
  
  	| currentTarget nextPC nExts descriptor span cond |
  	<var: #descriptor type: #'BytecodeDescriptor *'>
  	nextPC := currentTarget := targetBytecodePC.
  	[[nExts := 0.
  	  descriptor := self generatorAt: bytecodeSetOffset
  								+ (objectMemory fetchByte: nextPC ofObject: methodObj).
  	  descriptor isReturn ifTrue: [^currentTarget]. "avoid stepping off the end of methods"
  	  descriptor isExtension]
  		whileTrue:
  			[nExts := nExts + 1.
  			 nextPC := nextPC + descriptor numBytes].
  	 descriptor isUnconditionalBranch
  		ifTrue:
  			[span := self spanFor: descriptor at: nextPC exts: nExts in: methodObj.
  			 span < 0 ifTrue: "Do *not* follow backward branches; these are interrupt points and should not be elided."
  				[^currentTarget].
  			 nextPC := nextPC + descriptor numBytes + span]
  		ifFalse:
+ 			[descriptor generator == #genPushConstantTrueBytecode
+ 				ifTrue: [cond := true]
+ 				ifFalse:
+ 					[descriptor generator == #genPushConstantFalseBytecode
+ 						ifTrue: [cond := false]
+ 						ifFalse: [^currentTarget]].
- 			[descriptor generator
- 				caseOf: {
- 				[#genPushConstantTrueBytecode] -> [cond := true].
- 				[#genPushConstantFalseBytecode] -> [cond := false] }
- 				otherwise: [^currentTarget].
  			 "Don't step into loops across a pushTrue; jump:if: boundary, so as not to confuse stack depth fixup."
  			 (fixups at: nextPC - initialPC) isBackwardBranchFixup ifTrue:
  				[^currentTarget].
  			 nextPC := self eventualTargetOf: nextPC + descriptor numBytes.
  			 nExts := 0.
  			 [descriptor := self generatorAt: bytecodeSetOffset
  								+ (objectMemory fetchByte: nextPC ofObject: methodObj).
  			  descriptor isReturn ifTrue: [^currentTarget]. "avoid stepping off the end of methods"
  			  descriptor isExtension]
  				whileTrue:
  					[nExts := nExts + 1.
  					 nextPC := nextPC + descriptor numBytes].
  			 descriptor isBranch ifFalse:
  				[^currentTarget].
  			 descriptor isUnconditionalBranch ifTrue:
  				[^currentTarget].
  			 nextPC := cond == descriptor isBranchTrue
  									ifTrue: [nextPC
  											+ descriptor numBytes
  											+ (self spanFor: descriptor at: nextPC exts: nExts in: methodObj)]
  									ifFalse: [nextPC + descriptor numBytes]].
  	 currentTarget := nextPC]
  		repeat!
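The core of `eventualTargetOf:` is following chains of forward unconditional jumps to their final destination while refusing to cross backward branches (which are interrupt points and must not be elided). A much-simplified Python model of just that jump-chasing loop, over a hypothetical pc-to-target map rather than real bytecodes:

```python
def eventual_target(pc, jumps):
    """Follow chains of forward unconditional jumps to their final target.
    `jumps` maps a pc to its jump-target pc; any pc not in the map is a
    real instruction and terminates the walk. Backward jumps are refused."""
    seen = set()
    while pc in jumps and jumps[pc] > pc and pc not in seen:
        seen.add(pc)       # cycle guard, analogous to the method refusing
        pc = jumps[pc]     # to follow backward (negative-span) branches
    return pc
```

The real method additionally peers through push-true/push-false followed by a conditional branch, resolving the branch direction statically via `cond`, which the simplified model omits.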

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genLowcodeMoveFloat32ToPhysical (in category 'inline primitive generators generated code') -----
  genLowcodeMoveFloat32ToPhysical
  	<option: #LowcodeVM>	"Lowcode instruction generator"
- 	| registerID value |
- 	registerID := extA.
  
+ 	self ssNativeTop nativeStackPopToReg: extA.
- 	(value := backEnd availableFloatRegisterOrNoneFor: self liveFloatRegisters) = NoReg ifTrue:
- 		[self ssAllocateRequiredFloatReg: (value := DPFPReg0)].
- 	self ssNativeTop nativePopToReg: value.
  	self ssNativePop: 1.
+ 	currentCallCleanUpSize := currentCallCleanUpSize + BytesPerWord.
- 
- 	self abort.
- 
  	extA := 0.
+ 
  	^ 0
  
  !

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genLowcodeMoveFloat64ToPhysical (in category 'inline primitive generators generated code') -----
  genLowcodeMoveFloat64ToPhysical
  	<option: #LowcodeVM>	"Lowcode instruction generator"
- 	| registerID value |
- 	registerID := extA.
  
+ 	self ssNativeTop nativeStackPopToReg: extA.
- 	(value := backEnd availableFloatRegisterOrNoneFor: self liveFloatRegisters) = NoReg ifTrue:
- 		[self ssAllocateRequiredFloatReg: (value := DPFPReg0)].
- 	self ssNativeTop nativePopToReg: value.
  	self ssNativePop: 1.
+ 	currentCallCleanUpSize := currentCallCleanUpSize + 8.
- 
- 	self abort.
- 
  	extA := 0.
+ 
  	^ 0
  
  !

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genLowcodeMoveInt32ToPhysical (in category 'inline primitive generators generated code') -----
  genLowcodeMoveInt32ToPhysical
  	<option: #LowcodeVM>	"Lowcode instruction generator"
- 	| registerID value |
- 	registerID := extA.
  
+ 	self ssNativeTop nativeStackPopToReg: extA.
- 	(value := backEnd availableRegisterOrNoneFor: self liveRegisters) = NoReg ifTrue:
- 		[self ssAllocateRequiredReg:
- 			(value := optStatus isReceiverResultRegLive
- 				ifTrue: [Arg0Reg]
- 				ifFalse: [ReceiverResultReg])].
- 	value = ReceiverResultReg ifTrue:
- 		[ optStatus isReceiverResultRegLive: false ].
- 	self ssNativeTop nativePopToReg: value.
  	self ssNativePop: 1.
+ 	currentCallCleanUpSize := currentCallCleanUpSize + BytesPerWord.
- 
- 	self abort.
- 
  	extA := 0.
+ 
  	^ 0
  
  !

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genLowcodeMoveInt64ToPhysical (in category 'inline primitive generators generated code') -----
  genLowcodeMoveInt64ToPhysical
  	<option: #LowcodeVM>	"Lowcode instruction generator"
- 	| registerID valueHigh value valueLow |
- 	registerID := extA.
  
+ 	self ssNativeTop nativeStackPopToReg: extA.
- 	(value := backEnd availableRegisterOrNoneFor: self liveRegisters) = NoReg ifTrue:
- 		[self ssAllocateRequiredReg:
- 			(value := optStatus isReceiverResultRegLive
- 				ifTrue: [Arg0Reg]
- 				ifFalse: [ReceiverResultReg])].
- 	value = ReceiverResultReg ifTrue:
- 		[ optStatus isReceiverResultRegLive: false ].
- 	self ssNativeTop nativePopToReg: value.
  	self ssNativePop: 1.
+ 	currentCallCleanUpSize := currentCallCleanUpSize + 8.
- 
- 	self abort.
- 
  	extA := 0.
+ 
  	^ 0
  
  !

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genLowcodeMovePointerToPhysical (in category 'inline primitive generators generated code') -----
  genLowcodeMovePointerToPhysical
  	<option: #LowcodeVM>	"Lowcode instruction generator"
- 	| registerID pointerValue |
- 	registerID := extA.
  
+ 	self ssNativeTop nativeStackPopToReg: extA.
- 	(pointerValue := backEnd availableRegisterOrNoneFor: self liveRegisters) = NoReg ifTrue:
- 		[self ssAllocateRequiredReg:
- 			(pointerValue := optStatus isReceiverResultRegLive
- 				ifTrue: [Arg0Reg]
- 				ifFalse: [ReceiverResultReg])].
- 	pointerValue = ReceiverResultReg ifTrue:
- 		[ optStatus isReceiverResultRegLive: false ].
- 	self ssNativeTop nativePopToReg: pointerValue.
  	self ssNativePop: 1.
+ 	currentCallCleanUpSize := currentCallCleanUpSize + BytesPerWord.
- 
- 	self abort.
- 
  	extA := 0.
+ 
  	^ 0
  
  !


