[Vm-dev] VM Maker: VMMaker.oscog-tpr.1655.mcz
commits at source.squeak.org
Tue Jan 19 21:37:38 UTC 2016
tim Rowledge uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-tpr.1655.mcz
==================== Summary ====================
Name: VMMaker.oscog-tpr.1655
Author: tpr
Time: 19 January 2016, 1:36:16.63 pm
UUID: 3085df01-1695-4f2d-8c08-92fd1b1e3df1
Ancestors: VMMaker.oscog-cb.1654
Merge in ARM 16-bit write handling. Appears to simulate OK.
=============== Diff against VMMaker.oscog-cb.1654 ===============
Item was changed:
----- Method: CogARMCompiler>>computeMaximumSize (in category 'generate machine code') -----
computeMaximumSize
"Because we don't use Thumb, each ARM instruction has 4 bytes. Many
abstract opcodes need more than one instruction. Instructions that refer
to constants and/or literals depend on literals being stored in-line or out-of-line.
N.B. The ^N forms are to get around the bytecode compiler's long branch
limits which are exceeded when each case jumps around the otherwise."
opcode
caseOf: {
"Noops & Pseudo Ops"
[Label] -> [^0].
[Literal] -> [^4].
[AlignmentNops] -> [^(operands at: 0) - 4].
[Fill32] -> [^4].
[Nop] -> [^4].
"Control"
[Call] -> [^4].
[CallFull] -> [^self literalLoadInstructionBytes + 4].
[JumpR] -> [^4].
[Jump] -> [^4].
[JumpFull] -> [^self literalLoadInstructionBytes + 4].
[JumpLong] -> [^4].
[JumpZero] -> [^4].
[JumpNonZero] -> [^4].
[JumpNegative] -> [^4].
[JumpNonNegative] -> [^4].
[JumpOverflow] -> [^4].
[JumpNoOverflow] -> [^4].
[JumpCarry] -> [^4].
[JumpNoCarry] -> [^4].
[JumpLess] -> [^4].
[JumpGreaterOrEqual] -> [^4].
[JumpGreater] -> [^4].
[JumpLessOrEqual] -> [^4].
[JumpBelow] -> [^4].
[JumpAboveOrEqual] -> [^4].
[JumpAbove] -> [^4].
[JumpBelowOrEqual] -> [^4].
[JumpLongZero] -> [^4].
[JumpLongNonZero] -> [^4].
[JumpFPEqual] -> [^8].
[JumpFPNotEqual] -> [^8].
[JumpFPLess] -> [^8].
[JumpFPGreaterOrEqual]-> [^8].
[JumpFPGreater] -> [^8].
[JumpFPLessOrEqual] -> [^8].
[JumpFPOrdered] -> [^8].
[JumpFPUnordered] -> [^8].
[RetN] -> [^(operands at: 0) = 0 ifTrue: [4] ifFalse: [8]].
[Stop] -> [^4].
"Arithmetic"
[AddCqR] -> [^self rotateable8bitSignedImmediate: (operands at: 0)
ifTrue: [:r :i :n| 4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[AndCqR] -> [^self rotateable8bitBitwiseImmediate: (operands at: 0)
ifTrue: [:r :i :n| 4]
ifFalse:
[self literalLoadInstructionBytes = 4
ifTrue: [8]
ifFalse:
[1 << (operands at: 0) highBit = ((operands at: 0) + 1)
ifTrue: [8]
ifFalse: [self literalLoadInstructionBytes + 4]]]].
[AndCqRR] -> [^self rotateable8bitBitwiseImmediate: (operands at: 0)
ifTrue: [:r :i :n| 4]
ifFalse:
[self literalLoadInstructionBytes = 4
ifTrue: [8]
ifFalse:
[1 << (operands at: 0) highBit = ((operands at: 0) + 1)
ifTrue: [8]
ifFalse: [self literalLoadInstructionBytes + 4]]]].
[CmpCqR] -> [^self rotateable8bitSignedImmediate: (operands at: 0)
ifTrue: [:r :i :n| 4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[OrCqR] -> [^self rotateable8bitImmediate: (operands at: 0)
ifTrue: [:r :i| 4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[SubCqR] -> [^self rotateable8bitSignedImmediate: (operands at: 0)
ifTrue: [:r :i :n| 4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[TstCqR] -> [^self rotateable8bitImmediate: (operands at: 0)
ifTrue: [:r :i| 4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[XorCqR] -> [^self rotateable8bitBitwiseImmediate: (operands at: 0)
ifTrue: [:r :i :n| 4]
ifFalse:
[self literalLoadInstructionBytes = 4
ifTrue: [8]
ifFalse:
[1 << (operands at: 0) highBit = ((operands at: 0) + 1)
ifTrue: [8]
ifFalse: [self literalLoadInstructionBytes + 4]]]].
[AddCwR] -> [^self literalLoadInstructionBytes + 4].
[AndCwR] -> [^self literalLoadInstructionBytes + 4].
[CmpCwR] -> [^self literalLoadInstructionBytes + 4].
[OrCwR] -> [^self literalLoadInstructionBytes + 4].
[SubCwR] -> [^self literalLoadInstructionBytes + 4].
[XorCwR] -> [^self literalLoadInstructionBytes + 4].
[AddRR] -> [^4].
[AndRR] -> [^4].
[CmpRR] -> [^4].
[OrRR] -> [^4].
[XorRR] -> [^4].
[SubRR] -> [^4].
[NegateR] -> [^4].
[LoadEffectiveAddressMwrR]
-> [^self rotateable8bitImmediate: (operands at: 0)
ifTrue: [:r :i| 4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[LogicalShiftLeftCqR] -> [^4].
[LogicalShiftRightCqR] -> [^4].
[ArithmeticShiftRightCqR] -> [^4].
[LogicalShiftLeftRR] -> [^4].
[LogicalShiftRightRR] -> [^4].
[ArithmeticShiftRightRR] -> [^4].
[AddRdRd] -> [^4].
[CmpRdRd] -> [^4].
[SubRdRd] -> [^4].
[MulRdRd] -> [^4].
[DivRdRd] -> [^4].
[SqrtRd] -> [^4].
"ARM Specific Arithmetic"
[SMULL] -> [^4].
[MSR] -> [^4].
[CMPSMULL] -> [^4]. "special compare for genMulR:R: usage"
"Data Movement"
[MoveCqR] -> [^self literalLoadInstructionBytes = 4
ifTrue: [self literalLoadInstructionBytes]
ifFalse:
[self rotateable8bitBitwiseImmediate: (operands at: 0)
ifTrue: [:r :i :n| 4]
ifFalse: [self literalLoadInstructionBytes]]].
[MoveCwR] -> [^self literalLoadInstructionBytes = 4
ifTrue: [self literalLoadInstructionBytes]
ifFalse:
[(self inCurrentCompilation: (operands at: 0))
ifTrue: [4]
ifFalse: [self literalLoadInstructionBytes]]].
[MoveRR] -> [^4].
[MoveRdRd] -> [^4].
[MoveAwR] -> [^(self isAddressRelativeToVarBase: (operands at: 0))
ifTrue: [4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[MoveRAw] -> [^(self isAddressRelativeToVarBase: (operands at: 1))
ifTrue: [4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[MoveAbR] -> [^(self isAddressRelativeToVarBase: (operands at: 0))
ifTrue: [4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[MoveRAb] -> [^(self isAddressRelativeToVarBase: (operands at: 1))
ifTrue: [4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[MoveRMwr] -> [^self is12BitValue: (operands at: 1)
ifTrue: [:u :i| 4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[MoveRdM64r] -> [^self literalLoadInstructionBytes + 4].
[MoveMbrR] -> [^self is12BitValue: (operands at: 0)
ifTrue: [:u :i| 4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[MoveRMbr] -> [^self is12BitValue: (operands at: 1)
ifTrue: [:u :i| 4]
ifFalse: [self literalLoadInstructionBytes + 4]].
+ [MoveRM16r] -> [^self is12BitValue: (operands at: 1)
+ ifTrue: [:u :i| 4]
+ ifFalse: [self literalLoadInstructionBytes + 4]].
[MoveM16rR] -> [^self rotateable8bitImmediate: (operands at: 0)
ifTrue: [:r :i| 4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[MoveM64rRd] -> [^self literalLoadInstructionBytes + 4].
[MoveMwrR] -> [^self is12BitValue: (operands at: 0)
ifTrue: [:u :i| 4]
ifFalse: [self literalLoadInstructionBytes + 4]].
[MoveXbrRR] -> [^4].
[MoveRXbrR] -> [^4].
[MoveXwrRR] -> [^4].
[MoveRXwrR] -> [^4].
[PopR] -> [^4].
[PushR] -> [^4].
[PushCw] -> [^self literalLoadInstructionBytes = 4
ifTrue: [self literalLoadInstructionBytes + 4]
ifFalse:
[(self inCurrentCompilation: (operands at: 0))
ifTrue: [8]
ifFalse:
[self rotateable8bitBitwiseImmediate: (operands at: 0)
ifTrue: [:r :i :n| 8]
ifFalse: [self literalLoadInstructionBytes + 4]]]].
[PushCq] -> [^self literalLoadInstructionBytes = 4
ifTrue: [self literalLoadInstructionBytes + 4]
ifFalse:
[self rotateable8bitBitwiseImmediate: (operands at: 0)
ifTrue: [:r :i :n| 8]
ifFalse: [self literalLoadInstructionBytes + 4]]].
[PrefetchAw] -> [^(self isAddressRelativeToVarBase: (operands at: 0))
ifTrue: [4]
ifFalse: [self literalLoadInstructionBytes + 4]].
"Conversion"
[ConvertRRd] -> [^8].
}.
^0 "to keep C compiler quiet"
!
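Many of the size cases above turn on whether a constant fits ARM's rotated 8-bit immediate form (an 8-bit value rotated right by an even amount), which is what the rotateable8bitImmediate:ifTrue:ifFalse: family tests. A minimal standalone sketch of that encodability check, in Python with a hypothetical function name (not part of VMMaker), assuming the standard ARMv7 data-processing immediate rule:

```python
def is_arm_rotateable_imm(value):
    """True if value fits ARMv7's data-processing immediate form:
    an 8-bit constant rotated right by an even amount (0..30)."""
    value &= 0xFFFFFFFF
    for rot in range(0, 32, 2):
        # Rotating left by rot undoes a right-rotation by rot.
        v = ((value << rot) | (value >> (32 - rot))) & 0xFFFFFFFF
        if v <= 0xFF:
            return True
    return False
```

Constants that fail this test cost an extra literal load, which is why the failing branches above add literalLoadInstructionBytes to the instruction size.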
Item was added:
+ ----- Method: CogARMCompiler>>concretizeMoveRM16r (in category 'generate machine code - concretize') -----
+ concretizeMoveRM16r
+ "Will get inlined into concretizeAt: switch."
+ <var: #offset type: #sqInt>
+ <inline: true>
+ | srcReg offset baseReg instrOffset|
+ srcReg := operands at: 0.
+ offset := operands at: 1.
+ baseReg := operands at: 2.
+ self is12BitValue: offset
+ ifTrue:
+ [ :u :immediate |
+ self machineCodeAt: 0 "strh srcReg, [baseReg, #immediate]"
+ put: (self strh: srcReg rn: baseReg plus: u imm: immediate).
+ ^machineCodeSize := 4]
+ ifFalse:
+ [(self isAddressRelativeToVarBase: offset)
+ ifTrue:
+ [self machineCodeAt: 0 put: (self adds: ConcreteIPReg rn: ConcreteVarBaseReg imm: offset - cogit varBaseAddress ror: 0).
+ instrOffset := 4]
+ ifFalse:
+ [instrOffset := self moveCw: offset intoR: ConcreteIPReg].
+ "strh srcReg, [baseReg, ConcreteIPReg]"
+ self machineCodeAt: instrOffset put: (self strh: srcReg rn: baseReg rm: ConcreteIPReg).
+ ^machineCodeSize := instrOffset + 4 ].
+ ^0 "to keep Slang happy"!
Item was changed:
----- Method: CogARMCompiler>>dispatchConcretize (in category 'generate machine code') -----
dispatchConcretize
"Attempt to generate concrete machine code for the instruction at address.
This is the inner dispatch of concretizeAt: actualAddress which exists only
to get around the branch size limits in the SqueakV3 (blue book derived)
bytecode set."
<returnTypeC: #void>
conditionOrNil ifNotNil:
[self concretizeConditionalInstruction.
^self].
opcode caseOf: {
"Noops & Pseudo Ops"
[Label] -> [^self concretizeLabel].
[Literal] -> [^self concretizeLiteral].
[AlignmentNops] -> [^self concretizeAlignmentNops].
[Fill32] -> [^self concretizeFill32].
[Nop] -> [^self concretizeNop].
"Control"
[Call] -> [^self concretizeCall]. "call code within code space"
[CallFull] -> [^self concretizeCallFull]. "call code anywhere in address space"
[JumpR] -> [^self concretizeJumpR].
[JumpFull] -> [^self concretizeJumpFull]. "jump within address space"
[JumpLong] -> [^self concretizeConditionalJump: AL]. "jumps within code space"
[JumpLongZero] -> [^self concretizeConditionalJump: EQ].
[JumpLongNonZero] -> [^self concretizeConditionalJump: NE].
[Jump] -> [^self concretizeConditionalJump: AL].
[JumpZero] -> [^self concretizeConditionalJump: EQ].
[JumpNonZero] -> [^self concretizeConditionalJump: NE].
[JumpNegative] -> [^self concretizeConditionalJump: MI].
[JumpNonNegative] -> [^self concretizeConditionalJump: PL].
[JumpOverflow] -> [^self concretizeConditionalJump: VS].
[JumpNoOverflow] -> [^self concretizeConditionalJump: VC].
[JumpCarry] -> [^self concretizeConditionalJump: CS].
[JumpNoCarry] -> [^self concretizeConditionalJump: CC].
[JumpLess] -> [^self concretizeConditionalJump: LT].
[JumpGreaterOrEqual] -> [^self concretizeConditionalJump: GE].
[JumpGreater] -> [^self concretizeConditionalJump: GT].
[JumpLessOrEqual] -> [^self concretizeConditionalJump: LE].
[JumpBelow] -> [^self concretizeConditionalJump: CC]. "unsigned lower"
[JumpAboveOrEqual] -> [^self concretizeConditionalJump: CS]. "unsigned greater or equal"
[JumpAbove] -> [^self concretizeConditionalJump: HI].
[JumpBelowOrEqual] -> [^self concretizeConditionalJump: LS].
[JumpFPEqual] -> [^self concretizeFPConditionalJump: EQ].
[JumpFPNotEqual] -> [^self concretizeFPConditionalJump: NE].
[JumpFPLess] -> [^self concretizeFPConditionalJump: LT].
[JumpFPGreaterOrEqual] -> [^self concretizeFPConditionalJump: GE].
[JumpFPGreater] -> [^self concretizeFPConditionalJump: GT].
[JumpFPLessOrEqual] -> [^self concretizeFPConditionalJump: LE].
[JumpFPOrdered] -> [^self concretizeFPConditionalJump: VC].
[JumpFPUnordered] -> [^self concretizeFPConditionalJump: VS].
[RetN] -> [^self concretizeRetN].
[Stop] -> [^self concretizeStop].
"Arithmetic"
[AddCqR] -> [^self concretizeNegateableDataOperationCqR: AddOpcode].
[AndCqR] -> [^self concretizeInvertibleDataOperationCqR: AndOpcode].
[AndCqRR] -> [^self concretizeAndCqRR].
[CmpCqR] -> [^self concretizeNegateableDataOperationCqR: CmpOpcode].
[OrCqR] -> [^self concretizeDataOperationCqR: OrOpcode].
[SubCqR] -> [^self concretizeSubCqR].
[TstCqR] -> [^self concretizeTstCqR].
[XorCqR] -> [^self concretizeInvertibleDataOperationCqR: XorOpcode].
[AddCwR] -> [^self concretizeDataOperationCwR: AddOpcode].
[AndCwR] -> [^self concretizeDataOperationCwR: AndOpcode].
[CmpCwR] -> [^self concretizeDataOperationCwR: CmpOpcode].
[OrCwR] -> [^self concretizeDataOperationCwR: OrOpcode].
[SubCwR] -> [^self concretizeDataOperationCwR: SubOpcode].
[XorCwR] -> [^self concretizeDataOperationCwR: XorOpcode].
[AddRR] -> [^self concretizeDataOperationRR: AddOpcode].
[AndRR] -> [^self concretizeDataOperationRR: AndOpcode].
[CmpRR] -> [^self concretizeDataOperationRR: CmpOpcode].
[OrRR] -> [^self concretizeDataOperationRR: OrOpcode].
[SubRR] -> [^self concretizeDataOperationRR: SubOpcode].
[XorRR] -> [^self concretizeDataOperationRR: XorOpcode].
[AddRdRd] -> [^self concretizeAddRdRd].
[CmpRdRd] -> [^self concretizeCmpRdRd].
[DivRdRd] -> [^self concretizeDivRdRd].
[MulRdRd] -> [^self concretizeMulRdRd].
[SubRdRd] -> [^self concretizeSubRdRd].
[SqrtRd] -> [^self concretizeSqrtRd].
[NegateR] -> [^self concretizeNegateR].
[LoadEffectiveAddressMwrR] -> [^self concretizeLoadEffectiveAddressMwrR].
[ArithmeticShiftRightCqR] -> [^self concretizeArithmeticShiftRightCqR].
[LogicalShiftRightCqR] -> [^self concretizeLogicalShiftRightCqR].
[LogicalShiftLeftCqR] -> [^self concretizeLogicalShiftLeftCqR].
[ArithmeticShiftRightRR] -> [^self concretizeArithmeticShiftRightRR].
[LogicalShiftLeftRR] -> [^self concretizeLogicalShiftLeftRR].
[LogicalShiftRightRR] -> [^self concretizeLogicalShiftRightRR].
"ARM Specific Arithmetic"
[SMULL] -> [^self concretizeSMULL] .
[CMPSMULL] -> [^self concretizeCMPSMULL].
[MSR] -> [^self concretizeMSR].
"Data Movement"
[MoveCqR] -> [^self concretizeMoveCqR].
[MoveCwR] -> [^self concretizeMoveCwR].
[MoveRR] -> [^self concretizeMoveRR].
[MoveAwR] -> [^self concretizeMoveAwR].
[MoveRAw] -> [^self concretizeMoveRAw].
[MoveAbR] -> [^self concretizeMoveAbR].
[MoveRAb] -> [^self concretizeMoveRAb].
[MoveMbrR] -> [^self concretizeMoveMbrR].
[MoveRMbr] -> [^self concretizeMoveRMbr].
+ [MoveRM16r] -> [^self concretizeMoveRM16r].
[MoveM16rR] -> [^self concretizeMoveM16rR].
[MoveM64rRd] -> [^self concretizeMoveM64rRd].
[MoveMwrR] -> [^self concretizeMoveMwrR].
[MoveXbrRR] -> [^self concretizeMoveXbrRR].
[MoveRXbrR] -> [^self concretizeMoveRXbrR].
[MoveXwrRR] -> [^self concretizeMoveXwrRR].
[MoveRXwrR] -> [^self concretizeMoveRXwrR].
[MoveRMwr] -> [^self concretizeMoveRMwr].
[MoveRdM64r] -> [^self concretizeMoveRdM64r].
[PopR] -> [^self concretizePopR].
[PushR] -> [^self concretizePushR].
[PushCq] -> [^self concretizePushCq].
[PushCw] -> [^self concretizePushCw].
[PrefetchAw] -> [^self concretizePrefetchAw].
"Conversion"
[ConvertRRd] -> [^self concretizeConvertRRd]}!
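The EQ, NE, MI, PL, etc. arguments passed to concretizeConditionalJump: above are the standard ARM condition-code nibbles that occupy bits 28-31 of every instruction. A hedged sketch of how a conditional branch like those is encoded (the table and helper name are illustrative, not from VMMaker; assumes the ARMv7 B&lt;cond&gt; layout):

```python
# Standard ARMv7 condition nibbles (bits 28-31 of every instruction).
ARM_COND = {
    'EQ': 0x0, 'NE': 0x1, 'CS': 0x2, 'CC': 0x3,
    'MI': 0x4, 'PL': 0x5, 'VS': 0x6, 'VC': 0x7,
    'HI': 0x8, 'LS': 0x9, 'GE': 0xA, 'LT': 0xB,
    'GT': 0xC, 'LE': 0xD, 'AL': 0xE,
}

def b_cond(cond, offset_words):
    """Encode B<cond> with a signed 24-bit word offset,
    relative to PC + 8 (i.e. two words ahead of the branch)."""
    return (ARM_COND[cond] << 28) | (0xA << 24) | (offset_words & 0xFFFFFF)
```

For example, b_cond('AL', -2) gives 0xEAFFFFFE, the classic branch-to-self.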
Item was changed:
----- Method: CogARMCompiler>>shiftSetsConditionCodesFor: (in category 'testing') -----
shiftSetsConditionCodesFor: aConditionalJumpOpcode
"Check which flags the opcode needs set - ARM doesn't set V when simply MOVing"
^aConditionalJumpOpcode caseOf:
{ [JumpNegative] -> [true].
[JumpZero] -> [true].
+ [JumpLess] -> [true].
}
otherwise: [self halt: 'unhandled opcode in shiftSetsConditionCodesFor:'. false]!
Item was added:
+ ----- Method: CogARMCompiler>>strh:rn:plus:imm: (in category 'ARM convenience instructions') -----
+ strh: destReg rn: baseReg plus: u imm: immediate8bitValue
+ " STRH destReg, [baseReg, 'u' immediate8bitValue] u=0 -> subtract imm; =1 -> add imm "
+ ^self memM16xr: AL reg: destReg base: baseReg p: 1 u: u w: 0 l: 0 offset: immediate8bitValue!
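For reference, the halfword store this helper emits has its own encoding format: unlike the 12-bit LDR/STR immediate, STRH takes only an 8-bit offset, split into two nibbles around a fixed 0b1011 field. A sketch of the STRH immediate-offset encoding that memM16xr:reg:base:p:u:w:l:offset: presumably produces (the function name is mine; assumes the standard ARMv7 extra load/store layout):

```python
def encode_strh_imm(rd, rn, u, imm8, cond=0xE):
    """STRH rd, [rn, #+/-imm8]: P=1, W=0, L=0, bits 7-4 = 0b1011;
    the 8-bit offset is split into high (bits 11-8) and low (bits 3-0) nibbles."""
    assert 0 <= imm8 <= 0xFF
    return ((cond << 28) | (1 << 24) | (u << 23) | (1 << 22)
            | (rn << 16) | (rd << 12)
            | ((imm8 >> 4) << 8) | (0xB << 4) | (imm8 & 0xF))
```

For example, encode_strh_imm(0, 1, 1, 0) yields 0xE1C100B0, i.e. strh r0, [r1].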
Item was changed:
----- Method: CogOutOfLineLiteralsARMCompiler>>usesOutOfLineLiteral (in category 'testing') -----
usesOutOfLineLiteral
"Answer if the receiver uses an out-of-line literal. Needs only
to work for the opcodes created with gen:literal:operand: et al."
opcode
caseOf: {
[CallFull] -> [^true].
[JumpFull] -> [^true].
"Arithmetic"
[AddCqR] -> [^self rotateable8bitSignedImmediate: (operands at: 0) ifTrue: [:r :i :n| false] ifFalse: [true]].
[AndCqR] -> [^self rotateable8bitBitwiseImmediate: (operands at: 0)
ifTrue: [:r :i :n| false]
ifFalse: [1 << (operands at: 0) highBit ~= ((operands at: 0) + 1)]].
[AndCqRR] -> [^self rotateable8bitBitwiseImmediate: (operands at: 0)
ifTrue: [:r :i :n| false]
ifFalse: [1 << (operands at: 0) highBit ~= ((operands at: 0) + 1)]].
[CmpCqR] -> [^self rotateable8bitSignedImmediate: (operands at: 0) ifTrue: [:r :i :n| false] ifFalse: [true]].
[OrCqR] -> [^self rotateable8bitImmediate: (operands at: 0) ifTrue: [:r :i| false] ifFalse: [true]].
[SubCqR] -> [^self rotateable8bitSignedImmediate: (operands at: 0) ifTrue: [:r :i :n| false] ifFalse: [true]].
[TstCqR] -> [^self rotateable8bitImmediate: (operands at: 0) ifTrue: [:r :i| false] ifFalse: [true]].
[XorCqR] -> [^self rotateable8bitBitwiseImmediate: (operands at: 0) ifTrue: [:r :i :n| false] ifFalse: [true]].
[AddCwR] -> [^true].
[AndCwR] -> [^true].
[CmpCwR] -> [^true].
[OrCwR] -> [^true].
[SubCwR] -> [^true].
[XorCwR] -> [^true].
[LoadEffectiveAddressMwrR]
-> [^self rotateable8bitImmediate: (operands at: 0) ifTrue: [:r :i| false] ifFalse: [true]].
"Data Movement"
[MoveCqR] -> [^self rotateable8bitImmediate: (operands at: 0) ifTrue: [:r :i| false] ifFalse: [true]].
[MoveCwR] -> [^(self inCurrentCompilation: (operands at: 0)) not].
[MoveAwR] -> [^(self isAddressRelativeToVarBase: (operands at: 0)) ifTrue: [false] ifFalse: [true]].
[MoveRAw] -> [^(self isAddressRelativeToVarBase: (operands at: 1)) ifTrue: [false] ifFalse: [true]].
[MoveAbR] -> [^(self isAddressRelativeToVarBase: (operands at: 0)) ifTrue: [false] ifFalse: [true]].
[MoveRAb] -> [^(self isAddressRelativeToVarBase: (operands at: 1)) ifTrue: [false] ifFalse: [true]].
[MoveRMwr] -> [^self is12BitValue: (operands at: 1) ifTrue: [:u :i| false] ifFalse: [true]].
[MoveRdM64r] -> [^self is12BitValue: (operands at: 1) ifTrue: [:u :i| false] ifFalse: [true]].
[MoveMbrR] -> [^self is12BitValue: (operands at: 0) ifTrue: [:u :i| false] ifFalse: [true]].
[MoveRMbr] -> [^self is12BitValue: (operands at: 1) ifTrue: [:u :i| false] ifFalse: [true]].
[MoveM16rR] -> [^self rotateable8bitImmediate: (operands at: 0) ifTrue: [:r :i| false] ifFalse: [true]].
+ [MoveRM16r] -> [^self is12BitValue: (operands at: 1) ifTrue: [:u :i| false] ifFalse: [true]].
[MoveM64rRd] -> [^self is12BitValue: (operands at: 0) ifTrue: [:u :i| false] ifFalse: [true]].
[MoveMwrR] -> [^self is12BitValue: (operands at: 0) ifTrue: [:u :i| false] ifFalse: [true]].
[PushCw] -> [^(self inCurrentCompilation: (operands at: 0)) not].
[PushCq] -> [^self rotateable8bitImmediate: (operands at: 0) ifTrue: [:r :i| false] ifFalse: [true]].
[PrefetchAw] -> [^(self isAddressRelativeToVarBase: (operands at: 0)) ifTrue: [false] ifFalse: [true]].
}
otherwise: [self assert: false].
^false "to keep C compiler quiet"
!
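Several cases above (AndCqR, AndCqRR, XorCqR and their sizing counterparts in computeMaximumSize) additionally accept constants of the form 2^n - 1, via the test `1 << (operands at: 0) highBit = ((operands at: 0) + 1)`: such low-bit masks can be synthesized in two instructions rather than fetched from an out-of-line literal. A compact Python equivalent of that predicate (hypothetical name, for illustration only):

```python
def is_low_bit_mask(value):
    """True when value == 2**n - 1 for some n >= 1,
    i.e. the set bits are contiguous from bit 0 upward."""
    return value > 0 and (value & (value + 1)) == 0
```

The bit trick works because adding 1 to an all-low-bits mask carries into a single bit above the mask, leaving no overlap with the original value.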