This is a theoretical question rather than a practical one concerning Exupery, but I'd like to know what you know (or think) about special compiler heuristics for simplifying expressions when inlining.
The problem can be demonstrated by the following basic example. Suppose you have this code:

mainMethod
    object = 0 ifTrue: [ object method1 ] ifFalse: [ object method2 ]
and now the method bodies:

method1
    object = 0 ifTrue: [ do something ] ifFalse: [ do something else ]

method2
    object ~= 0 ifTrue: [ do something ] ifFalse: [ do something else ]
Now, the question is: when you inline method1/method2 into mainMethod, you can see that the tests in the inlined methods become redundant. If we inline both methods, the code will look like this:

mainMethod
    object = 0
        ifTrue: [ object = 0 ifTrue: [ do something ] ifFalse: [ do something else ] ]
        ifFalse: [ object ~= 0 ifTrue: [ do something ] ifFalse: [ do something else ] ]
Are there any techniques that can help the compiler remove such redundancy when analyzing code?
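The redundancy above can be removed mechanically. Here is a minimal sketch in Python (not Exupery's actual implementation; the AST classes are invented for illustration): while walking the inlined tree, record which conditions are known true or false on the current path, and fold any nested test on the same condition or on its negation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cond:
    var: str      # e.g. 'object'
    op: str       # '=' or '~='
    value: int    # e.g. 0

    def negated(self):
        return Cond(self.var, '~=' if self.op == '=' else '=', self.value)

@dataclass
class If:
    cond: Cond
    then: object   # nested If, or a leaf string like 'do something'
    else_: object

def fold(node, known=frozenset()):
    """Remove tests whose outcome is implied by conditions on this path."""
    if not isinstance(node, If):
        return node
    if node.cond in known:               # condition already known true here
        return fold(node.then, known)
    if node.cond.negated() in known:     # condition already known false here
        return fold(node.else_, known)
    return If(node.cond,
              fold(node.then, known | {node.cond}),
              fold(node.else_, known | {node.cond.negated()}))

# The inlined mainMethod from the example:
eq0 = Cond('object', '=', 0)
ne0 = Cond('object', '~=', 0)
inlined = If(eq0,
             If(eq0, 'A', 'B'),    # inlined method1 body
             If(ne0, 'C', 'D'))    # inlined method2 body

folded = fold(inlined)
# Both inner tests are folded away: folded is If(eq0, 'A', 'C').
```

Note that this relies on knowing that ~= is exactly the negation of = and that re-evaluating the condition is side-effect free, which is exactly the hard part discussed below.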
Oops, I made a mistake: obviously the method bodies should use self:

method1
    self = 0 ifTrue: [ do something ] ifFalse: [ do something else ]

method2
    self ~= 0 ifTrue: [ do something ] ifFalse: [ do something else ]
Igor Stasenko writes:
> Are there any techniques that can help the compiler remove such redundancy when analyzing code?
What you're talking about sounds very much like specialisation, which Craig Chambers did in a Self VM before they started using dynamic type feedback.
The harder part in your case is figuring out what = should mean: there are 126 implementers of = in my image. That said, you could look in the PICs (polymorphic inline caches) to see which classes have been used previously, then handle just those cases while keeping an un-optimised fall-back path. If we could guarantee that = was side-effect free, then specialisation would be a lot easier.
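This suggestion can be sketched as follows; the function names and the PIC representation are assumptions for illustration, not Exupery's API. The idea is to emit a fast path only for the receiver classes the inline cache has actually seen, with a full message send as the un-optimised fall-back:

```python
def generic_send(receiver, selector, arg):
    # Stand-in for the VM's full message lookup and send, which may run
    # arbitrary user-defined #= code.
    assert selector == '='
    return receiver == arg

def specialise_equals(pic_classes):
    """Build an 'x = 0' test specialised for the classes a PIC has seen."""
    def specialised(receiver):
        if type(receiver) in pic_classes:
            # Fast path: for these classes we assume #= is side-effect
            # free, so the primitive comparison is safe to use directly.
            return receiver == 0
        # Un-optimised fall-back for receivers the PIC never saw.
        return generic_send(receiver, '=', 0)
    return specialised

# Usage: suppose the PIC recorded that only SmallIntegers (modelled here
# as Python int) ever reached this send site.
fast_eq0 = specialise_equals({int})
```

The fall-back path is what makes the optimisation safe: a receiver class that never appeared in the cache still gets the full, un-specialised send.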
Bryce
On 26/12/2007, bryce@kampjes.demon.co.uk wrote:
> That said, you could look in the PICs to see which classes have been used previously, then handle just those cases while keeping an un-optimised fall-back path.
Suppose the code provided is code generated by the compiler (to deal with oops). As for SmallIntegers, you can check the tag bit only once and then skip the check in later code, because you already know that it's a SmallInteger. The same can be possible for objects (if we know what = does). In general, we can reduce the problem to proving that, if we got a result from the #= message and the receiver has not changed, then we should expect a similar result when we call #= later. So we are free to replace a send of #= with the result of the previous invocation.
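This observation amounts to common-subexpression elimination restricted to sends known to be pure: reuse the result of an earlier #= on the same receiver and argument, invalidating the cached result if the receiver may have changed in between. A hedged sketch over an invented straight-line instruction format (not Exupery's IR):

```python
def eliminate_pure_sends(instructions, pure_selectors=frozenset({'='})):
    """Replace repeated pure sends with the result of the first one.

    instructions: (dest, selector, receiver, arg) tuples for sends, or
    ('store', var) tuples marking a write that invalidates cached
    results mentioning var.
    """
    available = {}   # (selector, receiver, arg) -> dest holding the result
    out = []
    for instr in instructions:
        if instr[0] == 'store':
            _, var = instr
            # A write to var may change what sends involving var answer.
            available = {k: v for k, v in available.items()
                         if var not in (k[1], k[2])}
            out.append(instr)
            continue
        dest, selector, receiver, arg = instr
        key = (selector, receiver, arg)
        if selector in pure_selectors and key in available:
            # Redundant pure send: reuse the earlier result.
            out.append(('copy', dest, available[key]))
        else:
            out.append(instr)
            if selector in pure_selectors:
                available[key] = dest
    return out

code = [('t1', '=', 'object', '0'),
        ('t2', '=', 'object', '0')]   # same receiver, unchanged: redundant
optimised = eliminate_pure_sends(code)
# optimised[1] is ('copy', 't2', 't1'): the second send is replaced.
```

The invalidation on stores is the conservative half of the proof obligation: without it, reusing the old result would be unsound.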
Igor Stasenko writes:
> In general, we can reduce the problem to proving that, if we got a result from the #= message and the receiver has not changed, then we should expect a similar result when we call #= later.
If we're talking about type checking (e.g. the SmallInteger tag bit), then the optimisation is specialisation.
For Exupery, I'll probably only look to optimise such cases after inlining. The more general optimisation will be left until after specialisation, to remove type checking.
A common case would be doing arithmetic on floats from a float array: we'd want to specialise the code for the float-array case, then remove all but one of the checks that make sure it is a float array.
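The float-array case can be sketched like this (illustrative Python, not VM code): guard once before the loop, run a specialised loop with no per-element type checks, and fall back to the generic path when the guard fails. In a real VM the guard would be a single check of the array's class (e.g. FloatArray) rather than a scan of the elements.

```python
def sum_squares(xs):
    # One up-front guard standing in for a single class check on the
    # array; here we scan because Python lists are heterogeneous.
    if all(type(x) is float for x in xs):
        # Specialised loop: no per-element type checks remain; a
        # compiler would emit unboxed float arithmetic here.
        total = 0.0
        for x in xs:
            total += x * x
        return total
    # Generic fall-back: full dynamic dispatch per element.
    return sum(x * x for x in xs)
```

The point is the placement of the check, not the arithmetic: hoisting the type test out of the loop leaves exactly one check on the fast path, matching "remove all but one of the checks" above.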
Bryce