[Vm-dev] postponing tempVector creation

Eliot Miranda eliot.miranda at gmail.com
Wed Oct 16 16:47:17 UTC 2013


Hi Clement,


On Wed, Oct 16, 2013 at 5:53 AM, Clément Bera <bera.clement at gmail.com> wrote:

>
> Hello,
>
> Recently I realized that the tempVector is always created at the beginning
> of the method. In the case of:
>
> foo
>     | temp |
>     true ifTrue: [ ^ self ].
>     [ temp := 1 ] value.
>
> This means that even when the execution flow takes the quick-return
> path, the tempVector is created although it is never used. I hacked the
> compiler a bit to create the tempVector lazily, only when it is needed
> (on access to the tempVector or on block closure creation), which means
> that here the tempVector would be created just before the pushClosure
> bytecode instead of at the beginning of the method.
>
> As a result, this specific method was executed 50,000,000 times per
> second instead of 21,000,000 times per second, probably because some
> useless garbage collection was avoided.
>
> The problem with this lazy tempVector initialization is that I never
> know where I am in the control flow graph. This means that in the case:
>
> bar
>     | temp |
>     false ifTrue: [ temp := 1 ].
>     [ temp := 2 ].
>
> It is hard to detect where to add the extra bytecode for lazy tempVector
> initialization, because the first place it is needed is inside a branch:
> if you create the tempVector in that branch and the execution flow goes
> the other way, the tempVector will not have been initialized by the time
> the block closure is created.
>
> Has any of you already investigated in this direction? Is it worth
> implementing?
>
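
For reference, a quick way to see this is to look at the method's
symbolic bytecode (a minimal sketch, assuming CompiledMethod>>#symbolic
as in Squeak/Pharo, and that the #foo above is installed on some
hypothetical class Foo):

    (Foo >> #foo) symbolic.
    "With the closure bytecode set, the very first bytecodes push a new
     one-element Array and pop it into a temp slot, before the
     'true ifTrue: [ ^ self ]' test is even reached; the lazy scheme
     would move that pair down to just before the closure creation."

For the #bar case, the safe spot would presumably be the earliest point
that dominates every use of temp, i.e. before the 'false ifTrue:' test,
which is exactly why the placement is awkward to compute from a linear
bytecode stream.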

My philosophy of optimization is that one should optimize what looks to be
expensive.  I didn't always operate like this.  When I was younger, working
on BrouHaHa, I would think, "yes, that must be slow", go implement it, and
then find no difference in performance.  For example, I added setters as
quick methods.  I had fun, and I learned how to implement things, but the
real lesson was that one should attack the largest costs.  Now, if you see
some case where temp vector creation really slows things down, then by all
means try and optimize it.  But I (and I mean me) wouldn't waste my time
until I saw a significant cost due to temp vector creation.
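
If you do want to check whether it is significant, a quick profile of a
workload that creates lots of temp vectors is enough (a rough sketch,
assuming MessageTally as in Squeak/Pharo and the hypothetical Foo class
again):

    MessageTally spyOn: [ 1 to: 10000000 do: [ :i | Foo new foo ] ].
    "The time spent in #foo itself, plus the GC statistics at the end
     of the tally, give a hint whether the extra Arrays are what is
     being paid for, which is what the 50,000,000 vs 21,000,000 numbers
     above suggest."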

Note that Spur, in making allocation much cheaper, makes it feasible to
speed up temp vector creation a lot.  And if Spur is a factor of two
faster, then temp vector creation (which currently calls into the
interpreter) will be relatively twice as expensive until it is optimized.
So it could be even more worthwhile to do in the Spur JIT.  Then it would
be interesting to compare deferring temp vector creation against fast
machine-code temp vector creation.
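
When that comparison is made, something as simple as the following
should do for a first cut (a sketch, assuming Pharo's
BlockClosure>>#bench, which is presumably where the per-second figures
above come from, and the same hypothetical Foo):

    "Run once with the stock compiler, once with the lazy-tempVector
     compiler (and later under the Spur JIT), and compare the reported
     rates."
    [ Foo new foo ] bench.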


>
> Thanks for any answers.
>
>


-- 
best,
Eliot