Thue-Morse and performance: Squeak vs. Strongtalk vs. VisualWorks

Klaus D. Witzel klaus.witzel at cobss.com
Sun Dec 17 12:24:37 UTC 2006


Thank you Michael for your illustrative response.

I had taken most of the steps you mention before posting, but the one  
about stressing a small PIC size was rather unexpected. Perhaps it would  
be interesting to find out the actual limit and try again. But the  
performance of this mini-morphic situation is not convincing.
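Something along these lines could probe the limit (just a sketch; the  
particular receivers and the iteration count are arbitrary choices):

```smalltalk
"Sketch: time the same send site with a growing number of receiver
classes, to see roughly where a PIC of ~8 entries falls over."
| receivers |
receivers := { 1. 1/2. 1.0. $a. 'x'. #x. #(1). OrderedCollection new.
	Dictionary new. Set new. Bag new. Object new }.
2 to: receivers size do: [ :n |
	| subset ms |
	subset := receivers first: n.
	ms := [ 1 to: 500000 do: [ :i |
		"one send site, n different receiver classes"
		(subset at: i \\ n + 1) yourself ] ] timeToRun.
	Transcript show: n printString , ' classes: ' , ms printString , ' ms'; cr ]
```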

I intentionally used instances of classes which inherit from each other:  
this is the typical situation when processing collections, regardless of  
using Traits. And yes, as you mention, it can be interesting to have actual  
different *implementations* of a message. But I doubt that there will be a  
remarkable difference, since the methodCache is per receiver class (and so,  
IMO, nothing changes for the example in my previous post).
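For completeness, a sketch of what such different *implementations* could  
look like (Shape and Circle are made-up names, and the method definitions  
are shown in the usual ClassName >> selector email shorthand):

```smalltalk
"Sketch: Circle inherits from Shape but overrides #describe, so the
send site at the bottom dispatches to two genuinely different method
bodies, not merely two receiver classes sharing one implementation."
Object subclass: #Shape
	instanceVariableNames: ''
	classVariableNames: ''
	poolDictionaries: ''
	category: 'PIC-Bench'.

Shape subclass: #Circle
	instanceVariableNames: ''
	classVariableNames: ''
	poolDictionaries: ''
	category: 'PIC-Bench'.

Shape >> describe
	^ 'a shape'

Circle >> describe
	^ 'a circle'

"Then time the polymorphic send:"
| objects |
objects := { Shape new. Circle new }.
[ 1 to: 1000000 do: [ :i |
	(objects at: i \\ 2 + 1) describe ] ] timeToRun
```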

Thanks again.

/Klaus

On Sun, 17 Dec 2006 12:04:21 +0100, Michael Haupt wrote:

> Hi Klaus,
>
> I haven't yet reproduced the benchmarks, but I'd never judge the
> performance of an entire implementation based on a single simplistic
> benchmark. Sorry if this sounds very negative; I don't mean to pick on
> you.
>
> The benchmark you have run is a micro-benchmark that measures the
> performance of just one point of interest. Basically, it measures the
> performance of sending #yourself to objects of 8 different classes.
> I believe that #yourself is implemented in Object and never
> overridden.
>
> (Having 8 different classes doesn't exceed typical PICs; they normally
> have 8 entries, if I'm not mistaken. The benchmark could really stress
> the VM if much more than 8 different classes were chosen - but in the
> end, it would be more interesting to have actual different
> *implementations* of a message, because the VM can quite easily
> determine that the implementation for #yourself is the same for all
> objects.)
>
> In a nutshell, micro-benchmarks are fine but should be more diverse.  
> Measure
> - monomorphic call sites (just one target),
> - polymorphic call sites (small number of different targets), and
> - megamorphic call sites (very large number of different targets).
>
> The results of all of these together would tell more.
>
> Also, an optimising VM normally takes some time to start optimising -
> before the adaptive optimisation logic sees that there are some "hot
> spots", usually the interpreter has to execute stuff for some time. Of
> course, this doesn't hold for Squeak.
>
> And once the VM has started optimising, there is still some impact due
> to optimisation (it consumes time as well!). You normally let the
> benchmark run several times until you can be sure that the VM has
> applied all optimisations and measure the performance yielded by this
> "steady state". This results in numbers that report only actual
> performance instead of VM and optimisation interference.
>
> I wonder whether there is something like SPECjvm98 for Smalltalk systems.
>
> Of course, we also shouldn't forget that Strongtalk has not been
> developed for some 10 years now, whereas VisualWorks has been
> constantly maintained by at least one VM guru. ;-)
>
> Best,
>
> Michael

More information about the Squeak-dev mailing list