[Vm-dev] [Pharo-dev] Fed up
jan.vrany at fit.cvut.cz
Thu Jan 23 10:35:43 UTC 2020
>> 2) the lack of inline caches for #perform: (again, I am just guessing
>> in this case).
> Right. There is only the first level method lookup cache so it has
> interpreter-like performance. The selector and class of the receiver have to
> be hashed and the first-level method lookup cache probed. Way slower than
> block activation. I will claim though that Cog/Spur OpenSmalltalk's JIT
> perform implementation is as good as or better than any other Smalltalk
> VM's. IIRC VW machine-codes (or coded) only perform: and perform:with:
Do you have a benchmark for perform: et al.? I'd be quite interested.
Last time I looked into this topic, I struggled to come up with a benchmark
that would resemble any real workload (and whose results I could
interpret :-)
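For the micro-benchmark side at least, a minimal sketch (mine, not from the thread) would be to time the same send made directly and via #perform:. This assumes a Squeak/Pharo image (Time class>>millisecondsToRun: is standard there); the iteration count and the #yourself selector are arbitrary, and this only isolates the dispatch overhead, nothing real-workload-like:

```smalltalk
"Hypothetical micro-benchmark: same send, direct vs. via #perform:.
 Iteration count and selector are arbitrary illustrative choices."
| n directMs performMs |
n := 10000000.
directMs := Time millisecondsToRun:
	[ 1 to: n do: [ :i | i yourself ] ].
performMs := Time millisecondsToRun:
	[ 1 to: n do: [ :i | i perform: #yourself ] ].
Transcript
	show: 'direct: ', directMs printString, ' ms';
	show: '  via perform: ', performMs printString, ' ms';
	cr
```

The gap between the two numbers roughly reflects the hash-and-probe cost of the first-level lookup cache versus a JITted (and possibly inline-cached) direct send, though a good JIT may also optimize the direct-send loop away, so results need careful interpretation.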