[Vm-dev] Cross Language Benchmarking

Stefan Marr smalltalk at stefan-marr.de
Mon Jul 25 11:38:04 UTC 2016


Hi:

A while ago, I already posted some numbers comparing the CogVM with Java [1].

Now I got a few more things included:

As a plot:
https://ibin.co/2pGeHIPlIWua.png


As numbers:
	Runtime Factor over Java
	geomean	sd	min	max	median
Java	1.00	0.00	1.00	1.00	1.00
Crystal	1.85	2.45	0.79	8.90	1.49
SOMns	2.00	0.77	0.93	3.21	1.97
Node.js	2.89	3.33	1.14	12.25	2.54
Pharo	7.31*	4.28	3.45	inf	6.64
MRI	45.62	19.48	19.37	81.00	43.98
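For readers who want to check how the summary columns are derived: each factor is a per-benchmark slowdown relative to Java, and the first column aggregates those factors with a geometric mean. A minimal sketch in Python (the runtimes below are made up for illustration; the real numbers come from the ReBench runs):

```python
import math

# Made-up average runtimes in ms; the real data comes from the benchmark runs.
java_ms  = {"DeltaBlue": 100.0, "Richards": 120.0, "Json": 80.0}
pharo_ms = {"DeltaBlue": 650.0, "Richards": 840.0, "Json": 560.0}

# Runtime factor per benchmark: how many times slower than Java.
factors = [pharo_ms[b] / java_ms[b] for b in java_ms]

# The geometric mean treats a 2x slowdown and a 2x speedup symmetrically,
# which makes it the usual choice for aggregating runtime ratios.
geomean = math.exp(sum(math.log(f) for f in factors) / len(factors))
print(round(geomean, 2))  # 6.83 for these made-up numbers
```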

This is based on my Are We Fast Yet collection of benchmarks [2].
They are specifically designed to benchmark compiler effectiveness across languages; the design guidelines are documented on GitHub.
To be more representative than typical micro benchmarks or the Computer Language Benchmarks Game, the set includes 5 benchmarks with several hundred lines of code each: CD, DeltaBlue, Havlak, Json, and Richards.

From last time, I remember comments like "yeah, this should be a variable caching a result, and this should be rewritten like this", etc. My goal is to measure nicely factored and written code, not code that is written to make the compiler's job easier. So, all of these things are left in explicitly as a challenge to the compiler.

Of course, this means you need to take the numbers with a grain of salt; they don't necessarily show how fast or slow an application would be on the CogVM if it were optimized specifically for it.
Instead, the numbers give an intuition of how much room for improvement is left for Cog (and perhaps Sista at some point).

Note further that the CogVM does not handle the CD benchmark well; there it is something like 4 to 5 orders of magnitude slower.
So, while the table says the CogVM (based on get.pharo.org/vm50) is about 7.3x slower than Java, that figure excludes CD because the run didn't finish.


In case you are interested in using the benchmarks, see
https://github.com/smarr/are-we-fast-yet/tree/master/benchmarks/Smalltalk for the code.

To keep the benchmarks in the same file syntax as SOM, they are loaded with a SOM parser when the image is built: build-image.st
The whole setup is created with /implementations/build-pharo.sh.
And, the benchmarks are executed via ReBench and the /rebench.conf.
Other details that might be helpful are documented via the Travis CI setup.

Best regards
Stefan


[1] http://forum.world.st/Some-Performance-Numbers-Java-vs-CogVM-vs-SOM-td4817800.html
[2] https://github.com/smarr/are-we-fast-yet

-- 
Stefan Marr
Johannes Kepler Universität Linz
http://stefan-marr.de/research/




