[squeak-dev] A speedcenter for Squeak

Stefan Marr smalltalk at stefan-marr.de
Fri Jun 10 16:22:23 UTC 2016


Hi Tim:

Out of curiosity, where exactly did you get the GraphSearch and Json benchmarks from?
Are they consistent with the latest versions at https://github.com/smarr/are-we-fast-yet/tree/master/benchmarks/SOM?

Just wondering. Of course, it would also be interesting to have a highly optimized Smalltalk like TruffleSOM on there, just to keep people motivated to reach state-of-the-art performance ;)

Btw, I recently added the Collision Detection and Havlak benchmarks to AWFY. Those are additional larger benchmarks on the level of Richards and DeltaBlue. They should be a little more representative than microbenchmarks.

Best regards
Stefan

> On 08 Jun 2016, at 16:43, Tim Felgentreff <timfelgentreff at gmail.com> wrote:
> 
> Hi,
> 
> I sent around a note earlier about a benchmarking tool that we're using internally to track RSqueak/VM performance on each commit. Every time Eliot releases a new set of Cog VMs, I also manually trigger the system to run benchmarks on Cog. (Once we move the proper VM to GitHub, I will set it up to test each commit on the main development branch and the release branch as well, so we will have very detailed breakdowns.) We wanted to share this setup and the results with the community.
> 
> We're collecting results in a Codespeed website (just a frontend for presenting the data), which we moved to speed.squeak.org today; it is also linked from the squeak.org website (http://squeak.org/codespeed/).
> 
> We have some info about the setup on the about page: http://speed.squeak.org/about. On the Changes tab, you can see the most recent results per platform and environment, with details about the machines at the bottom. Note that we calculate all the statistics on the workers themselves and only send the time and standard deviation, so the min and max values you see on the website are bogus.
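
As an illustration of that worker-side computation, here is a minimal Squeak
sketch of the mean/standard-deviation step; MyBenchmark, its #run selector,
and the iteration count are placeholders, not the actual BenchmarkRunner API:

    | times n mean variance stdDev |
    times := (1 to: 10) collect: [:i |
        [MyBenchmark new run] timeToRun]. "milliseconds per run"
    n := times size.
    mean := (times inject: 0 into: [:sum :t | sum + t]) / n.
    variance := (times inject: 0 into: [:sum :t | sum + (t - mean) squared]) / (n - 1).
    stdDev := variance sqrt.
    "only the mean and stdDev are reported; per-run min and max never leave the worker"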
> 
> Finally, the code for the workers is also on GitHub (https://github.com/HPI-SWA-Lab/RSqueak-Benchmarking), and the benchmarks are all organized on SqueakSource (http://www.hpi.uni-potsdam.de/hirschfeld/squeaksource/BenchmarkRunner.html). Right now I've just dumped benchmarks from various sources in there, which is why you see the same benchmark implemented multiple times in different ways, and why some microbenchmarks don't make much sense as they are. We're happy to get comments, feedback, or updated versions of the benchmarking packages. Updating the benchmarking code is easy, and we hope this setup proves useful enough for the community to warrant continuously updating and extending the set of benchmarks.
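
Loading the benchmark packages into an image should only take registering
that repository with Monticello, along these lines (a sketch using the stock
Monticello API, assuming the usual SqueakSource repository URL, i.e. the
project page minus the .html; pick the concrete package versions in the
Monticello browser afterwards):

    | repo |
    repo := MCHttpRepository
        location: 'http://www.hpi.uni-potsdam.de/hirschfeld/squeaksource/BenchmarkRunner'
        user: ''
        password: ''.
    "make the repository show up in the Monticello browser"
    MCRepositoryGroup default addRepository: repo.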
> 
> We are also planning to add more platforms; the setup should make this fairly painless, we just need the dedicated machines. We've been testing the standard Cog/Spur VM on an Ubuntu machine, and today we added a Raspberry Pi 1 that is still churning through the latest Cog and RSqueak/VM commits. We'd like to add a Mac and a Windows box, and maybe SqueakJS and other builds of the Squeak VM, too.
> 
> Cheers,
> Tim
> 


