[squeak-dev] A Benchmarking tool for the trunk?

Tobias Pape Das.Linux at gmx.de
Thu Apr 28 20:56:07 UTC 2016


On 28.04.2016, at 22:29, Levente Uzonyi <leves at caesar.elte.hu> wrote:

> On Wed, 27 Apr 2016, Tim Felgentreff wrote:
> 
>> Hi,
>> We have a proposal for a tool that we think might be useful to have in trunk.
>> We spent some time pulling together benchmarks from various sources (papers, the mailing list, projects on SqueakSource, ...) and combining them
>> with an extended version of Stefan Marr's benchmarking framework, SMark. The tool and framework are modeled after SUnit and include
>> different execution suites as well as code to estimate confidence intervals across multiple runs. It also draws graphs over multiple
>> runs, so you can look at things like warmup and GC behavior and see how much time is spent doing incremental GCs and full GCs versus plain execution.
>> As part of this, I fixed the EPS export so these graphs can be exported in a scalable format.
>> Here is a picture of the tool: https://dl.dropboxusercontent.com/u/26242153/screenshot.jpg
>> As I said, it's modeled after TestRunner and SUnit: benchmarks subclass the "Benchmark" class, any method whose name starts with "bench" is a
>> benchmark, and you can have setUp and tearDown methods as usual. By default, the benchmarks are run under an Autosize runner that re-executes each
>> benchmark until the combined runtime reaches 600 ms (to smooth out noise). Beyond that, you can specify a number of iterations; the runner
>> then repeats the whole process that many times so you get multiple averaged runs. The graph shows the execution times split between running code (gray),
>> incremental GCs (yellow), and full GCs (red). There are popups, and you can scroll to zoom in and out. There is also a history of benchmark runs
>> stored on the class side of each benchmark class for later reference.
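To make that concrete, here is roughly what a benchmark might look like under the conventions Tim describes. The class name, selectors, and workload below are made up for illustration; only the Benchmark superclass, the "bench" prefix, and setUp/tearDown come from his description:

    Benchmark subclass: #SortBenchmark
        instanceVariableNames: 'data'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'MyBenchmarks'

    SortBenchmark >> setUp
        "Build the same input before every run."
        data := (1 to: 10000) asArray shuffled

    SortBenchmark >> benchSort
        "Any method whose name starts with 'bench' is picked up as a benchmark.
        Sort a copy so the input stays unchanged across re-executions."
        data copy sort

    SortBenchmark >> tearDown
        "Drop the input between runs."
        data := nil
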
>> The code currently lives here: http://www.hpi.uni-potsdam.de/hirschfeld/squeaksource/BenchmarkRunner
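And the Autosize behaviour described above (re-run until the combined time passes 600 ms, then average) boils down to something like the following workspace snippet. This is just the idea in a few lines, not the actual runner code from the package:

    | bench iterations totalMs |
    bench := SortBenchmark new.
    bench setUp.
    iterations := 0.
    totalMs := 0.
    "Keep re-running the benchmark until 600 ms of combined runtime have accumulated."
    [totalMs < 600] whileTrue: [
        totalMs := totalMs + (Time millisecondsToRun: [bench benchSort]).
        iterations := iterations + 1].
    bench tearDown.
    Transcript show: 'avg per run: ', (totalMs / iterations) asFloat printString, ' ms'; cr
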
> 
> The link seems to be broken.
> 

In what way?
It works from here :)

> Levente
> 
>> Considering we discuss benchmark results here every so often, I think it would be useful to share a common execution framework for them.
>> cheers,
>> Tim
