[Vm-dev] Call for big benchmarks

Eliot Miranda eliot.miranda at gmail.com
Sat Mar 25 01:49:54 UTC 2017


Hi Tim,

On Fri, Mar 24, 2017 at 1:10 AM, Tim Felgentreff <timfelgentreff at gmail.com>
wrote:

>
> Hi Eliot,
>
> the question for me is, how indicative is this workload of real world
> performance? Creating compiled methods may not be something that is highly
> optimized, simply because it doesn't need to be in real applications. One
> would have to be careful about what is being measured, or if the benchmark
> is just measuring how fast we can blow out the caches...
>
> If we're just talking about running parsing and optimizing something, then
> maybe some real world applications are using that, but even then some JSON
> or HTML parsing library, or something implementing e.g. Apache
> mod_rewrite-style rewriting, would be more realistic, I think. Dynamically
> parsing and patching HTML and then pretty-printing or minimizing it seems
> a more common problem.
>
> I know, you're trying to argue that the Opal compiler may show common
> workloads equally well, but we could argue that for some of the Shootout
> benchmarks, too. It's an argument that doesn't seem to convince some people.
>

I don't care in this case.  I'm happy to include those other benchmarks.  I
just want to include one Smalltalk-centric benchmark that exemplifies
excellent Smalltalk style and that uses the language to its full extent.
One is likely to find that in a Smalltalk compiler; it's written by experts
in the language, and I know that the Opal compiler is particularly clean.
The point here is to have a benchmark that shows how well Scorch/Sista
optimizes an exemplary Smalltalk workload, not a generic workload to enable
comparisons between languages.  I'm as interested in seeing how fast
Scorch/Sista is w.r.t. the Interpreter, the StackInterpreter, the V3 Cog VM
and the Spur Cog VM, as in seeing how fast generic benchmarks are compared
to other language implementations.

In any case I wouldn't be interested in installing the methods that the
benchmark compiler generates, only in its source -> compiled method
transformation.  Installing would measure all sorts of JIT-related hacks
that are irrelevant to compute performance.
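The compile-only measurement can be mimicked in any dynamic language. As a
minimal sketch (the source corpus below is hypothetical, standing in for the
benchmark compiler's own method sources), Python's built-in `compile` turns
source into a code object without executing or installing anything:

```python
import time

# Hypothetical corpus standing in for the compiler's own method
# sources; any collection of source strings would do.
sources = [
    "def f(x):\n    return x * x\n",
    "def g(xs):\n    return sum(x + 1 for x in xs)\n",
] * 1000

start = time.perf_counter()
# Only the source -> code-object transformation is timed; the results
# are discarded, so no installation or cache-flushing effects intrude.
code_objects = [compile(src, "<bench>", "exec") for src in sources]
elapsed = time.perf_counter() - start

print(f"compiled {len(code_objects)} sources in {elapsed * 1000:.1f} ms")
```

The point of discarding the results is exactly the one above: the benchmark
stays a measure of compilation work, not of method installation.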

And of course, taking a snapshot of a specific Smalltalk compiler at a
given point in time and sticking with it gives us much less of a moving
target (changes to the collections etc. will still affect things).
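For what it's worth, the mix of string processing (tokenizing/parsing) and
symbolic processing (building and optimizing a tree) that such a compiler
benchmark exercises can be sketched in miniature; the `FoldConstants` pass
below is a hypothetical stand-in for a real optimizer:

```python
import ast
import time

# Toy source exercising both phases: parsing (string work) and
# tree rewriting (symbolic work).
SRC = "def f(x):\n    return (2 + 3) * x + (10 - 4)\n"

class FoldConstants(ast.NodeTransformer):
    """Fold constant binary expressions, e.g. 2 + 3 -> 5."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and isinstance(node.op, (ast.Add, ast.Sub, ast.Mult))):
            ops = {ast.Add: lambda a, b: a + b,
                   ast.Sub: lambda a, b: a - b,
                   ast.Mult: lambda a, b: a * b}
            folded = ast.Constant(
                ops[type(node.op)](node.left.value, node.right.value))
            return ast.copy_location(folded, node)
        return node

start = time.perf_counter()
for _ in range(1000):
    tree = ast.parse(SRC)               # string -> parse tree
    tree = FoldConstants().visit(tree)  # symbolic tree rewriting
    ast.fix_missing_locations(tree)
    compile(tree, "<bench>", "exec")    # tree -> code object
elapsed = time.perf_counter() - start
print(f"parse + fold + compile x1000: {elapsed * 1000:.1f} ms")
```

A real compiler benchmark would of course do far more tree work per method,
but the shape of the workload is the same.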


>
> Eliot Miranda <eliot.miranda at gmail.com> schrieb am Do., 23. März 2017,
> 17:18:
>
>>
>> Hi Tim,
>>
>> On Thu, Mar 23, 2017 at 1:31 AM, Tim Felgentreff <
>> timfelgentreff at gmail.com> wrote:
>>
>>
>>
>> Yes, big benchmarks would be nice. Those on speed.squeak.org or in
>> VMMaker are all somewhat small.
>>
>> Note the Ruby community, for example, has benchmarks such as a NES
>> emulator (optcarrot) that can run for a few thousand frames with predefined
>> input as benchmarks. It's definitely possible.
>>
>> Maybe some of the projects from HPI students could be made to work, there
>> was a Chip8 emulator in Squeak, for example, that seems big enough. Or
>> maybe the DCPU emulator at github.com/fniephaus/BroDCPU without a frame
>> limit would work as a decent CPU bound benchmark.
>>
>>
>> I've discussed with Clément doing something like cloning the Opal
>> compiler, or the Squeak compiler, so that it uses a fixed set of classes
>> that won't change over time, excepting the collections, and using as a
>> benchmark this compiler recompiling all its own methods.  This is a nice
>> mix of string processing (in the tokenizer) and symbolic processing (in the
>> building and optimizing of the parse tree).
>>
>> Cross - dialect could be hard. Pharo and Squeak are fairly easy to do,
>> but with larger programs staying compatible across different dialects is
>> harder.
>>
>>
>> Again, extracting a compiler from its host system would make it possible
>> to maintain a cross-platform version.  It could be left as an exercise to
>> the reader to port it to one's favorite non-Smalltalk dynamic language.
>>
>> tim Rowledge <tim at rowledge.org> schrieb am Mi., 22. März 2017, 21:40:
>>
>>
>>
>> > On 21-03-2017, at 4:53 PM, Javier Pimás <elpochodelagente at gmail.com>
>> wrote:
>> >
>> > Hi everybody! While measuring performance I usually face the problem of
>> assessing performance.
>>
>> Have you tried the benchmarks package - CogBenchmarks - included in the
>> source.squeak.org/VMMaker repository?
>>
>> tim
>> --
>> tim Rowledge; tim at rowledge.org; http://www.rowledge.org/tim
>> Strange OpCodes: BOMB: Burn Out Memory Banks
>>
>>
>>
>>
>>
>>
>> --
>> _,,,^..^,,,_
>> best, Eliot
>>
>
>


-- 
_,,,^..^,,,_
best, Eliot