Benchmarks for Comparing Squeak Builds

Steve Wart swart at home.com
Sun Jan 24 03:58:19 UTC 1999


It would be interesting to see results for Java and VB.

> -----Original Message-----
> From: Greg & Cindy Gritton [mailto:gritton at ibm.net]
> Sent: January 23, 1999 7:23 PM
> To: squeak at cs.uiuc.edu
> Subject: Re: Benchmarks for Comparing Squeak Builds
> 
> 
> At 01:14 PM 1/23/99 -0500, you wrote:
> >I'm messing around with MPW builds of Squeak, using various types and 
> >levels of optimization.  Got the thing running (at last) consistently with 
> >all capabilities (some significant reorganization of interp.c is necessary 
> >to compile under -opt speed; some magic is necessary to get named 
> >primitives internal to the VM to work right), but I'm concerned that the 
> >supposedly "highly optimizing" MrC may not have produced the best results 
> >possible.
> >
> >Which benchmarks, whether or not found in the standard image, have people 
> >found to be most meaningful/relevant to evaluating a Squeak build? 
> >Unsurprisingly, I have found that the several benchmarks yield quite 
> >different results.
> >
> >Is there a benchmark suite people rely upon to evaluate whether a change is 
> >an improvement?  If not in the standard image, where would I find them?
> >
> >
> >
> 
> I have a set of benchmarks that I have used to compare the speed of
> various Smalltalk implementations with C.  These are based on the 
> Stanford Integer benchmarks and were originally put online
> as part of the Self distribution.  They are derived from C benchmarks,
> so they are heavy on array accessing.  I also have the old
> slopstone and smopstone benchmarks.  (Squeak won't run smopstones.)
> 
> I could post or mail the benchmarks if anyone is interested.
> In addition to Squeak I have versions for Smalltalk-V, VisualWorks,
> Smalltalk MT, and C.  I have ported a couple of the benchmarks to
> Python.  The grand total is under 100K of source files.
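> 
> For reference, here is a minimal sketch of how one of these timings could be
> taken in Squeak.  IntegerBenchmark and the selectors below are hypothetical
> placeholders, not the actual benchmark code; Time class>>millisecondsToRun:
> is the standard Squeak timing call:
> 
>     | results |
>     results := Dictionary new.
>     #(bubbleSort quicksort towers) do: [:each |
>         "IntegerBenchmark stands in for whichever ported benchmark class
>          defines these selectors; record the elapsed wall-clock time."
>         results at: each
>             put: (Time millisecondsToRun: [IntegerBenchmark new perform: each])].
>     Transcript show: results printString; cr
> 
> The table below reports elapsed time in milliseconds, so smaller is better.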
> 
> Sincerely,
> 
> Greg Gritton
> 
> 
> 
> Some interesting results for various Smalltalks
> 
> Benchmark          V  V-try2 Dolphin      MT  Squeak  Sq-JIT   Sq2.3   VW3.0     C  Python
>                                    (Time in ms)
> BubbleSort       880     880     902     259    1349    1048    1086     245    21    3479
> BubbleSort2      880     820     805     560    2083    2083    1107     206    21    3479
> IntMM           1310     770     428     135     745     564     934     124    17
> IntMM2          1160    1210     650     197    1655    1634    1458     153    17
> MM               990    1260     741    1590    1497    2341     934     357    22
> MM2             1260     940     660    2090    1721    2765    1289     367    22
> Perm             930     990     701    136*     961     728     815     133    21    2630
> Perm2            720     770     675     239    1273    1207     801     120    21
> Queens           380     380     375      80     632     458     522      71    11
> Queens2          280     280     285      59     468     327     555      67    11
> Quicksort        550     490     384     105     599     485     481      95    11    1555
> Quicksort2       490     500     365    199*     815     853     502      99    11    1555
> Towers           550     550     930     215    1266     996    1010     171    22    3157
> Towers2          330     330     538     118     756     623     557      79    22    3157
> Puzzle          4720    2780    7695    1490   15591   13527   12789    1679    77
> 
> Total          15340   14950   16134    7472   31411   29639   25393    3966   305
> Fixed           1242    1194    1183            2176    2000    1727     277    28
> Float           1571    1515    1388            2646    2745    2138     383    54
> 
> * Did not run correctly
> 
> Versions
>   Smalltalk-V (from Smalltalk Express - version ?)
>   Squeak 2.0, Squeak 2.3
>   Dolphin 2.1, patch level 2
>   Smalltalk MT 1.5
>   VisualWorks NonCommercial 3.0
> 
> All runs are on a 100MHz Cyrix 6x86 PR120 with 512K of L2 cache and 24M of RAM. 
> 
> 
> 




