<p dir="ltr">Hi David, </p>
<p dir="ltr">Sure, I can easily add the Stack VM once we have it built on every commit from GitHub :) </p>
<p dir="ltr">cheers, <br>
Tim</p>
<div class="gmail_quote">On 11.06.2016, 3:16 AM, "David T. Lewis" <<a href="mailto:lewis@mail.msen.com">lewis@mail.msen.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Fri, Jun 10, 2016 at 06:54:24AM -0700, timfelgentreff wrote:<br>
> I added the VM today, and then realized I hadn't built in a variation point<br>
> for choosing the image - and I only have a script to build a Spur image<br>
> right now. So I regret that this wasn't (as I first assumed) a two-minute<br>
> job. But I'll get to it.<br>
><br>
<br>
Let me know if I can help. Attached is a script for building the VM from<br>
latest SVN sources (the install step is commented out).<br>
<br>
Choosing the image might be harder. ckformat can be used to select a VM<br>
for an image, but the interpreter VM can only run a V3 image (not Spur),<br>
so maybe the comparison is not so meaningful. I have been maintaining a V3<br>
mirror of Squeak trunk (<a href="http://build.squeak.org/job/FollowTrunkOnOldV3Image/" rel="noreferrer" target="_blank">http://build.squeak.org/job/FollowTrunkOnOldV3Image/</a>)<br>
but I will not be maintaining it long term (only a few more months or so).<br>
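The format-based VM selection described above can be sketched roughly as follows. This is an illustrative assumption, not code from the thread: the format numbers and VM names are placeholders, and ckformat is only referenced in a comment.

```python
# Hypothetical sketch: dispatch to a VM based on the image format number
# that the ckformat utility prints. The format numbers and VM names below
# are illustrative assumptions, not taken from this thread.
V3_FORMATS = {6502, 6504, 6505}   # assumed pre-Spur (V3) image formats
SPUR_FORMATS = {6521, 68021}      # assumed 32-/64-bit Spur image formats

def pick_vm(fmt: int) -> str:
    """Return a placeholder VM name for the given image format number."""
    if fmt in V3_FORMATS:
        return "interpreter-vm"   # a V3-capable VM
    if fmt in SPUR_FORMATS:
        return "spur-vm"          # a Spur-capable VM
    raise ValueError(f"unknown image format: {fmt}")

# In a real launcher, fmt would come from running ckformat on the image,
# e.g. int(subprocess.check_output(["ckformat", "squeak.image"])).
print(pick_vm(6521))
```

As Dave notes, the catch is that the classic interpreter VM only handles V3 formats, so a format-based dispatcher cannot make it run Spur images.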
<br>
On balance, maybe it is better to use a Stack interpreter VM as the baseline.<br>
It should be similar enough to the context interpreter VM, and it will run<br>
Spur images. It would have been nice to be able to say that "Cog is X times<br>
faster than the original interpreter VM," but comparing to the<br>
StackInterpreter may be close enough.<br>
<br>
Dave<br>
<br>
<br>
><br>
> marcel.taeumel wrote<br>
> ><br>
> > David T. Lewis wrote<br>
> >> This really looks very useful, and the graphs and trend lines are nice<br>
> >> for<br>
> >> visualization.<br>
> >><br>
> >> Would there be any value in adding an interpreter VM as a baseline to<br>
> >> show<br>
> >> Cog/Spur/RSqueak compared to a non-optimized VM?<br>
> >><br>
> >> Dave<br>
> >><br>
> >><br>
> >>> Hi,<br>
> >>><br>
> >>> I sent around a note earlier about a benchmarking tool that we're using<br>
> >>> internally to track RSqueak/VM performance on each commit. Every time<br>
> >>> Eliot<br>
> >>> releases a new set of Cog VMs, I also manually trigger the system to run<br>
> >>> benchmarks on Cog. (Once we move the proper VM to GitHub, I will set it<br>
> >>> up<br>
> >>> so we test each commit on the main development branch and the release<br>
> >>> branch, too, so we will have very detailed breakdowns.) We wanted to<br>
> >>> share<br>
> >>> this setup and the results with the community.<br>
> >>><br>
> >>> We're collecting results in a Codespeed website (just a frontend to<br>
> >>> present<br>
> >>> the data) which we moved to <a href="http://speed.squeak.org" rel="noreferrer" target="_blank">speed.squeak.org</a> today, and it is also<br>
> >>> linked<br>
> >>> from the <a href="http://squeak.org" rel="noreferrer" target="_blank">squeak.org</a> website (<a href="http://squeak.org/codespeed/" rel="noreferrer" target="_blank">http://squeak.org/codespeed/</a>).<br>
> >>><br>
> >>> We have some info about the setup on the about page:<br>
> >>> <a href="http://speed.squeak.org/about" rel="noreferrer" target="_blank">http://speed.squeak.org/about</a>. On the Changes tab, you can see the most<br>
> >>> recent results per platform and environment, with details about the<br>
> >>> machines on the bottom. Note that we calculate all the statistics on the<br>
> >>> workers themselves and only send the time and std dev, so the results'<br>
> >>> min<br>
> >>> and max values you see on the website are bogus.<br>
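The worker-side statistics described above (report only the mean time and standard deviation, computed locally) could look roughly like this. The function and field names are illustrative, not the actual worker code.

```python
# Illustrative sketch of the worker-side statistics described above:
# run a benchmark several times locally, then report only the mean
# time and the standard deviation. Names here are assumptions, not
# the actual RSqueak-Benchmarking worker code.
import statistics

def summarize(times_ms):
    """Reduce a list of per-run timings (ms) to the two reported values."""
    mean = statistics.mean(times_ms)
    std_dev = statistics.stdev(times_ms) if len(times_ms) > 1 else 0.0
    return {"time": mean, "std_dev": std_dev}

print(summarize([102.0, 98.0, 100.0]))
```

Since only these two numbers reach the frontend, any per-run min/max the website displays has to be synthesized, which is why those values are bogus.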
> >>><br>
> >>> Finally, the code for the workers is also on GitHub (<br>
> >>> <a href="https://github.com/HPI-SWA-Lab/RSqueak-Benchmarking" rel="noreferrer" target="_blank">https://github.com/HPI-SWA-Lab/RSqueak-Benchmarking</a>) and the Benchmarks<br>
> >>> are<br>
> >>> all organized on SqueakSource (<br>
> >>> <a href="http://www.hpi.uni-potsdam.de/hirschfeld/squeaksource/BenchmarkRunner.html" rel="noreferrer" target="_blank">http://www.hpi.uni-potsdam.de/hirschfeld/squeaksource/BenchmarkRunner.html</a>).<br>
> >>> Right now I've just dumped benchmarks from various sources in there;<br>
> >>> that's why you see the same benchmark implemented multiple times in<br>
> >>> different ways, and some micro benchmarks don't make too much sense as<br>
> >>> they are.<br>
> >>> We're happy to get comments, feedback, or updated versions of the<br>
> >>> benchmarking packages. Updating the benchmarking code is easy, and we<br>
> >>> hope<br>
> >>> this setup proves to be useful enough for the community to warrant<br>
> >>> continuously updating and extending the set of benchmarks.<br>
> >>><br>
> >>> We are also planning to add more platforms; the setup should make this<br>
> >>> fairly painless, we just need the dedicated machines. We've been testing<br>
> >>> the standard Cog/Spur VM on an Ubuntu machine, and today we added a<br>
> >>> Raspberry Pi 1 that is still churning through the latest Cog and<br>
> >>> RSqueak/VM<br>
> >>> commits. We'd like to add a Mac and a Windows box, and maybe SqueakJS<br>
> >>> and<br>
> >>> other builds of the Squeak VM, too.<br>
> >>><br>
> >>> Cheers,<br>
> >>> Tim<br>
> >>><br>
> >>><br>
> > +1<br>
> ><br>
> > Best,<br>
> > Marcel<br>
><br>
><br>
><br>
><br>
><br>
> --<br>
> View this message in context: <a href="http://forum.world.st/A-speedcenter-for-Squeak-tp4899946p4900414.html" rel="noreferrer" target="_blank">http://forum.world.st/A-speedcenter-for-Squeak-tp4899946p4900414.html</a><br>
> Sent from the Squeak - Dev mailing list archive at Nabble.com.<br>
<br><br>
<br></blockquote></div>