[Box-Admins] Re: [Board] Jens Lincke (from HPI) as new Trunk/Etoys developer

Chris Muller asqueaker at gmail.com
Sat Sep 3 22:53:28 UTC 2016

Hi Levente,

>> It looks like it's taking as long as 15 minutes to index a single
>> VMMaker.oscog Version into Magma (see log output below).  With more than
>> 3500
>> Versions in VMMaker, my guess is this could take another few weeks!
>> So, I suggest that we go ahead and cut over to the new image using only
>> SSFilesystem.  I will run a separate Squeak VM process to finish loading
> If you do that, make sure you start the VM with
>         ionice -c3 nice squeak ...
> ionice -c3 will make it only use the disk when no other processes use it.
> nice affects CPU usage in a similar way.

Okay.  Hey, that's really good to know about ionice; I have another system
suffering from an IO hog...

> It would still take months to process everything. I'm not sure it would
> work out. Perhaps we'd better wait for the new server, which will probably
> have better CPUs.

Yes, that makes absolute sense.  We only have until 1 Dec 2016, just 90
days, so we should align this upgrade with that transition.

I would actually like to get started on that right away.  Do we have a
running server yet that I could log into?

The most efficient transfer possible would be directly from the Gandi
server to the Rackspace server; does that sound right?  Is this an
opportunity to test the "restore" portion of our "Backup & Restore
systems"?  IOW, in theory, we should be able to deploy some "backup
artifact" onto Rackspace and simply restore it..?  Do we have such a thing?

> If I understand these logs correctly, then it took almost 82 minutes to
> process 10 mczs. That's about 8 minutes and 12 seconds per mcz instead of
> 15 minutes. But that's still quite a lot. "profile and optimize" :)

I know 15 minutes for one mcz sounds inefficient, but consider that
VMMaker.oscog mcz's each have about 14K MCDefinitions in them, which means
14K queries to a huge collection in Magma to determine if each one is
already present (and adding it if it isn't).  So:

      15 minutes / 14000       "0:00:00:00.064285714  per query"
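The same division, spelled out in Python as a quick sanity check (not part of the original measurement):

```python
# 15 minutes of wall time spread over ~14,000 MCDefinition queries per mcz
seconds = 15 * 60
queries = 14_000
per_query = seconds / queries
print(f"{per_query:.9f} seconds per query")  # 0.064285714 seconds, i.e. ~64 ms
```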

Six hundredths of a second is plenty fast for things like our history
query, but yes, pretty slow for bulk-loading 17GB worth of them.  *There is
so much duplication* between the mcz's...  I thought about some ideas like
trying to process 10 at a time, but finally just settled on simplicity and
patience.  I am open to suggestions, though, even high-level ones...   :)
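One low-effort way to exploit that duplication might be a client-side cache of content hashes, so a definition already seen in an earlier mcz never reaches the database at all.  A minimal sketch in Python, with hypothetical names (`definition_key`, `index_definition`, and `FakeStore` are illustrative stand-ins, not real Magma or Monticello API):

```python
import hashlib

# In-memory set of definition hashes seen so far during the bulk load.
seen_hashes = set()

def definition_key(source_text):
    """Content hash of a single definition's source text."""
    return hashlib.sha1(source_text.encode("utf-8")).hexdigest()

def index_definition(source_text, store):
    """Add a definition to the store unless an identical one was already seen.

    Returns True if the definition was new (one store round trip),
    False if it was a duplicate (no round trip at all)."""
    key = definition_key(source_text)
    if key in seen_hashes:
        return False
    seen_hashes.add(key)
    store.add(key)
    return True

class FakeStore:
    """Stand-in for the expensive database-backed collection."""
    def __init__(self):
        self.added = []
    def add(self, key):
        self.added.append(key)

store = FakeStore()
results = [index_definition(s, store)
           for s in ["A>>foo", "A>>bar", "A>>foo"]]
print(results)            # [True, True, False]
print(len(store.added))   # 2 -- the duplicate never hit the store
```

The trade-off is memory for the hash set versus one query per definition; with the amount of duplication between consecutive VMMaker versions, most definitions would be cache hits.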