Squeak 2.0 for Debian GNU/Linux

Maloney johnm at wdi.disney.com
Thu Jul 9 08:15:58 UTC 1998


Re:
>On 8 Jul 1998, Eric Marsden wrote: 
> 
>> Good news! Have you fixed the problem I have with Squeak eating up 99%
>> of the CPU (at least on Linux/x86 and IRIX)? MacOS and Windoze users
>> might not care much, but it's not very nice on multi-user systems. 
>> 
>> By strace()ing it seems the vm is looping with endless gettimeofday() 
>> calls, and from the vm source it would seem that the ioMsecs is the
>> culprit. Surely this can be avoided? 
> 
>I'm not positive about this, but I believe that Squeak's CPU hogging is a
>result of it being based on a polling architecture, as opposed to being
>event-driven.  In other words, Squeak regularly checks (polls) for
>mouse/keyboard/other events, which chews up the CPU. (please correct me if
>I'm wrong here) 
> 
>I know that VisualWorks switched from polling to an event-driven
>architecture around the time of 2.5.2, and this helped the idle CPU usage
>problem (although I also heard that switching to event-driven caused a lot
>of bugs/problems). 
> 
>Anyway, I'm curious about whether there are any plans for an event-driven
>version of Squeak, and how closely it's related to the CPU usage problem. 


Polling versus event-driven is only part of the story. For
example, you could be watching an animation or simulation
run. In this case, you'd want Squeak to be getting CPU cycles
even though it wasn't receiving input events. So, the real
issue is (a) knowing when the user isn't really using Squeak
and (b) making the Squeak virtual machine give up the CPU
in that case.

On the Mac, we detect when Squeak is not the "foreground"
application", and turn off input polling. It still runs
in the background, but it uses very little of the processor
if it is just idling. (I've measured it.)

There is also a primitive that allows Squeak to give
up the CPU. It is invoked by the Squeak idle thread, and it
essentially says "go to sleep for a while, unless an interrupt
occurs". In a server, the interrupt is typically an incoming
socket connection. However, this mechanism can't come into play
unless all other threads go to sleep and, as you observe, both
Morphic and MVC have polling loops that prevent that from
happening.

There is a very simple quick fix which was quite effective in
the Self version of Morphic (which ran under Unix). The idea
is to add a "governer" to the basic Morphic event loop. The governer
works by setting an upper bound on the number of UI cycles per
second and going to sleep for any extra time. For example, you
might decide that you'd be happy with a maximum of 25 frames per
second. Then, in the event loop you measure the time required
to process any user inputs, run any ongoing animations, and update
the screen. If that takes less than 1/25th of a second, you
wait on a Delay for the remainder of that time slice. This
allows the idle task to run, which relinquishes the CPU to other
Unix processes. Yes, there is still a bit of overhead. But if
you're not actually doing any animation, this overhead is only
a few percent of the CPU. The Unix "sync" process probably takes
more cycles.
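
To make that concrete, here is a rough Smalltalk sketch of such
a governor. Treat governedUILoop and doOneCycle as illustrative
stand-ins for whatever the real Morphic loop actually uses:

	governedUILoop
		"Cap the UI at roughly 25 cycles per second; sleep off the
		 unused part of each time slice on a Delay so the idle task
		 (and hence other Unix processes) can run."
		| msPerCycle start elapsed |
		msPerCycle := 1000 // 25.
		[true] whileTrue:
			[start := Time millisecondClockValue.
			 self doOneCycle.  "inputs, animations, screen update"
			 elapsed := Time millisecondClockValue - start.
			 elapsed < msPerCycle
				ifTrue: [(Delay forMilliseconds: msPerCycle - elapsed) wait]]

(Note that the millisecond clock wraps around eventually; a real
implementation would guard against that.)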

I'm assuming that the Unix Squeak VM supports the primitive that
relinquishes the processor. I think Ian said that it did, but
you can always check the source code to be sure.

If you are willing to help test this on a Unix machine, I can
give you some code to try.

	-- John




