Berserk idleLoop in event handling?

John M McIntosh johnmci at
Sat Jan 10 08:01:24 UTC 2004

On Jan 9, 2004, at 5:52 PM, Tim Rowledge wrote:

> John M McIntosh <johnmci at> wrote:
>> Tim, have you looked at doing something in the millisecond clock call?
> Not until recently but it will have to go on my list I guess. Platforms
> that (ab)use UI polling to run their entire multitasking regime are a
> real pain in the neck sometimes!
> Don't forget that checkForInterrupts() does attempt to make sure that
> ioProcess() is called at least once every 500mS. We could make that
> more frequent at some cost in apparent benchmark performance. For
> UIequalsOS task switching platforms this is a useful place to ensure at
> least some UI polling.

It's more frequent, please read on.

> Oddly enough the .h comment about ioProcessEvents() says:-
> /* Note: In an event driven architecture, ioProcessEvents is obsolete.
>    It can be implemented as a no-op since the image will check for
>    events in regular intervals. */
> but all four main platforms do actually appear to do event fetching
> within it.
> I note that checkForInterrupts is currently involved in a bizarre
> pseudo-feedback algorithm relating to interruptCheckCounter et al.
> Wouldn't this be better as a platform specific macro or routine? Does
> anyone recall the logic behind it?

I'm responsible for the "bizarre pseudo-feedback algorithm"; it tries  
to ensure the routine gets called no more often than once every 2 or 3  
milliseconds {excluding Windows, I think}. The time between calls is an  
instance var you can set.
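Roughly, the idea is a time gate in front of the expensive work. Here is a minimal sketch under invented names (ioMSecs stands in for the platform millisecond clock; none of these are the actual interp.c variables):

```c
#include <stdbool.h>

/* Hypothetical sketch: gate the expensive part of checkForInterrupts()
   so it runs at most once every minPollPeriodMs milliseconds.
   All names are illustrative, not the actual interp.c code. */

static long fakeClockMs = 0;                 /* stand-in for the platform clock */
static long ioMSecs(void) { return fakeClockMs; }

static long nextPollTick = 0;
static long minPollPeriodMs = 3;             /* the settable time between calls */

static bool shouldDoInterruptWork(void) {
    long now = ioMSecs();
    if (now < nextPollTick)
        return false;                        /* too soon: skip the real work */
    nextPollTick = now + minPollPeriodMs;
    return true;
}

/* Count how many polls actually happen over upToMs simulated milliseconds. */
static int countPolls(int upToMs) {
    nextPollTick = 0;
    int polls = 0;
    for (fakeClockMs = 0; fakeClockMs < upToMs; fakeClockMs++)
        if (shouldDoInterruptWork())
            polls++;
    return polls;
}
```

With a 3ms minimum period, a simulated 10ms run polls only four times (t = 0, 3, 6, 9), no matter how often the counter-driven check fires.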

When I was benchmarking a 200MHz PowerBook back in Nov of 1999, I  
noticed that checkForInterrupts was called very frequently, because  
it's driven by a counter that is modified based on the bytecode stream,  
plus other things. As machines got faster, the number of calls to this  
routine became very high. Since the routine calls the Mac millisecond  
clock, which at the time was expensive, it was evident that as CPU MHz  
increased we could spend a lot of time spinning here. So this change  
was introduced in the summer of 2000, giving us a 2% performance  
improvement.
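The feedback part can be sketched like this; it's my own guess at the shape of the logic, with invented names rather than the real interruptCheckCounter code. If the counter-driven checks arrive faster than the target interval, the counter's reload value grows so fast CPUs don't spin in the clock call; if they arrive too slowly, it shrinks:

```c
/* Hypothetical sketch of feedback on the interrupt-check counter's
   reload value: tune it so checks land roughly every targetMs
   milliseconds regardless of CPU speed. Illustrative names only. */
long adjustReload(long reload, long elapsedMs, long targetMs) {
    if (elapsedMs < targetMs)
        return reload * 2;            /* checks too frequent: raise reload */
    if (elapsedMs > 2 * targetMs && reload > 1000)
        return reload / 2;            /* checks too sparse: lower reload */
    return reload;                    /* close enough: leave it alone */
}
```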

Also I'm responsible for the logic in the Mac and Unix implementations  
of

int ioRelinquishProcessorForMicroseconds(int us)

which considers the next wakeup time to calculate the optimal sleep  
time, and ignores the incoming value. I had tried to move that into the  
Smalltalk code, but (sigh) that was a disaster, if people can remember  
(I hope not): a little bug related to a critical block left the  
lowest-priority process not running, causing the scheduler to invoke  
exit() since nothing was runnable. However, this change results in  
better Delay wait accuracy, and perhaps lower CPU usage when Squeak is  
idle.
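The wakeup-based choice can be sketched like this; the names and the zero-means-no-wakeup convention are assumptions for illustration, not the real platform code:

```c
/* Hypothetical sketch: pick a sleep length from the next Delay wakeup
   time rather than the microseconds the image requested. Names and the
   0-means-no-pending-wakeup convention are assumed for illustration. */
long chooseSleepUsecs(long requestedUs, long nowMs, long nextWakeupMs) {
    if (nextWakeupMs == 0)
        return requestedUs;                    /* no pending Delay */
    long untilWakeupUs = (nextWakeupMs - nowMs) * 1000;
    if (untilWakeupUs <= 0)
        return 0;                              /* wakeup already due */
    return untilWakeupUs;                      /* ignore requestedUs */
}
```

Sleeping only up to the next wakeup is what buys the better Delay accuracy: the VM never oversleeps past a scheduled Delay expiry.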

PS: if I remember correctly, the Windows implementation just forces a  
call to checkForInterrupts every millisecond, based on a timer pop that  
alters the InterruptCheckCounter.
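As I understand it, that amounts to something like the following sketch (illustrative names; I'm assuming a periodic ~1ms timer callback, not quoting the real Windows VM source):

```c
/* Hypothetical sketch of the Windows scheme: a periodic ~1ms timer
   callback zeroes the check counter, so the interpreter's cheap
   per-bytecode counter test falls through into checkForInterrupts(). */
static long interruptCheckCounter = 100000;

static void timerPop(void) {          /* would run from a ~1ms timer */
    interruptCheckCounter = 0;
}

/* Returns 1 when checkForInterrupts() should be called. */
static int counterSaysCheck(void) {
    return --interruptCheckCounter <= 0;
}
```

The appeal is that the interpreter's hot loop only ever pays for a decrement and compare; the timer does the clock watching.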

> tim
> --
> Tim Rowledge, tim at,
> Earth is 98% full...please delete anyone you can.
John M. McIntosh <johnmci at> 1-800-477-2659
Corporate Smalltalk Consulting Ltd.

More information about the Squeak-dev mailing list