[BUG][FIX] interrupt driven EventSensor ( could somebody provide detailed review, please? )

Lex Spoon lex at cc.gatech.edu
Fri Aug 1 19:07:42 UTC 2003


[Andreas wrote:]
> The intended use of the input semaphore is for multi-threaded VMs *only*.
> Signalling the input semaphore unless you're running multi-threaded is
> currently pretty pointless. If the Unix-VM does it anyway, then it's a
> problem of the Unix VM not of EventSensor.

[Tim Rowledge]
> I think you missed the thrust of my comment here; it's not that the vm
> shouldn't be signalling inputSemaphore but that signalling it for each
> event is 'wasteful' for some value of 'wasteful'. Since all available
> events are sucked out of the VM for a single decrement of the
> semaphore's signals it is pointless to go round again and again merely
> to use up those signals. I suppose one possible fix would be to alter
> the VM to only set the number of signal to 1 rather than incrementing
> it.

There's an even simpler solution: call initSignals before sucking the
events out of the VM.  Problem solved.  I wish that anyone who really
cares about the number of primitive calls would try this simple and
direct approach before implementing something more complicated.

This pattern applies in general to events coming in from the VM, e.g.
sound and sockets.  You probably do want to poll as many events as
possible per semaphore signal, so that you don't require the VM to send
exactly as many semaphore signals as there are input events.  You also
want to stop polling once the VM starts returning nothing, and wait for
another signal to come in.  The initSignals pattern lets you do both of
these.
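To make the pattern concrete, here is a minimal sketch in Python (rather
than Smalltalk) of the consumer side.  The names init_signals, post_event,
and next_event are illustrative, not Squeak's actual EventSensor API; the
point is only the ordering: wait for one signal, reset the pending signal
count, then drain everything available.

```python
import threading
from collections import deque

class EventSensor:
    """Consumer that drains all pending events per semaphore wait.

    Illustrative sketch of the initSignals pattern; not Squeak's
    real EventSensor class.
    """

    def __init__(self):
        self.semaphore = threading.Semaphore(0)
        self.queue = deque()
        self.lock = threading.Lock()

    # Producer side (the "VM"): it may signal once per event.
    def post_event(self, event):
        with self.lock:
            self.queue.append(event)
        self.semaphore.release()

    def init_signals(self):
        # Reset pending signals to zero, so signals for events we are
        # about to drain don't cause pointless extra wakeups later.
        while self.semaphore.acquire(blocking=False):
            pass

    def next_event(self):
        with self.lock:
            return self.queue.popleft() if self.queue else None

    def wait_for_events(self):
        self.semaphore.acquire()   # block until at least one signal
        self.init_signals()        # then discard the redundant ones
        events = []
        while (e := self.next_event()) is not None:
            events.append(e)       # drain everything available
        return events
```

Note that init_signals runs *before* draining: an event posted after the
reset keeps its signal, so it can never be lost; at worst you get one
spurious wakeup that finds an empty queue and goes back to waiting.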



Incidentally, it is not cheap for an X Windows client to poll for events
when the queue is empty, because checking whether the socket has new
data requires a system call.


Finally, it is disturbing that it would be *wrong* for the Unix VM to
signal the semaphore at the moment it notices an event arriving.  Should
the VM really have to be careful about whether a get-event primitive is
currently executing when the event comes in?  It seems much simpler to
put in the initSignals call and give the VM some slack.  VM code is much
harder to get right than Smalltalk code.



John M McIntosh <johnmci at smalltalkconsulting.com> wrote:
> b) If mouse information is only updated every 17ms or 10ms? does it  
> make sense to read them 10 times per ms?
> (hint as far as I can tell on the mac updates sensor data about every  
> 17ms)
> Tablet PCs (if any exist) do they update every 5ms, every ms?  do we care?
> 

IMHO, if someone is busy-polling the sensor then it is acceptable to
spike the CPU, since that is what they have requested.

On the other hand, scattering Delays all around seems dangerous.  Just
imagine, down the line, the poor sucker (probably one of the people
reading this thread!) who has to undo the delay in only some of the
cases where the method is called.  It sounds like an ever-growing mess
as we try to figure out which calls get the delay and which ones do not.

Hey, didn't MacOS 8 or 9 automatically lower the CPU usage of any
process that was busy-polling for events?  Squeak people hated this.  :)
But that is essentially what a delay in the polling method would
accomplish.

Lex



PS -- synchronization code sucks, doesn't it!
