Berserk idleLoop in event handling?
andreas.raab at gmx.de
Fri Jan 9 02:08:55 UTC 2004
> > No. If you poll for events when asked for new ones (via ioGetNextEvent
> > of the state-based primitive set) the idle loop really is only ensuring
> > we respond in a more timely manner.
> Well, yes, but it can devolve to depending on the delay set by
> EventPollFrequency (which is so poorly named it's funny - it's the
> inverse) mSecs. Currently that is typically 500 mSecs, so it can get as
> bad as a half-second latency just from that. If some other idle loop
> were set up that failed to call the relinquishCPU prim (and we both know
> that someday someone will do that) we would be irritated by such a latency.
You won't ever see that latency. When we see an empty event queue we
stimulate the input semaphore, which forces ioProcess to run right away
(since ioProcess runs at higher priority). IOW, whenever you query for an
event, we make sure that if there is one you get one. Trust me, this stuff
works.
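To make the handshake concrete, here is a toy model in Python (the real code is Squeak Smalltalk; all names here are illustrative, not the actual image or VM selectors). It shows the mechanism described above: when a client asks for an event and the image-side queue is empty, signalling the input semaphore wakes the fetch loop immediately, so the caller never waits out the poll period.

```python
import queue
import threading

class EventSensor:
    """Toy model of the wake-on-query handshake: an empty image-side
    queue stimulates the input semaphore, which wakes the (notionally
    higher-priority) ioProcess to drain the VM's buffer right away."""

    def __init__(self, vm_events):
        self.vm_events = list(vm_events)       # stands in for the VM's buffer
        self.event_queue = queue.Queue()       # image-side event queue
        self.input_semaphore = threading.Semaphore(0)
        threading.Thread(target=self._io_process, daemon=True).start()

    def _io_process(self):
        # Wakes whenever the input semaphore is signalled, then drains
        # the VM buffer into the image-side queue.
        while True:
            self.input_semaphore.acquire()
            while self.vm_events:
                self.event_queue.put(self.vm_events.pop(0))

    def next_event(self):
        if self.event_queue.empty():
            # Empty queue: stimulate the semaphore so ioProcess runs
            # now rather than after the next poll tick.
            self.input_semaphore.release()
        try:
            return self.event_queue.get(timeout=0.1)
        except queue.Empty:
            return None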
> For a start we still have InputSensor and EventSensor rather than a
> single class, yet we always install an EventSensor on startup. There are
> a number of places where a non-nil eventQueue or keyboardBuffer instvar
> is used as some sort of discriminant for whether to behave as an event
> sensor or an old-style sensor - but it looks like an eventQueue is
> always installed on startup. There seem to be two places where events
> might be simulated from older prims. EventPollFrequency is set to 500
> on every event fetch.
> The ioProcessLoop runs too many times if the VM signals the inputSemaphore
> for each event, since it runs to empty the vm queue for each signal. On
> platforms where calling the next-event prim has to poll the OS for new
> events, that can cost a good deal of time. Either we should call
> primGetNextEvent once per signal, or we should flush the signals in the
> ioProcessLoop. Or fudge the vm to only signal once for each batch of
> events.
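The cost being complained about in the quote can be sketched with a small Python model (illustrative only; `poll_os` is a hypothetical stand-in for the platform's "ask the OS for more events" call, not a real Squeak primitive). With one signal per event and a loop that drains to empty on every signal, the first pass fetches everything and every remaining pass just pays for an OS poll:

```python
def run_io_process_loop(vm_events, poll_os):
    """Model of the mismatch: the VM signals once per event, but each
    loop pass drains the whole VM queue, so later passes find it empty
    and only trigger a (possibly expensive) OS poll."""
    signals = len(vm_events)           # one semaphore signal per event
    vm_queue = list(vm_events)
    fetched, os_polls = [], 0
    for _ in range(signals):           # one loop pass per signal...
        while True:                    # ...and each pass runs to empty
            if vm_queue:
                fetched.append(vm_queue.pop(0))
            else:
                os_polls += poll_os()  # extra OS poll on every pass
                break
    return fetched, os_polls
```

With three buffered events this does three OS polls where one would have sufficed; fetching once per signal, or flushing the excess signals, removes the redundant passes.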
Google for "EventSensor polling" or similar. I have at least twice described
in detail what the best (=most efficient, least work and code) behavior
would be for EventSensor given the current VMs.
The short version is: Remove EVERYTHING that relates to buffering (incl.
event queue, ioProcess etc.) and just use the get-next-event prim verbatim.
Period. Since all VMs use sufficient event buffers, there is zero reason to
do it in the image. All of this was done when there were no VMs with event
support and internal buffering and we had to cope with all of the variations.
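The unbuffered design being recommended is almost nothing at all; a Python sketch (again illustrative, with `prim_get_next_event` as a stand-in for the real VM primitive) makes the point that every request simply goes straight to the VM:

```python
class DirectEventSensor:
    """Sketch of the suggestion above: no image-side buffering; every
    request calls the VM's get-next-event primitive verbatim, relying
    entirely on the VM's own event buffer."""

    def __init__(self, prim_get_next_event):
        self.prim_get_next_event = prim_get_next_event

    def next_event(self):
        # Return whatever the primitive reports: the next buffered
        # event, or None when the VM's buffer is empty.
        return self.prim_get_next_event()
```

No queue, no ioProcess, no semaphore bookkeeping; the VM's buffer is the only buffer.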