gettimeofday() revisited

Ian Piumarta ian.piumarta at inria.fr
Thu May 23 15:49:03 UTC 2002


On 14 May 2002, Cees de Groot wrote:

> >Also one of the solutions offered for unix is to use the ITIMER 
> >support to have a sig alarm every 1/50 of a sec to update the low 
> >resolution clock that is used in the primitive dispatch logic. That 
> >isn't turned on by default, for a reason I've not heard yet.
> >
> Neither have I. I have been using an ITIMER-based VM for the last couple
> of weeks, and - as I said before - it seems snappier. Also, the only
> thing that really breaks is profiling, but how many people use it?

Expiration of an interval timer will interrupt a select() whose timeout is
nonzero.  If you turn on the ITIMER-based low-res clock you might be
capping (or extending) the time aioPoll() will wait in select().  This
depends on exactly what your particular flavour of Unix leaves in the
timeval when select() exits early due to EINTR.  If it's updated to
reflect the amount of time not slept then things will work normally.  If
it isn't modified at all then select() will wait indefinitely for i/o
activity whenever the timeout passed to aioPoll() is larger than the
resolution of the low-res clock.  If it is modified to reflect the amount
of time slept then you're effectively capping the maximum time spent in
aioPoll() (and hence, by transitivity, in ioRelinquishProcessor()) to be
no more than twice the resolution of the low-res clock.
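
To make the hazard concrete, here's a sketch (the names are made up;
this is not the VM's actual aioPoll() code):

    #include <errno.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* A naive retry-on-EINTR loop around select().  Each SIGALRM from
       the interval timer makes select() return -1 with errno == EINTR,
       and what is then left in tv is up to the implementation. */
    static int naiveWait(int fd, long microSeconds)
    {
      struct timeval tv;
      fd_set fds;
      int n;

      tv.tv_sec  = microSeconds / 1000000;
      tv.tv_usec = microSeconds % 1000000;
      FD_ZERO(&fds);
      FD_SET(fd, &fds);
      do
        n = select(fd + 1, &fds, 0, 0, &tv);
      while (n < 0 && errno == EINTR);
      /* On each retry tv might: hold the time remaining (everything
         works); be untouched (the full timeout restarts on every tick,
         so the loop can block indefinitely); or hold the time already
         slept (each retry waits at most one more tick, capping the
         total wait at roughly two ticks). */
      return n;
    }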

FWIW, the only thing that IEEE Std 1003.1-2001 (aka Posix) has to say
about this is:

	On failure, the objects pointed to by the readfds, writefds,
	and errorfds arguments shall not be modified.

IOW, Posix says nothing about what happens to timeout on failure, so
its value is undefined after EINTR.  sigaction()ing SA_RESTART for
SIGALRM doesn't help either.  According to Posix:

	If SA_RESTART has been set for the interrupting signal, it is
	implementation-defined whether select() restarts or returns with
	[EINTR].
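
For reference, installing the handler that way looks like this (a
sketch only; per the above, whether it actually prevents the EINTR is
implementation-defined, so it buys you nothing portable):

    #include <signal.h>

    static void lowResTick(int signum)
    {
      /* bump the low-res clock here */
    }

    static void installAlarmHandler(void)
    {
      struct sigaction sa;

      sa.sa_handler = lowResTick;
      sigemptyset(&sa.sa_mask);
      sa.sa_flags   = SA_RESTART;   /* select() may restart... or may
                                       still return with EINTR */
      sigaction(SIGALRM, &sa, 0);
    }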

> OTOH, how many people run Squeak in user mode Linux?

OTOH, how many people run Squeak on non-Linux systems?

I suppose it could be made conditional on whether the host is Linux or
aioPoll() could be made to cooperate with the low-res clock to recompute
the timeval on each EINTR, but all this might become academic soon.  In
order to do full-duplex sound reliably on OSS (read: Linux), periodic
real-time SIGALRMs are an absolute necessity, so (a rather different and
slightly more complex version of) the ITIMER-based low-res clock is
scheduled to appear Real Soon Now and aioPoll() will be modified as
necessary to cope with this.  (The resolution will have to be 1/100 sec,
not 1/50 sec, but I doubt that's a biggie.)
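
For the cooperative approach, the portable trick is to ignore whatever
select() leaves in the timeval and recompute the remaining timeout from
gettimeofday() after every EINTR.  Something along these lines (a
sketch with assumed names, not the forthcoming implementation):

    #include <errno.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int selectWithTimeout(int maxFd, fd_set *fds, long uSecs)
    {
      struct timeval start, now, tv;
      long remaining = uSecs;
      int n;

      gettimeofday(&start, 0);
      for (;;)
        {
          tv.tv_sec  = remaining / 1000000;
          tv.tv_usec = remaining % 1000000;
          n = select(maxFd, fds, 0, 0, &tv);
          if (n >= 0 || errno != EINTR)
            return n;               /* i/o ready, timed out, or real error */
          gettimeofday(&now, 0);    /* ignore tv; recompute what's left */
          remaining = uSecs - ((now.tv_sec  - start.tv_sec) * 1000000
                               + (now.tv_usec - start.tv_usec));
          if (remaining <= 0)
            return 0;               /* deadline passed during a signal */
        }
    }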

> system call in user-mode Linux is a context switch, and the result of
> all the gettimeofdays() is that the processor runs hot

In the meantime, a bag of ice cubes balanced on top of your processor
might help.  (Tip-of-the-week: It's better if the bag doesn't leak.)

Regards,

Ian

PS: Anyone who has encountered an early-revision Acorn "Beeb" motherboard
    might fondly remember having to balance bags of ice cubes on top of
    the video chip during the summer months to achieve a stable
    display. ;)

PPS: Anyone who has a Unix machine on which the maximum ITIMER_REAL 
     resolution is > 1/100 sec, please tell me now!!  (Otherwise be
     prepared to suffer forever from broken sound in Squeak.)
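
(A quick-and-dirty way to check: ask ITIMER_REAL for a 1 msec repeating
timer and count how many SIGALRMs actually arrive in one second of
wall-clock time.  Sketch only:)

    #include <stdio.h>
    #include <signal.h>
    #include <sys/time.h>
    #include <unistd.h>

    static volatile sig_atomic_t ticks = 0;

    static void count(int signum) { ++ticks; }

    int main(void)
    {
      struct sigaction sa;
      struct itimerval it;
      struct timeval start, now;

      sa.sa_handler = count;
      sigemptyset(&sa.sa_mask);
      sa.sa_flags = 0;
      sigaction(SIGALRM, &sa, 0);

      it.it_interval.tv_sec  = 0;
      it.it_interval.tv_usec = 1000;   /* ask for 1 msec */
      it.it_value = it.it_interval;
      setitimer(ITIMER_REAL, &it, 0);

      gettimeofday(&start, 0);
      do {
        pause();                       /* wake on each SIGALRM */
        gettimeofday(&now, 0);
      } while (now.tv_sec - start.tv_sec < 1);

      printf("%d ticks in ~1 sec\n", (int)ticks);
      return 0;
    }

If it prints much less than 100, be afraid.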




