If anyone still thinks that calling gettimeofday() several thousand times per second is a good idea, here is some top output:
Cees have you tried my [ENH] relinquishProcessorForMicroseconds: change? That might improve the CPU usage.
Also, one of the solutions offered for Unix is to use the ITIMER support to deliver a SIGALRM every 1/50 of a second to update the low-resolution clock that is used in the primitive dispatch logic. That isn't turned on by default, for a reason I've not heard yet.
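The mechanism is nothing exotic: setitimer() plus a SIGALRM handler that bumps a millisecond counter the dispatch loop can read without making a system call. A minimal sketch (names are made up here, this is not the actual VM code):

#include <signal.h>
#include <sys/time.h>

/* Illustrative only: a low-res millisecond counter driven by ITIMER_REAL. */
static volatile unsigned long lowResMSecs = 0;

static void alarmHandler(int sig)
{
  (void)sig;
  lowResMSecs += 20;                 /* one tick = 1/50 sec = 20 ms */
}

static void startLowResClock(void)
{
  struct sigaction sa;
  struct itimerval it;

  sa.sa_handler = alarmHandler;
  sigemptyset(&sa.sa_mask);
  sa.sa_flags = SA_RESTART;          /* ask the OS to restart interrupted syscalls */
  sigaction(SIGALRM, &sa, 0);

  it.it_interval.tv_sec  = 0;
  it.it_interval.tv_usec = 20000;    /* fire every 20 ms */
  it.it_value = it.it_interval;
  setitimer(ITIMER_REAL, &it, 0);
}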
On the Mac I did try that, but I had some issues with profile sampling, so I migrated to a pthread implementation where we wait for 16 milliseconds on a semaphore. Accuracy isn't a big concern here; rather, we want to know whether a large number of milliseconds has passed, versus a few.
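The shape of it is roughly the following, except that the real thing waits on a semaphore with a timeout so it can be woken early; a plain sleep loop is enough to show the idea (sketch only, names made up):

#include <pthread.h>
#include <time.h>

/* Illustrative only: a heartbeat thread that sleeps ~16 ms per iteration
   and bumps a shared millisecond counter read by the dispatch loop. */
static volatile unsigned long lowResMSecs = 0;

static void *heartbeat(void *arg)
{
  struct timespec ts = { 0, 16 * 1000 * 1000 };   /* 16 ms */
  (void)arg;
  for (;;) {
    nanosleep(&ts, 0);
    lowResMSecs += 16;    /* coarse on purpose: "many ms passed" vs "a few" */
  }
  return 0;
}

static void startHeartbeat(void)
{
  pthread_t tid;
  pthread_create(&tid, 0, heartbeat, 0);
  pthread_detach(tid);
}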
Is it possible for you to try either of these solutions?
John M McIntosh johnmci@smalltalkconsulting.com said:
Cees have you tried my [ENH] relinquishProcessorForMicroseconds: change? That might improve the CPU usage.
No - thanks, I'll browse for it.
Also, one of the solutions offered for Unix is to use the ITIMER support to deliver a SIGALRM every 1/50 of a second to update the low-resolution clock that is used in the primitive dispatch logic. That isn't turned on by default, for a reason I've not heard yet.
Neither do I. I have been using an ITIMER-based VM for the last couple of weeks, and - as I said before - it seems snappier. Also, the only thing that really breaks is profiling, but how many people use it? OTOH, how many people run Squeak in user mode Linux?
I still have to install the itimer VM on the UML environment and see how much better it is. Didn't have time for that yesterday (nor today, nor tomorrow...) and I mainly wanted to falsify the statement that calling gettimeofday() x-thousand times per second doesn't impact system load ;-)
You might want to try a pthread implementation too. I'm not sure about the impact of the SIGALRM on system calls, nor how good the error checking is for that condition throughout the unix VM.
John M McIntosh johnmci@smalltalkconsulting.com said:
You might want to try a pthread implementation too. I'm not sure about the impact of the SIGALRM on system calls, nor how good the error checking is for that condition throughout the unix VM.
I'll probably just solve the problem once and for all instead, by creating a memory-based implementation of gettimeofday(). Shouldn't be too hard, and it's reasonably easy to redirect system calls under Linux to your own implementations (a module in the kernel and a custom lib in userspace). I just have to scrape the rust off my Linux kernel hacking (my last patch to the kernel is probably dated '93 ;-))...
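The kernel-module half is more than I want to sketch in a mail, but the userspace half could be as small as an LD_PRELOAD library whose gettimeofday() copies a struct timeval out of a mapped page that something else keeps current. Purely illustrative; the path and layout are made up, and error handling is omitted:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/time.h>

/* Illustrative only: some other agent (the kernel module, or even a helper
   process) is assumed to keep the mapped page up to date. */
static struct timeval *sharedClock = 0;

static void initSharedClock(void)
{
  int fd = open("/tmp/shared-clock", O_RDONLY);            /* hypothetical path */
  sharedClock = (struct timeval *)mmap(0, sizeof(struct timeval),
                                       PROT_READ, MAP_SHARED, fd, 0);
}

/* Overrides the libc gettimeofday() when loaded via LD_PRELOAD. */
int gettimeofday(struct timeval *tv, void *tz)
{
  (void)tz;
  if (!sharedClock)
    initSharedClock();
  *tv = *sharedClock;      /* no syscall, so no context switch under UML */
  return 0;
}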
On 14 May 2002, Cees de Groot wrote:
Also, one of the solutions offered for Unix is to use the ITIMER support to deliver a SIGALRM every 1/50 of a second to update the low-resolution clock that is used in the primitive dispatch logic. That isn't turned on by default, for a reason I've not heard yet.
Neither do I. I have been using an ITIMER-based VM for the last couple of weeks, and - as I said before - it seems snappier. Also, the only thing that really breaks is profiling, but how many people use it?
Expiration of an interval timer will interrupt a select() whose timeout is nonzero. If you turn on the ITIMER-based low-res clock you might be capping (or extending) the time aioPoll() will wait in select(). This depends on exactly what your particular flavour of Unix leaves in the timeval when select() exits early due to EINTR:

 - If it's updated to reflect the amount of time not slept, then things will work normally.
 - If it isn't modified at all, then select() will wait indefinitely for i/o activity whenever the timeout passed to aioPoll() is larger than the resolution of the low-res clock.
 - If it's modified to reflect the amount of time slept, then you're effectively capping the maximum time spent in aioPoll() (and hence, by transitivity, in ioRelinquishProcessor()) to no more than twice the resolution of the low-res clock.
FWIW, the only thing that IEEE Std 1003.1-2001 (aka Posix) has to say about this is:
On failure, the objects pointed to by the readfds, writefds, and errorfds arguments shall not be modified.
IOW, the value of timeout is undefined after EINTR. sigaction()ing a restart for SIGALRM doesn't help either. According to Posix:
If SA_RESTART has been set for the interrupting signal, it is implementation-defined whether select() restarts or returns with [EINTR].
OTOH, how many people run Squeak in user mode Linux?
OTOH, how many people run Squeak on non-Linux systems?
I suppose it could be made conditional on whether the host is Linux or aioPoll() could be made to cooperate with the low-res clock to recompute the timeval on each EINTR, but all this might become academic soon. In order to do full-duplex sound reliably on OSS (read: Linux) periodic real-time SIGALRMs are an absolute necessity, so (a rather different and slightly more complex version of) the ITIMER-based low-res clock is scheduled to appear Real Soon Now and aioPoll() will be modified as necessary to cope with this. (The resolution will have to be 1/100 sec, not 1/50 sec, but I doubt that's a biggie.)
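One possible shape for that cooperation: remember when the wait started and, on each EINTR, recompute how much of the timeout is left instead of trusting whatever the kernel left in the timeval. A sketch only, not the real aioPoll() (which also manages the fd sets):

#include <errno.h>
#include <sys/select.h>
#include <sys/time.h>

/* Illustrative only: the point is just the timeout bookkeeping around EINTR. */
static long elapsedUSecs(struct timeval *from, struct timeval *to)
{
  return (to->tv_sec - from->tv_sec) * 1000000L + (to->tv_usec - from->tv_usec);
}

int waitForInput(int fd, long microSeconds)
{
  struct timeval start, now, tmo;
  fd_set fds;
  int n;

  gettimeofday(&start, 0);
  for (;;) {
    long remaining;
    gettimeofday(&now, 0);
    remaining = microSeconds - elapsedUSecs(&start, &now);
    if (remaining < 0) remaining = 0;
    tmo.tv_sec  = remaining / 1000000;
    tmo.tv_usec = remaining % 1000000;
    FD_ZERO(&fds);
    FD_SET(fd, &fds);
    n = select(fd + 1, &fds, 0, 0, &tmo);
    if (n >= 0 || errno != EINTR)
      return n;               /* data ready, timed out, or a real error */
    if (remaining == 0)
      return 0;               /* interrupted, but nothing left to wait for */
    /* EINTR with time remaining: go round again with a freshly computed timeout */
  }
}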
system call in user-mode Linux is a context switch, and the result of all the gettimeofdays() is that the processor runs hot
In the meantime, a bag of ice cubes balanced on top of your processor might help. (Tip-of-the-week: It's better if the bag doesn't leak.)
Regards,
Ian
PS: Anyone who has encountered an early-revision Acorn "Beeb" motherboard might fondly remember having to balance bags of ice cubes on top of the video chip during the summer months to achieve a stable display. ;)
PPS: Anyone who has a Unix machine on which the maximum ITIMER_REAL resolution is > 1/100 sec, please tell me now!! (Otherwise be prepared to suffer forever from broken sound in Squeak.)
Ian Piumarta ian.piumarta@inria.fr said:
system call in user-mode Linux is a context switch, and the result of all the gettimeofdays() is that the processor runs hot
In the meantime, a bag of ice cubes balanced on top of your processor might help. (Tip-of-the-week: It's better if the bag doesn't leak.)
Thanks. But running the itimer-based version worked a lot better ;-)
The only thing left to solve is why Squeak kept exiting on the UML box earlier this week...