gettimeofday() revisited
Cees de Groot
cg at cdegroot.com
Mon May 13 22:18:23 UTC 2002
John M McIntosh <johnmci at smalltalkconsulting.com> said:
>Cees have you tried my [ENH] relinquishProcessorForMicroseconds:
>change? That might improve the CPU usage.
>
No - thanks, I'll browse for it.
>Also one of the solutions offered for unix is to use the ITIMER
>support to have a sig alarm every 1/50 of a sec to update the low
>resolution clock that is used in the primitive dispatch logic. That
>isn't turned on by default, for a reason I've not heard yet.
>
Neither have I. I have been using an ITIMER-based VM for the last couple
of weeks, and - as I said before - it seems snappier. Also, the only
thing that really breaks is profiling, but how many people use it? OTOH,
how many people run Squeak in user mode Linux?
I still have to install the itimer VM on the UML environment and see how
much better it is. Didn't have time for that yesterday (nor today, nor
tomorrow...) and I mainly wanted to falsify the statement that calling
gettimeofday() x-thousand times per second doesn't impact system load ;-)
--
Cees de Groot http://www.cdegroot.com <cg at cdegroot.com>
GnuPG 1024D/E0989E8B 0016 F679 F38D 5946 4ECD 1986 F303 937F E098 9E8B