I did look at using pthread_delay_np to delay the heartbeat thread; my thought was that if the image is sleeping, why wake up to service the clock, etc.
The outcome was difficult to measure, but one should consider that option too.
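
Something like the following is what I had in mind; a rough sketch only, since pthread_delay_np is non-portable (the HAVE_PTHREAD_DELAY_NP guard, the 2 ms beat, and doHeartbeatWork are all my inventions, not the VM's):

    #include <pthread.h>
    #include <time.h>

    extern void doHeartbeatWork(void);  /* hypothetical: tick the clock, check timers */

    static void *heartbeatLoop(void *arg)
    {
        struct timespec beat = { 0, 2 * 1000 * 1000 };  /* 2 ms heartbeat */
        (void)arg;
        for (;;) {
    #if defined(HAVE_PTHREAD_DELAY_NP)          /* hypothetical config macro */
            pthread_delay_np(&beat);            /* non-portable relative delay */
    #else
            nanosleep(&beat, 0);                /* portable fallback */
    #endif
            doHeartbeatWork();
        }
        return 0;
    }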

On Thu, Feb 12, 2015 at 10:55 AM, Eliot Miranda <eliot.miranda@gmail.com> wrote:
 


On Thu, Feb 12, 2015 at 10:45 AM, John McIntosh <johnmci@smalltalkconsulting.com> wrote:
 
Craig, so how does using pthread_cond_timedwait affect socket processing? The promise of nanosleep was that it would wake up if an interrupt arrived, say on a socket. (Mind you, I never actually confirmed this was the case; complete hearsay...) 

+1.  What he said.  The problem with pthread_cond_timedwait, or any other merely delaying call, is that unless all file descriptors have been set up to send signals on read/writability, and unless the blocking call is interruptible, the call may block for as long as it is asked, rather than only until the timeout or until a file descriptor becomes readable or writable.
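
To make that concrete, here's a minimal sketch (all names are mine, not the VM's) of what a condvar-based relinquish needs before it can wake early: some other thread, e.g. an aio thread that notices a readable descriptor, must signal the condition variable. The timedwait itself won't return merely because a descriptor became ready, and on most systems even a signal handler running won't make it return early.

    #include <pthread.h>
    #include <sys/time.h>

    static pthread_mutex_t relinquishMutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  relinquishCond  = PTHREAD_COND_INITIALIZER;

    /* Sleep for up to usecs microseconds, or until interruptRelinquish runs. */
    void relinquishForMicroseconds(long usecs)
    {
        struct timeval now;
        struct timespec deadline;

        gettimeofday(&now, 0);
        deadline.tv_sec  = now.tv_sec + (now.tv_usec + usecs) / 1000000;
        deadline.tv_nsec = ((now.tv_usec + usecs) % 1000000) * 1000;

        pthread_mutex_lock(&relinquishMutex);
        pthread_cond_timedwait(&relinquishCond, &relinquishMutex, &deadline);
        pthread_mutex_unlock(&relinquishMutex);
    }

    /* Must be called from an I/O thread when a descriptor becomes ready;
       otherwise the wait above always runs to its full deadline. */
    void interruptRelinquish(void)
    {
        pthread_mutex_lock(&relinquishMutex);
        pthread_cond_signal(&relinquishCond);
        pthread_mutex_unlock(&relinquishMutex);
    }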

IMO a better solution here is to a) use epoll or its equivalent kqueue; these are like select, but the set of file descriptors to examine is kept in kernel space, so the per-call set-up overhead is vastly reduced; and b) wait for no longer than the next scheduled delay if one is in progress.
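
Roughly, in Linux terms (kqueue on the BSDs and Mac is analogous; the names here are sketches of mine, not our aio code), the interest set is registered once and only the wait is paid per call:

    #include <sys/epoll.h>

    static int epollFd = -1;   /* created once at startup */

    void aioInit(void)
    {
        epollFd = epoll_create1(0);
    }

    /* Register a descriptor once; the kernel remembers the interest set,
       whereas select() rebuilds its fd sets on every single call.  A real
       implementation would add/remove EPOLLIN/EPOLLOUT per direction. */
    void aioWatch(int fd)
    {
        struct epoll_event ev;
        ev.events = EPOLLIN;
        ev.data.fd = fd;
        epoll_ctl(epollFd, EPOLL_CTL_ADD, fd, &ev);
    }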


Of course, the VM can do both of these things, and then there's no need for a background process at all.  Instead, when the VM scheduler finds there's nothing to run, it calls epoll or kqueue with either an infinite timeout (if no delay is in progress) or the time until the next delay expiration.
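
In sketch form (nextWakeupUsecs and signalSemaphoresForEvents are hypothetical placeholders for whatever the scheduler and aio layer actually provide):

    #include <sys/epoll.h>

    extern int epollFd;                        /* from the sketch above */
    extern long nextWakeupUsecs(void);         /* usecs to next delay expiry, or -1 */
    extern void signalSemaphoresForEvents(struct epoll_event *evts, int n);

    /* Called when the scheduler finds no runnable process. */
    void idleUntilSomethingToDo(void)
    {
        struct epoll_event events[64];
        long usecs = nextWakeupUsecs();
        int timeoutMs = usecs < 0
            ? -1                               /* no delay in progress: block forever */
            : (int)((usecs + 999) / 1000);     /* round up to whole milliseconds */
        int n = epoll_wait(epollFd, events, 64, timeoutMs);
        if (n > 0)
            signalSemaphoresForEvents(events, n);
        /* n == 0 means the timeout fired: the next delay is now due */
    }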

Now, if only there was more time ;-)

It strikes me that the VM could have a flag that makes it behave like this, so that, e.g., some time in the Spur release cycle we can set the flag, nuke the background process, and get on with our lives.



On Thu, Feb 12, 2015 at 2:40 AM, Craig Latta <craig@netjam.org> wrote:


Hoi Norbert--

     In 2003, while implementing remote messaging for what became the
Naiad distributed module system[1], I noticed excessive CPU usage during
idle by Squeak on Mac OS X (and extremely poor remote messaging
performance). I prepared alternate versions of
ioRelinquishProcessorForMicroseconds, comparing:

-    select() (AKA aioSleepForUsecs in Ian's aio API, my starting point)
-    pthread_cond_timedwait()
-    nanosleep()

     pthread_cond_timedwait was the clear winner at the time. I wrote my
own relinquish primitive as part of the Flow external streaming
plugin[2], and I've been using it ever since. Still seems fine. I've
mentioned this before.


     thanks,

-C

[1] http://netjam.org/naiad
[2] http://netjam.org/flow

--
Craig Latta
netjam.org
+31 6 2757 7177 (SMS ok)
+ 1 415 287 3547 (no SMS)




--
===========================================================================
John M. McIntosh <johnmci@smalltalkconsulting.com>
https://www.linkedin.com/in/smalltalk
===========================================================================




--
best,
Eliot




--
===========================================================================
John M. McIntosh <johnmci@smalltalkconsulting.com>
https://www.linkedin.com/in/smalltalk
===========================================================================