On Fri, Mar 6, 2015 at 10:47 PM, Craig Latta email@example.com wrote:
Apologies, my newsreader's thread database got trashed, and I
missed the responses to my previous message until now.
John McIntosh writes:
Craig so how does using pthread_cond_timedwait affect socket processing?
It makes it actually work well. :) This was the whole point of
using pthread_cond_timedwait. Please read the manpage; the call waits until either a condition is met (hence the "cond") or a timeout elapses.
In the Flow virtual machine plugin, I have a
synchronizedSignalSemaphoreWithIndex function that calls the usual signalSemaphoreWithIndex provided by the virtual machine, and also sets the activity condition that the relinquish primitive cares about. The host threads that service external I/O requests from primitives use synchronizedSignalSemaphoreWithIndex when signalling the semaphores on which Smalltalk-level code is waiting. This includes not only the semaphores for reading and writing sockets, but also those for entirely different external resources, like MIDI ports.
So you get a generalized scheme which is not tied to the arcana of
any particular kind of external resource, and it works the same way on any platform which supports the POSIX API (which now is all the Unix-ish ones). This has seemed the obvious way to go for over ten years now.
Until I implemented this scheme, remote messaging throughput (and
MIDI throughput) was horrible. Believe me, I tried all the other schemes that everyone has mentioned in the Squeak community and its descendants since 1996, and none of them were anything better than deeply embarrassing.
From the Flow plugin, check out flow.c, which implements
synchronizedSignalSemaphoreWithIndex, the activity condition, and the relinquish primitive, and ip.c which creates host threads to do background work for external resource primitives and uses synchronizedSignalSemaphoreWithIndex to coordinate with the Smalltalk-level code and the relinquish primitive.
It's so frustrating and weird that we're still talking about this.
The promise of nanosleep was to wake up if an interrupt arrived, say, on a socket. (Mind, I never actually confirmed this was the case; complete hearsay...)
Right, nanosleep promises this and doesn't deliver on MacOS, so I
say forget it. pthread_cond_timedwait works as advertised on MacOS and Linux (all distros).
+1. What [John] said.
...except John admitted himself that he hadn't verified his
suggestion, and you both assumed for some reason that I didn't have the same goals in mind.
The problem with pthread_cond_timedwait, or any other merely delaying call...
But pthread_cond_timedwait is *not* a "merely delaying call". It
does exactly what we want (wait until *either* a condition is met or a timeout elapses), and it actually works, and the code is the same across POSIX platforms.
What you go on to say is based on a false premise.
...is that, unless all file descriptors have been set up to send signals on read/writability and unless the blocking call is interruptible, the call may block for as long as it is asked, rather than until either that timeout or the read/writability of the file descriptor.
In the scheme I described above, we can do what we need without
using formal Unix signals at all (happily avoiding that whole can of worms). The notion of interruptible blocking calls is a red herring generally. All the blocking calls in Flow happen in host threads which are decoupled from any function call a Smalltalk primitive would make.
IMO a better solution here is to a) use epoll or its equivalent, kqueue; these are like select, but the state of which selectors to examine is kept in kernel space, so the set-up overhead is vastly reduced, and b) wait for no longer than the next scheduled delay if one is in progress.
I claim they are not better solutions, because they don't work for
all kinds of external resources (e.g., MIDI ports). Also, I found that "waiting for no longer than the next scheduled delay" is often still far too long, when there is external resource activity before that time comes.
Of course, the VM can do both of these things, and then there's no need for a background [Smalltalk] process at all. Instead, when the VM scheduler finds there's nothing to run it calls epoll or kqueue with either an infinite timeout (if no delay is in progress) or the time until the next delay expiration.
This would still leave us with poor performance when using new
kinds of external resources that don't use selectors. (That is, the external resource access would perform poorly; I'm sure the main virtual machine would scream right along, blissfully oblivious to it all. :)
It strikes me that the VM can have a flag that makes it behave like this so that e.g. some time in the Spur release cycle we can set the flag, nuke the background process and get on with our lives.
If the only external resources in our lives were selector-using
ones, I might agree.
(Sorry for this late response. I discovered it sitting in my Draft folder.)
Finding this an interesting topic, I googled around to learn more and bumped into a few things that may be of interest to some.
* Condition variables performance of boost, Win32, and the C++11 standard library https://codesequoia.wordpress.com/2013/03/27/condition-variables-performance...
* pthread_cond_timedwait behaving differently on different platforms http://blogs.msdn.com/b/cellfish/archive/2009/09/01/pthread-cond-timedwait-b...
* pthread-win32 pthread_cond_timedwait is SLOW? http://comp.programming.threads.narkive.com/fZU5gh0K/pthread-win32-pthread-c...
* Fast Event Processing in SDL (since Pharo is getting SDL) http://gameprogrammer.com/fastevents/fastevents1.html