Replacing Semaphores with Mesa Monitors
Andrew P. Black
black at cse.ogi.edu
Thu May 13 22:35:34 UTC 2004
At 9:22 -0700 2004.5.13, John M McIntosh wrote:
>If one messes greatly with the semaphore signaling in the VM you
>should examine the code carefully. On Mac OS 9, semaphore signaling
>can occur on another thread (Open Transport network processing),
>thus leading to a race condition on whatever is being managed. Right
>now the interpreter thread reads/processes a queue of pending
>semaphore signals, whereas plugins & other threads put signals on a
>different queue. The two queues are then switched when the VM next
>processes pending signals. Although this is not perfect, it has a
>very small window of failure, unlike the original code many years
>back that allowed concurrent unmanaged access between different OS
>processes. Perfect code would require use of a hosting OS mutex or
>semaphore construct.
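As I understand it, the two-queue scheme John describes could be sketched roughly as follows. This is an illustration only: the identifiers are mine, not the VM's, and I have guarded the producer side with a pthread mutex, whereas the real VM apparently tolerates a small unprotected window instead of using a host mutex.

```c
#include <pthread.h>

#define QUEUE_MAX 64

/* Illustrative double-buffered signal queue (hypothetical names).
 * External threads append semaphore indices to the `pending` buffer
 * under a lock; once per cycle the interpreter swaps the buffers and
 * drains the other one without holding the lock. */
typedef struct {
    int items[QUEUE_MAX];
    int count;
} SignalQueue;

static SignalQueue queueA, queueB;
static SignalQueue *pending = &queueA;  /* appended to by other threads */
static SignalQueue *active  = &queueB;  /* drained by the interpreter   */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Called from plugin or network threads when a semaphore needs signaling. */
void signalSemaphoreWithIndex(int index) {
    pthread_mutex_lock(&lock);
    if (pending->count < QUEUE_MAX)
        pending->items[pending->count++] = index;
    pthread_mutex_unlock(&lock);
}

/* Called once per interpreter cycle.  Copies the drained semaphore
 * indices into `out` (where the real VM would signal the Smalltalk
 * semaphores) and returns how many there were. */
int processPendingSignals(int *out) {
    pthread_mutex_lock(&lock);
    SignalQueue *tmp = pending;  /* switch the two queues */
    pending = active;
    active  = tmp;
    pthread_mutex_unlock(&lock);

    int n = active->count;
    for (int i = 0; i < n; i++)
        out[i] = active->items[i];  /* only the interpreter touches `active` */
    active->count = 0;
    return n;
}
```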
I'm aware that this is a dangerous area to mess with, which is why I
was hoping not to. But I don't really understand these comments. Is
there a document that describes threading in the VM in more detail?
My naïve thinking was that Squeak runs in a single OS thread, so
there is no real concurrency and the interpreter main loop is never
preempted. There are signals from the OS, however, indicating things
like the arrival of data on a socket. I imagined that these got
queued up somehow, and that the interpreter main loop would check the
queue each cycle. Alternatively, I suppose the OS signal handlers
could directly execute the semaphore primitives, but then one would
have to do some really hairy work to ensure that the semaphore
primitives remain atomic with respect to the signal handlers. Is this
the problem you are referring to?
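The "queued up somehow" scheme I had in mind is roughly the following sketch (hypothetical names, not the VM's actual code): the handler does only the one thing that is safe in that context, setting a `sig_atomic_t` flag, and the interpreter polls it each cycle and does the real semaphore signaling there, so the non-reentrant primitive never runs inside a handler.

```c
#include <signal.h>

/* Sketch of deferring signal work to the interpreter loop.  The flag is
 * volatile sig_atomic_t, the only type guaranteed safe to write from an
 * async signal handler. */
static volatile sig_atomic_t ioPending = 0;

/* Installed with sigaction() for, e.g., a socket-readable notification. */
static void onIoSignal(int sig) {
    (void)sig;
    ioPending = 1;  /* the only action taken in handler context */
}

/* Called at the top of each interpreter cycle; returns 1 if a pending
 * I/O notification was handled. */
static int checkForIoSignals(void) {
    if (!ioPending)
        return 0;
    ioPending = 0;
    /* ...here the interpreter would signal the socket semaphore... */
    return 1;
}
```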
Andrew
More information about the Squeak-dev mailing list