[Goodie] async message queues (was RE: passing cmds up from the VM, was pending mac vm 3.2.4)

Rob Withers rwithers12 at mediaone.net
Thu Feb 21 13:51:23 UTC 2002


At 11:33 AM 2/20/2002, Lex Spoon wrote:

> > Lex, if you want to use the existing EventSensor, we could just make a new
> > event-type (4) for callback events.  Then again there may be a better way
> > to go.  David told us about having async message queues, with which you
> > stick in a message (from c-code) and it does some semaphore magic so that
> > squeak would ultimately 'execute' the message.  Aside from the object
> > memory issue, variants of this mechanism should handle your cases below,
> > depending on how we manage the callbacks in the c-code.
>
>Notifying the VM is the easy part.  You could even just not have events
>at all, but a single semaphore that some process is waiting on.  Using
>an event queue is probably nice, though it does seem odd to me to
>intermix "mouse moved" events in the same queue as "incoming method
>call".  Some Smalltalk-level process is just going to pass these events
>in separate directions I would think.

It would have to go through one 'queue' to sync the threads in the VM, but 
not necessarily the EventSensor queue.  We have one VM queue which feeds 
MessageSends into the interpreter (as well as another queue for returns).  
These method calls themselves enqueue MessageSends onto specific 
destinationQueues, which were previously registered.  Each object which 
registers handlers would have its own queue and serialize access to itself 
this way.  This is the whole point of the dispatch queues that I posted.  
Note that if you set a DispatchQueue to #foreground, it uses the 
deferredUIActions queue.  Mind that my DispatchQueues are just food for 
thought, since they are image-based and not the VM machinery.  But it 
shouldn't be too hard to enqueue a

         MessageSend(destQueueOop, #nextPut:, #(
             MessageSend(handlerOop, handlerSelectorOop, #(
                 functionCallObjectOop(funcNameOop, argvOop, argcOop, futureOop)))))

into a reentrantQueue feeding the interpreter.    The VM would send #value 
to the top-level MessageSend.  The futureOop would know how to enqueue the 
result back to the VM and would include a requestId to match with the 
blocked calling thread.  The key to the futureOop is that it is created 
when the threaded call is dispatched into the interpreter, so it 
encapsulates the info needed to match the original request.
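
To make that concrete, here is a rough sketch in plain C of what I imagine 
the glue doing when a threaded call is dispatched into the interpreter.  All 
of the names (sqCallbackRequest, sqFuture, enqueueCallback) are made up for 
illustration; none of this is existing VM code.

/* Sketch only: a suspended-call entry plus the future created at dispatch
   time, so the result can later be matched back to the blocked caller.  */
#include <pthread.h>
#include <stdlib.h>

typedef struct sqFuture {
    long            requestId;   /* matches the result back to this call */
    int             done;        /* set when the result has arrived      */
    void           *result;      /* marshalled return value              */
    pthread_mutex_t lock;
    pthread_cond_t  ready;
} sqFuture;

typedef struct sqCallbackRequest {
    const char *funcName;        /* which registered handler to invoke   */
    void      **argv;            /* already-marshalled arguments         */
    int         argc;
    sqFuture   *future;          /* created here, at dispatch time       */
    struct sqCallbackRequest *next;
} sqCallbackRequest;

static sqCallbackRequest *queueHead, *queueTail;  /* the reentrantQueue   */
static pthread_mutex_t    queueLock = PTHREAD_MUTEX_INITIALIZER;
static long               nextRequestId = 1;

/* Called on the foreign thread: build the request, enqueue it, and hand
   the future back so the caller can block on it until Squeak replies.   */
sqFuture *enqueueCallback(const char *funcName, void **argv, int argc)
{
    sqCallbackRequest *req = calloc(1, sizeof(*req));
    sqFuture          *fut = calloc(1, sizeof(*fut));

    pthread_mutex_init(&fut->lock, NULL);
    pthread_cond_init(&fut->ready, NULL);
    req->funcName = funcName;
    req->argv     = argv;
    req->argc     = argc;
    req->future   = fut;

    pthread_mutex_lock(&queueLock);
    fut->requestId = nextRequestId++;
    if (queueTail) queueTail->next = req; else queueHead = req;
    queueTail = req;
    pthread_mutex_unlock(&queueLock);

    /* Here the VM would be poked (e.g. via signalSemaphoreWithIndex()) so
       that the image-side pump pops queueHead and builds the nested
       MessageSend shown above.                                           */
    return fut;
}

The VM-side pump that pops queueHead and the error paths are left out; the 
point is only that the future carries the requestId from the moment the 
call is dispatched.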


>But again, it's a small part of the issue.  The remaining problems are:
>
>         1. Finding the right handler for the particular callback (You talked
>a little about this -- sounds good to me!).

I am not sure how to generate C code that is invokable with a variable 
number of arguments, but I seem to recall that there is a way.  For each 
handler registration, we would malloc a functionPtr, which would go through 
some glue code to marshal the arguments as Smalltalk objects.  It would be 
awfully nice to have FixedSpace objects for external structures we don't 
want to copy.  Perhaps we already have this in FFI?
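
For the variable-argument part, the most portable thing I can think of in 
plain C is a registration record plus a variadic glue routine, sketched 
below; making a distinct C-callable function per registration at run time 
probably needs a closure facility such as libffi, or a fixed set of 
precompiled trampolines.  The names (HandlerReg, callbackGlue) are made up, 
and enqueueCallback is the hypothetical function from the sketch above.

/* Sketch only: marshal a variable number of C arguments into a callback
   request, driven by the argument kinds recorded at registration time.  */
#include <stdarg.h>
#include <stdlib.h>

typedef enum { ARG_INT, ARG_DOUBLE, ARG_STRING } ArgKind;

typedef struct HandlerReg {
    const char *funcName;   /* name the image registered the handler under */
    int         argc;
    ArgKind    *argKinds;   /* expected C argument types, in order          */
} HandlerReg;

/* From the earlier sketch: builds a request, enqueues it, returns a future. */
struct sqFuture *enqueueCallback(const char *funcName, void **argv, int argc);

/* Per-callback trampolines with fixed C prototypes would funnel into this
   glue routine, passing their registration record and actual arguments.    */
struct sqFuture *callbackGlue(HandlerReg *reg, ...)
{
    va_list ap;
    void  **argv = calloc(reg->argc, sizeof(void *));
    int     i;

    va_start(ap, reg);
    for (i = 0; i < reg->argc; i++) {
        switch (reg->argKinds[i]) {
        case ARG_INT: {                     /* box the int for the queue */
            int *box = malloc(sizeof(int));
            *box = va_arg(ap, int);
            argv[i] = box;
            break;
        }
        case ARG_DOUBLE: {
            double *box = malloc(sizeof(double));
            *box = va_arg(ap, double);
            argv[i] = box;
            break;
        }
        case ARG_STRING:                    /* copied or pinned later     */
            argv[i] = va_arg(ap, char *);
            break;
        }
    }
    va_end(ap);

    return enqueueCallback(reg->funcName, argv, reg->argc);
}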

>         2. Providing a way to get any parameters into the image.  Probably
>the data is stuffed into a buffer that can then be read by some primitive
>(remembering, though, to read from the *correct* buffer).  Also, there
>is an issue of C -> Smalltalk translation of things like ints and
>strings.

I would personally prefer it if we could instantiate classes in the glue 
code, with a thread-safe instantiateClasssizefill...(), but it may be easier 
to wait until we are in the image.  The advantage of creating them in the 
glue code is that they would then be queued and executed immediately, 
without further shenanigans.  We must try to avoid those shenanigans.
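
If we do wait until we are in the image, the glue only has to stuff the 
arguments into a self-describing buffer that a primitive can decode later, 
which also covers the int/string translation issue in #2.  A sketch, with 
made-up names (TaggedValue, packInt, packString):

/* Sketch only: a tagged buffer entry, so the primitive reading the
   *correct* buffer can translate C ints and strings into Smalltalk
   objects later, inside the image.                                  */
#include <stdlib.h>
#include <string.h>

typedef enum { TV_INT, TV_DOUBLE, TV_STRING } TagKind;

typedef struct TaggedValue {
    TagKind kind;
    union {
        long   i;
        double d;
        struct { char *bytes; size_t length; } s;
    } u;
} TaggedValue;

TaggedValue packInt(long i)
{
    TaggedValue v;
    v.kind = TV_INT;
    v.u.i  = i;
    return v;
}

/* Copy the C string now, on the calling thread, so the caller may free
   the original before the image gets around to reading the buffer.    */
TaggedValue packString(const char *cstr)
{
    TaggedValue v;
    v.kind       = TV_STRING;
    v.u.s.length = strlen(cstr);
    v.u.s.bytes  = malloc(v.u.s.length + 1);
    memcpy(v.u.s.bytes, cstr, v.u.s.length + 1);
    return v;
}

An image-side primitive would then walk these entries and answer, say, a 
SmallInteger or a String per tag.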


>         3. Making it possible to re-enter interpret() .
>
>         4. Adding a way to *return* from interpret().  A complexity is that
>multiple callbacks could be executing simultaneously and then return in
>the wrong order -- thus you need to be able to leave a return in limbo
>while some other callback is still processing.  Also, we might want to
>put in a dummy check that no one returns from the top-level interpret()
>call.  :)
>
>         5. Of course, leaving a way for the "return" from Smalltalk code to
>pass some data back.  This involves a Smalltalk->C translation.  And
>don't forget that this data might get stored in a side buffer, if the
>return from Smalltalk is in limbo.


I am not clear on what you mean by #4.  Why is it an issue if they return 
in a different order than they were called?  One call's return should not 
interfere with another's.

For #3 and #5, odds are I again don't completely understand the problem, 
but the key seems to be the correct function pointer and glue code from #1 
above, along with a future.  The 're-enter' would be a MessageSend enqueued 
into the incomingVMQueue, and the 'return' would be a well-defined 
structure, enqueued into the outgoingVMQueue.
  _____________          _______________________________
 |             |        |  threaded function call glue  |
 |             |        |  (handler registration,       |
 | interpreter |        |   data marshalling,           |
 |             |        |   suspended call pool)        |
 |             | <<---- |  [MessageSendOop queue]       |
 |             | ---->> |  [ResultOop queue]            |
 |_____________|        |_______________________________|

where:
- a MessageSendOop is sent #value in the interpreter, which requeues the 
  inner call onto its handler's queue, and
- a ResultOop holds (reqId, returnType, data) and is unpacked and 
  marshalled before unblocking the calling thread with the value.
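
Sketching the return side in plain C as well (made-up names again: 
ResultRecord, completeFuture, awaitFuture; it assumes the sqFuture from the 
earlier sketch, and the lookup of the future by reqId in the suspended call 
pool is elided):

/* Sketch only: the outgoing side of the diagram.  The VM thread unpacks a
   ResultOop, finds the matching suspended call by reqId, stores the value,
   and wakes the foreign thread blocked on its future.                     */
#include <pthread.h>

typedef struct sqFuture {               /* same shape as the earlier sketch */
    long            requestId;
    int             done;
    void           *result;
    pthread_mutex_t lock;
    pthread_cond_t  ready;
} sqFuture;

typedef struct ResultRecord {
    long  reqId;        /* matches sqFuture.requestId                  */
    int   returnType;   /* tag telling the glue how to unmarshal data  */
    void *data;         /* already unpacked from the ResultOop         */
} ResultRecord;

/* Called on the VM side, after the future has been looked up by reqId. */
void completeFuture(sqFuture *fut, ResultRecord *res)
{
    pthread_mutex_lock(&fut->lock);
    fut->result = res->data;          /* unmarshalling per returnType elided */
    fut->done   = 1;
    pthread_cond_signal(&fut->ready); /* wake the blocked calling thread     */
    pthread_mutex_unlock(&fut->lock);
}

/* Called on the foreign thread that made the original call. */
void *awaitFuture(sqFuture *fut)
{
    pthread_mutex_lock(&fut->lock);
    while (!fut->done)
        pthread_cond_wait(&fut->ready, &fut->lock);
    pthread_mutex_unlock(&fut->lock);
    return fut->result;
}

Which is also why I don't see the ordering in #4 as a problem: each caller 
blocks on its own future, so returns can complete in any order without 
interfering with one another.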


Wouldn't you like to hear David Simmons' take on how this was done in 
SmalltalkAgents, if he's read this far... I would  :)



>Overall, I guess none of these is a *huge* deal, but I can see why
>Andreas would run into time trouble with it.  :)  It would really expand

Yes, absolutely.  It was enough that he did the acceleration support in one 
shot, right?

>the things Squeak can be used for, however, because now it can interact
>with a broader class of existing code.  In fact, you could even write a
>Squeak library, which is kinda neat IMHO.

Like maybe providing direct call support for a mod_squeak?  Although I 
believe it is the Ninja web server which is supposed to perform better (a 
related project of Ninja - Matt Welsh).  Guess what: they do it with many, 
many call queues, to spread the latency around under high volume.  This 
would also help allow for threaded callout, which would be _quite_ useful.  
I don't know how this would all work on a non-threaded platform.

cheers,
Rob



