[cross-posted to e-lang and squeak-e]
Last January 27, Marc Stiegler and I did a presentation on CapTP for the OMG. The presentation is available from the OMG website at http://www.omg.org/cgi-bin/doc?mars/2003-01-13 or from erights.org at http://www.erights.org/talks/captp4omg/index.html .
Most of the talk was spent trying to explain why distributed cryptographic capabilities are the right way to do secure distributed objects. I'd say this part went rather well. We even got some enthusiastic discussion of how the OMG might consider a Corba with cryptographic capabilities *instead* of Corba's current security. I was surprised at how plausible it seemed that this might really happen, but I have no idea what the actual politics might be.
Unfortunately, but not surprisingly, we spent the entire time on the security aspects of CapTP and none of it on the concurrency control. In preparing the talk, I was quite happy when I drew http://www.erights.org/talks/captp4omg/captp4omg/sld019.htm to explain the concurrency control, though I never got that far in the talk. It's nicely complementary to the earlier slide http://www.erights.org/talks/captp4omg/captp4omg/sld012.htm , from the Ode, which I used to explain CapTP's security.
So here's an attempt at a compact explanation of the distributed event loop part of E's concurrency control using this one slide. It doesn't say anything I haven't said elsewhere, but I do think it makes the picture as a whole clearer.
This shows each vat as having an L-shaped data structure recording what remaining computations still need doing in that vat. The green blocks are stack frames, and the vertical tower of green blocks is the stack. As is traditional, the stack is shown upside down, with the top-of-stack at the bottom. The purple blocks are pending deliveries -- a record of the need to deliver a given message to a given receiver. The horizontal row of purple blocks is the pending delivery queue, i.e., the event queue.
Computation in each vat proceeds only at its current top-of-stack.
An immediate call (".") pushes a new green block onto the top of the stack. Since "." can only be performed on a NEAR (intra-vat) reference, the green block gets added to the stack of the calling vat.
An eventual send ("<-") enqueues a new purple block to the back of the event queue of the vat hosting the receiver.
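To make the two operations concrete, here's a minimal Python sketch of the L-shaped structure and the two ways work gets added to it. All the names (`Vat`, `immediate_call`, `eventual_send`) are hypothetical; this models the semantics of E's "." and "<-", not any actual implementation:

```python
from collections import deque

class Vat:
    """Each vat's L-shaped structure: a stack plus a pending-delivery queue."""
    def __init__(self, name):
        self.name = name
        self.stack = []        # green blocks: stack frames; top-of-stack at the end
        self.queue = deque()   # purple blocks: the pending delivery queue

def immediate_call(calling_vat, receiver, verb, args):
    """'.': push a new frame onto the CALLING vat's stack.
    Only legal for NEAR (intra-vat) references."""
    calling_vat.stack.append((receiver, verb, args))

def eventual_send(hosting_vat, receiver, verb, args):
    """'<-': enqueue a pending delivery on the vat HOSTING the receiver,
    which need not be the sender's vat."""
    hosting_vat.queue.append((receiver, verb, args))
```

Note that the asymmetry in the sketch is the whole point: "." is tied to the caller's stack, while "<-" only ever touches the receiver's queue, so no cross-vat operation ever blocks a stack.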
We see that Alice is currently executing in VatA, since VatA's top-of-stack points at her as receiver. In step (1), Alice executes "bob <- foo(carol)". In step (2) we see the result -- a record of the need to deliver "foo(carol)" to Bob is enqueued on VatB's queue, since Bob resides in VatB.
Unshown is step (3), when computation in VatB advances till this record is at the front of the queue, whereupon it becomes the initial stack frame of a new stack, at which point Bob actually receives the message.
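Steps (1) through (3) can be sketched as a turn loop in the same hypothetical Python model (again, the names are mine, not E's): the front pending delivery becomes the initial frame of a fresh stack, which then runs to completion before the next delivery is looked at.

```python
from collections import deque

class Vat:
    """Minimal model of a vat's L-shaped structure."""
    def __init__(self, name):
        self.name = name
        self.stack = []        # green blocks: stack frames
        self.queue = deque()   # purple blocks: pending deliveries

    def take_turn(self, deliver):
        """Step (3): dequeue the front pending delivery, make it the
        initial frame of a new stack, and run that stack to completion
        before the next pending delivery is considered."""
        receiver, verb, args = self.queue.popleft()
        self.stack.append((receiver, verb, args))
        while self.stack:
            frame = self.stack.pop()  # run the current top-of-stack
            deliver(frame)            # user code; may push frames or enqueue sends
```

Running each turn to completion before starting the next is what gives each vat its sequential, non-interleaved view of its own state.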
The remaining major concepts of E's concurrency control are promises and pipelining, for which I use the graphic y'all have already seen http://www.erights.org/talks/captp4omg/captp4omg/sld020.htm (thanks Darius!). And the whole when/catch, __whenMoreResolved thing, which currently has no pictures but needs them.
---------------------------------------- Text by me above is hereby placed in the public domain
Cheers, --MarkM