Peter Smet wrote:
Thanks for pointing that out. Now that you have, I'm curious about the last part. Since an object makes such minimal assumptions about its dependents, why is synchronizing updates so important? A model shouldn't even really know whether it has 0, 1 or more views, so why does it care about their update times? I can see from an extreme viewpoint that a 2-hour update delay could cause problems with views not representing their models correctly. However, if the view contents had the same (asynchronous) timing as mouse and keyboard events, I don't see a problem. You could get situations where the view did not represent the model quite accurately, but since they are meant to be 'loosely coupled' (whatever that means) this should not (in theory) cause devastating effects.
The loose coupling refers to code linkage, not to event ordering or delivery timing. The objective is to be able to evolve the publisher and subscriber separately. It's entirely reasonable for a dependent to expect to receive its events in the order they were generated. If events are reordered before delivery, dependents will break.
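To make the ordering point concrete, here's a minimal Python sketch (the TextView class and insert events are purely illustrative, not from any real framework): a dependent that mirrors its model by replaying incremental edits only stays correct if the edits arrive in the order they were generated.

```python
class TextView:
    """Hypothetical dependent that mirrors its model by replaying edits."""
    def __init__(self):
        self.text = ""

    def update(self, event):
        kind, index, payload = event
        if kind == "insert":
            self.text = self.text[:index] + payload + self.text[index:]

# Edits as generated by the model: "ab", then "X" inserted at index 1.
events = [("insert", 0, "ab"), ("insert", 1, "X")]

# Delivered in order, the view matches the model:
view = TextView()
for e in events:
    view.update(e)
assert view.text == "aXb"

# Reordered delivery silently breaks the dependent: the second edit's
# index was computed against state the view doesn't have yet.
broken = TextView()
for e in reversed(events):
    broken.update(e)
assert broken.text == "abX"  # not "aXb" -- view no longer matches model
```

The publisher and subscriber are still loosely coupled in the code-linkage sense; the ordering guarantee is a separate contract.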
Beyond that, some systems are time-coupled -- the subscriber receives an event, then looks at the publisher for more information. This is pretty common in Smalltalk. If the publisher's state doesn't match the event, the subscriber will get confused. Breaking this constraint will require copying more data when events are generated, so that the subscriber has enough context to do its processing without looking at the delivery-time state of the publisher.
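A minimal Python sketch of that time coupling (all class and method names are hypothetical): a subscriber that pulls state from the publisher at delivery time works under synchronous delivery, gets confused under deferred delivery, and is fixed by copying the state into the event when it's generated.

```python
class Model:
    """Hypothetical publisher that announces changes without a payload."""
    def __init__(self):
        self.value = 0
        self.subscribers = []

    def set_value(self, v):
        self.value = v
        for notify in self.subscribers:
            notify(self)  # "something changed" -- subscriber must look back

# 1. Synchronous delivery: pulling from the publisher works fine.
pulled = []
m = Model()
m.subscribers.append(lambda model: pulled.append(model.value))
m.set_value(1)
m.set_value(2)
assert pulled == [1, 2]

# 2. Deferred delivery: both notifications now see the publisher's
#    final state, not the state that triggered them.
pending = []
m2 = Model()
m2.subscribers.append(lambda model: pending.append(model))
m2.set_value(1)
m2.set_value(2)
late = [model.value for model in pending]  # drained after the fact
assert late == [2, 2]  # no longer matches the changes that were announced

# 3. The fix: copy the relevant state into the event at generation time,
#    so the subscriber never needs the publisher's delivery-time state.
events = []
m3 = Model()
m3.subscribers.append(lambda model: events.append(("changed", model.value)))
m3.set_value(1)
m3.set_value(2)
assert [payload for _, payload in events] == [1, 2]
```

The third variant is exactly the "copy more data when events are generated" cost: correct under any delivery timing, but every event now carries a snapshot.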
Is it worth it? Yes, in a distributed system. In a local image-based system, it's a substantial extra overhead.
Summary: 1. Any subscription system should guarantee event ordering.
2. I'd expect legacy event handling code to break if you shift to asynchronous delivery.
When I get a bit of time I might get Stephen's asynchronous messaging stuff, attempt to plug it into my PostOffice, and see which things stuff up. With exceptions, there doesn't seem to be any way this could be done asynchronously (without serious effects on program reliability), but I will go away and have a think about that too. Maybe if exceptions were given the right priority in the event and thread queues, things could hang together???
You could make it work with enough copying and glue, but why bother? The key question here is: who handles the exception? If a process raises an exception, it's got to stop executing until the exception's been handled. Scoped event handlers are better than global event handlers, so handlers will be created as a process executes.
If I want to activate the handler via an event, I've got to capture the context at the signal point and forward it through the event queue to the handler set up by the faulting process. Why take the detour through the event queue?
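A rough Python sketch of why the detour buys nothing (the handler and queue wiring are purely illustrative, not from any real system): the faulting process has to block until the handler runs either way, so routing the exception through an event queue reaches the same handler with the same context, just with extra machinery.

```python
import queue
import threading

def handler(exc):
    """The scoped handler the faulting process set up."""
    return f"handled {exc}"

# Direct activation: the faulting process just invokes its handler.
direct = handler(ValueError("boom"))

# Event-queue activation: capture the context, post it, and wait.
events = queue.Queue()
results = queue.Queue()

def dispatcher():
    exc = events.get()
    results.put(handler(exc))  # same handler, same context

threading.Thread(target=dispatcher, daemon=True).start()
events.put(ValueError("boom"))
detour = results.get()  # the faulting process blocks here regardless

assert direct == detour  # identical outcome, more moving parts
```

Either way the faulting process is suspended until the handler answers; the queue adds latency and copying without adding any concurrency.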
-dms