[squeak-dev] Re: Future examples (Re: Inbox: #future keyword for asynchronous message invocation)

Igor Stasenko siguctua at gmail.com
Fri Dec 18 10:35:51 UTC 2009


2009/12/18 Andreas Raab <andreas.raab at gmx.de>:
> A combined response:
>
> Stephen Pair wrote:
>> What is the scope of this safety with respect to shared state and
>> mutation?  Are you essentially treating the entire image as a vat (in
>> the E sense)?  Is it per morphic project?  Or something else?
>
> It depends on the implementation. Josh's version is deliberately simple to
> get people accustomed to the idea and uses Project>>addDeferredUIMessage:
> for delivery. We'll definitely want to improve on that and introduce
> separate event loops and possibly even vats, but I think going slowly is
> advantageous because it helps people learn.
>
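
I guess the delivery then looks roughly like this - not Josh's actual code,
the FutureProxy class and its receiver ivar are made up here - just to make
the mechanism concrete for myself:

  FutureProxy >> doesNotUnderstand: aMessage
      "deliver the message later, from the Morphic UI process"
      Project current addDeferredUIMessage:
          [receiver
              perform: aMessage selector
              withArguments: aMessage arguments].
      ^nil

Note that the arguments are captured at send time, which matches the early
binding you mention below.
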
> Colin Putney wrote:
>> [regarding the bug] I'll take that as a challenge. :-) I assume that if
>> the mouse button is released then keepScrolling will be set to false.  If
>> the button is then pressed again before the delay wakes up, you'll have two
>> processes doing the scrolling. It would seem that they'd cause scrolling to
>> happen twice as fast.
>
> Good one! But this could conceivably be fixed by implementing
> #finishedScrolling properly (for a certain meaning of "properly"). The bug
> I'm referring to is that in
>
>   self future setValue: (value + scrollDelta + 0.000001 min: 1.0).
>
> we're reading both value and scrollDelta without synchronization. Properly
> written this should look more like:
>
>   [self setValue: (value + scrollDelta + 0.000001 min: 1.0)] future value.
>
> to defer binding value and scrollDelta, but I'm sure you can see why I
> prefer the alternative for illustrating the concept. It does point out an
> interesting property of future messages, namely that arguments are bound
> early by default. This is a matter of choice, but in practice I find it
> advantageous because the cases where you'd want to late-bind them are
> relatively rare.
>
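
A tiny made-up example of that difference (the #report: selector and the
temporary x are hypothetical):

  | x |
  x := 1.
  self future report: x.            "x is read now, in the sending process"
  [self report: x] future value.    "x is read only when the block runs"
  x := 2.

The first send will report 1 no matter when it gets delivered; the block form
reads x inside the receiver's concurrency unit and sees whatever x is by then.
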
> Bert Freudenberg wrote:
>> Do you have experience with using alternate main loops? Like, in a server
>> where you might want to block the UI thread, we still might want to drain
>> the future queue.
>
> Yes. We use EventLoop instances to encapsulate processes and their message
> queues. Those run in separate processes and communicate via future messages.
> There are a *lot* of those in our servers (thus the point about lock-free
> communication; regardless of where you are, you can always just fire a foo
> future bar without worrying about locking etc.).
>
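
Good to know. For a headless image I imagine the principle boils down to
something like the following - names made up, with SharedQueue standing in
for the per-loop message queue:

  | queue |
  queue := SharedQueue new.
  "the event loop: a separate process draining its own future queue"
  [[true] whileTrue: [queue next value]] fork.
  "a future send from any other process then just enqueues a block
   describing the message and returns immediately"
  queue nextPut: [Transcript show: 'Hello World'; cr].

The sender never waits on the receiver's loop, which I take to be the point
about lock-free communication.
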
> Igor Stasenko wrote:
>> Hmm.. I can't see how futures help to deal with concurrency, unless
>> there are some details which I don't see.
>> The semantics of 'future' is a guarantee that the message will be sent
>> eventually, but there is no guarantee that the
>> message order will be preserved, e.g.:
>
> Messages sent from the same unit of concurrency (A) to the same unit of
> concurrency (B) are ordered. As a consequence, a series of
>        self future foo.
>        self future bar.
>        self future baz.
> is always well-ordered (foo first, then bar, then baz). For the example (all
> messages inside a single concurrency unit), the ordering is even stricter
> than that: bar and baz *will* get executed before any future messages sent
> from executing foo.
>
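
Right - with a FIFO queue per receiving unit that falls out naturally. A
made-up #foo just to spell it out:

  foo
      "when this body runs, bar and baz are already queued ahead of
       anything we enqueue from here"
      self future fromFoo

which gives the delivery order foo, bar, baz, fromFoo.
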
>> And if some other code is poking at your data and gets interrupted to
>> handle a future message send, you still might need to use synchronization
>> if both access the same state.
>
> Yes, without stronger encapsulation (which we use for example in Croquet
> islands) there is still the chance to introduce "accidental sharing" (just
> as illustrated in the bug above). However, the main advantage is that in
> practical situations it's *always* safe to just use "self future foo" if you
> don't know whether the code you're in is executed from the same concurrency
> unit or not. Classic example: if you don't know whether the logging code can
> be executed from some other thread, just use "Transcript future show: 'Hello
> World'". This is safe no matter whether you run it from a background process
> or from the Morphic UI process.
>
I don't know much about the implementation details, but
binding every future send to a single process - the Morphic UI process
(which is, of course, how you relax the concurrency problems) -
creates a potential bottleneck, since all future sends use a single
thread for evaluation, which means that:

foo future fooz.
bar future baz.

will be unable to run concurrently, despite the fact that they could
safely run without interfering with each other.
So the given scheme ends up using a single global synchronization semaphore
to handle all future sends, and that's not very good.

But I agree, it's simple; yet it is far from being useful for employing
highly scalable concurrent schemes.
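
If I understand the current delivery correctly, both of those reduce to
roughly:

  Project current addDeferredUIMessage: [foo fooz].
  Project current addDeferredUIMessage: [bar baz].

i.e. two entries in one queue, evaluated one after the other by the UI
process, even though fooz and baz may touch completely unrelated state.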

> Cheers,
>  - Andreas
>
>



-- 
Best regards,
Igor Stasenko AKA sig.


