At 09:27 PM 2/2/2002, Scott A Crosby wrote:
On Sat, 2 Feb 2002, Rob Withers wrote:
Hi Scott,
It took me a little time to figure some of this out, and none of it was due to that awful link you posted! ;-) When I search the web for this topic, most of what I get back is headache food.
Heh... That was just a teaser of some of the really fucked up stuff that's done with continuations.
Yes, well they seem to be very aggressive little beasties, at least to my mind. Is it just me who is confused about how to use them and apply them effectively? Somehow, I don't think so. :) I read a description of how to convert a function to CPS, and that sorta made sense, but I am not sure how that matches an OO style. OO, being stateful and mutable, puts a damper on using this mechanism at the application level, for normal messaging.
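The CPS conversion itself can be sketched with plain blocks in a workspace. This is just a minimal illustration (not from the thread): a direct-style factorial next to its continuation-passing twin, where the CPS version never returns a value but hands it to a continuation block k.

```smalltalk
"Direct style vs. continuation-passing style, using ordinary blocks.
 In CPS a function never 'returns'; it passes its result to k."
| fact factCPS |
fact _ nil.
fact _ [:n |
	n = 0 ifTrue: [1] ifFalse: [n * (fact value: n - 1)]].

factCPS _ nil.
factCPS _ [:n :k |
	n = 0
		ifTrue: [k value: 1]
		ifFalse: [factCPS value: n - 1 value: [:r | k value: n * r]]].

fact value: 5.  "answers 120"
factCPS value: 5 value: [:r | Transcript show: r printString; cr]  "prints 120"
```

The CPS version makes the "what happens next" explicit as a block, which is exactly what a first-class continuation reifies for you automatically.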
It must be the functional style that is confusing me. I wasn't trying to reimplement the Processor with it, although it looks quite interesting. With the Future, I am attempting to create a point of synchronization for the arrival of a reply from an asynchronous message send, or a notification, and its handler. This synchronization is between several processes (at least two), and can be repeatedly activated. Sometimes I want to be able to specify on which process the handler is executed, while other times I want it to run on a different process. Upon reflection, this is probably not the best place to use continuations. I am not unrolling a loop or rescheduling and context switching processes. Do continuations get used as a semaphore? This is what a Future models.
The trick with doing callCC in Squeak is that we don't have #callCC:, as in your foo example. Instead we create a continuation around a block, not as part of a #callCC: expression. Below is a fixed version of your foo/bar: example.
Someone else mentioned this to me. I'm still learning 'the smalltalk way'. :)
Sure, me too. :) I thought callCC would be doable (Continuation returningTo: thisContext sender sender), but that broke my Squeak!
The key to getting this to work was your example of error handling of objects. With this example, you bail the first time through the setup method (#errorIn:handle:), when the continuation returns nil, as it does the first time. Here was that code:
In both cases, I do return upwards.. There's no way to avoid it. The UI process never 'goes away'... An infinite loop can happen, but that's the exact same as: `[true] whileTrue.'
What does this term, return upwards, mean?
Here is a bare-bones SimpleContinuationFuture, which is in the attached changeset.
Ewww! :)
Please. :) Realize that this is an object which explicitly creates and invokes the continuation. The full ContinuationFuture is closer to what I want, and is even worse!
What you want to do here is to use the fork/exit/yield methods I supplied. Handle the continuations as pseudoprocesses yourself, that way you don't have to put any trust in race-conditions on the processor scheduling.
Also, I'm unsure why you're using continuations at all.. Why not just stuff the aBlock into an instvar 'savedBlock', and do 'savedBlock value' in #value:?
Exactly right. It was an exploration into continuations that doesn't seem to offer any advantage. I will continue using DispatchQueues and blocks.
Cheers, Rob
On Sun, 3 Feb 2002, Rob Withers wrote:
At 09:27 PM 2/2/2002, Scott A Crosby wrote:
On Sat, 2 Feb 2002, Rob Withers wrote:
Hi Scott,
It took me a little time to figure some of this out, and none of it was due to that awful link you posted! ;-) When I search the web for this topic, most of what I get back is headache food.
Heh... That was just a teaser of some of the really fucked up stuff that's done with continuations.
Yes, well they seem to be very aggressive little beasties, at least to my mind. Is it just me who is confused about how to use them and apply them effectively? Somehow, I don't think so. :)
I think they're the kind of thing that takes a few weeks to really know how to apply.
I read a description of how to convert a function to CPS, and that sorta made sense, but I am not sure how that matches an OO style. OO, being stateful and mutable, puts a damper on using this mechanism at the application level, for normal messaging.
Oh, not necessarily; you can do messaging with them too, if you use continuation-based multithreading. Basically, either the producer or the consumer stuffs a continuation of itself into a slot; the other, when it is ready to accept/receive data, checks to see if there's a continuation. If it's the consumer, it wakes up the producer by putting the producer's continuation on the queue. If it's the producer, it wraps the consumer continuation with a new one that passes the data to the consumer continuation, and then adds it to the queue.
Processor enqueueNewContinuation: [:ignore | consumerCont value: theData]
BTW, I did quite a bit of this simple stuff for a class about 3 years ago. The paper I referenced is beyond my level of background knowledge.
It must be the functional style that is confusing me. I wasn't trying to reimplement the Processor with it, although it looks quite interesting. With the Future, I am attempting to create a point of
In a general sense, you have to; you can only have one continuation running at a time, so if you have more than one, you need some notion of a processor queue to stuff the non-running ones into.
Multithreading isn't much other than keeping around suspended continuations and running them later. The problem is that this doesn't integrate too well if it's separate from the existing Squeak threading system; you need to switch over to Squeak Processes wholesale. (Barring one problem: the threading system is concurrent, and the examples I give below *DO* have race conditions.)
You need to at least alter/enhance the semantics.. Something like:
Processor>>killCurrentContinuation
Processor>>enqueueNewContinuation: aContinuation
Where the first call never returns, and the second one enqueues a new continuation of your choice. Finally, if you invoke another continuation explicitly, you die.
Thus, you can reimplement new kinds of blocking by, for example, taking your existing continuation, stuffing a copy into a 'suspendedContext' slot on some lock or other, then killing yourself. Later, whoever is supposed to wake you up pulls you out of the slot and enqueues you.
Note that enqueueNewContinuation: must have the semantics of a block, i.e., you can take a continuation from callCC and wrap it with a block before enqueuing it. I've done that frequently.
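A minimal sketch of that blocking scheme, assuming the killCurrentContinuation/enqueueNewContinuation: semantics above and a callCC-style Continuation (none of which exist in stock Squeak), might look like:

```smalltalk
"Toy lock built on the hypothetical Processor semantics above.
 A blocked pseudoprocess parks its continuation in the single
 'waiting' slot and dies; the releaser re-enqueues it.
 Race conditions are ignored, as noted above."
Object subclass: #ContinuationLock
	instanceVariableNames: 'locked waiting'
	classVariableNames: ''
	poolDictionaries: ''
	category: 'Continuation-Sketches'

ContinuationLock>>acquire
	locked == true ifFalse: [^locked _ true].
	"Busy: stuff our continuation into the slot and kill ourselves."
	[:cont |
		waiting _ cont.
		Processor killCurrentContinuation] callCC.
	"We were re-enqueued by #release; take the lock."
	locked _ true

ContinuationLock>>release
	| cont |
	locked _ false.
	waiting ifNotNil: [
		cont _ waiting.
		waiting _ nil.
		Processor enqueueNewContinuation: cont]
```

Only one waiter fits in the slot; a real version would keep a queue of suspended continuations instead.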
synchronization for the arrival of a reply from an asynchronous message send, or a notification, and its handler. This synchronization is between several processes (at least two), and can be repeatedly activated. Sometimes I want to be able to specify on which process the handler is executed, while other times I want it to run on a different process. Upon reflection, this is probably not the best place to use continuations. I am not unrolling a loop or rescheduling and context switching processes. Do continuations get used as a semaphore? This is what a Future models.
Yep... I hope I've given a few clues for how to do it, assuming Processors have the above semantics.
Foo>>bidirSend: message
	| result |
	"If no consumer is ready, suspend."
	consumerCont ifNil: [
		[:producerCont |
			waitingForConsumer _ producerCont.
			Processor killCurrentContinuation] callCC].
	waitingForConsumer _ nil.
	"A consumer exists."
	result _ [:cont |
		suspendedProducer _ cont.
		Processor enqueueNewContinuation: [:ignore | consumerCont value: message]] callCC.
	suspendedProducer _ nil.
	^result
Foo>>bidirReceive
	| message result |
	"Wake up a waiting producer... if any."
	waitingForConsumer ifNotNil: [
		Processor enqueueNewContinuation: waitingForConsumer].
	"Suspend ourself."
	message _ [:cont |
		consumerCont _ cont.
		Processor killCurrentContinuation] callCC.
	"Ah, the message has arrived."
	"... do something with the message to compute a result ..."
	"Wake up the producer with the result."
	Processor enqueueNewContinuation: [:ignore | suspendedProducer value: result]
---
Note that this is bidirectional (send a message and get a response); it can be simplified to a unidirectional message fairly easily. I do not handle the multiple-producer/multiple-consumer problem. That has a few deeper issues: you want to dispatch messages as soon as possible (not wait for a previous message to complete before the next one), but you need to keep track that the right response gets back to the right continuation.
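The unidirectional simplification might look like this (same hypothetical Processor semantics, same caveats about races; the method names here are invented):

```smalltalk
"One-way channel: uniSend: delivers a message and carries on
 without waiting for a response."
Foo>>uniSend: message
	consumerCont ifNil: [
		"No consumer waiting: park ourselves until one shows up."
		[:producerCont |
			waitingForConsumer _ producerCont.
			Processor killCurrentContinuation] callCC.
		waitingForConsumer _ nil].
	"Hand the message to the consumer's continuation and keep going."
	Processor enqueueNewContinuation: [:ignore | consumerCont value: message].
	consumerCont _ nil

Foo>>uniReceive
	"Wake up a waiting producer, if any, then suspend until a message arrives."
	waitingForConsumer ifNotNil: [
		Processor enqueueNewContinuation: waitingForConsumer].
	^[:cont |
		consumerCont _ cont.
		Processor killCurrentContinuation] callCC
```

Compared to the bidirectional version, the whole suspendedProducer dance disappears because the sender never blocks waiting for a result.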
BTW, if you're interested in these schemes, CML (a package for SML/NJ) uses this type of message-passing to handle synchronization. It's very elegant and neat. In fact, CML has no shared global variables; *ALL* interthread synchronization must be done through unidirectional channels. (If you want a response, you send a pair containing the message and the channel through which you want the response sent.)
I once benchmarked CML with a variant fork-bomb at >40k threads (continuations) spawn/die/sec, with 10,000 of them active concurrently.
The trick with doing callCC in Squeak is that we don't have #callCC:, as in your foo example. Instead we create a continuation around a block, not as part of a #callCC: expression. Below is a fixed version of your foo/bar: example.
Someone else mentioned this to me. I'm still learning 'the smalltalk way'. :)
Sure, me too. :) I thought callCC would be doable (Continuation returningTo: thisContext sender sender), but that broke my Squeak!
Getting your hands on continuations is a slippery slope. Very, very cool, but they can also cause integration problems with legacy-ish systems. For example, the wrapped continuations above: what would be the semantics if the block had a method-return? :)
Nominally, with tailcalls that's not a problem.
The key to getting this to work was your example of error handling of objects. With this example, you bail the first time through the setup method (#errorIn:handle:), when the continuation returns nil, as it does the first time. Here was that code:
In both cases, I do return upwards.. There's no way to avoid it.
What does this term, return upwards, mean?
If you think of the continuation as a stack of 'to do next' things: you can't create a new one, only clone them, so all of them will have a shared suffix. As you return from methods, you move upwards in the stack. There's no way, short of an infinite loop, to avoid eventually getting back to the UI thread.
Please. :) Realize that this is an object which explicitly creates and invokes the continuation. The full ContinuationFuture is closer to what I want, and is even worse!
Heh. :) What're the semantics of a ContinuationFuture?
What you want to do here is to use the fork/exit/yield methods I supplied. Handle the continuations as pseudoprocesses yourself, that way you don't have to put any trust in race-conditions on the processor scheduling.
Also, I'm unsure why you're using continuations at all.. Why not just stuff the aBlock into an instvar 'savedBlock', and do 'savedBlock value' in #value:?
Exactly right. It was an exploration into continuations that doesn't seem to offer any advantage. I will continue using DispatchQueues and blocks.
In this case, I'd probably say that they don't offer much.. The existing Processor appears to be an approximation of what you want out of 'em anyways. And its concurrency causes race conditions when you use continuations explicitly.
If you want full use of Scheme-type continuations, Processor might have to be somewhat retrofitted. Hell, it may even be useful for everyone else... You only yield the CPU at yield points. Typically, there's one implicit yield point at the top of every method, and one at the target of any backwards branch (i.e., any potential loop). At yield points, the running code checks to see if the timer has run out, and yields explicitly then. Explicit yield points may be added elsewhere.
Normally, the timer interrupt sets a flag that is checked at all yield points. Code could disable the timer interrupt from setting that flag to avoid being preempted during any delicate operation.
Special code, like continuation-handling or mutex code, would explicitly disable the timer interrupt. Another option is to have no implicit yield points at all; we don't lose that much, as the image is basically single-threaded anyways.
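A hypothetical shape for such flag-checked yield points (all names invented for illustration; the stock Squeak scheduler does not work this way):

```smalltalk
"Sketch of flag-based cooperative preemption. 'preemptRequested'
 would be set by the timer interrupt; compiled methods and loop
 back-edges would call #yieldPoint."
Processor>>yieldPoint
	interruptsEnabled ifFalse: [^self].
	preemptRequested ifTrue: [
		preemptRequested _ false.
		self yield]

Processor>>uninterruptiblyDo: aBlock
	"Run aBlock without being preempted at yield points, e.g. for
	 continuation-handling or mutex code."
	| saved |
	saved _ interruptsEnabled.
	interruptsEnabled _ false.
	^aBlock ensure: [interruptsEnabled _ saved]
```

The #ensure: guarantees the flag is restored even if aBlock returns non-locally, which matters exactly for the method-return-from-a-block case raised earlier.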
Scott
squeak-dev@lists.squeakfoundation.org