Hi,
I've implemented a SharedQueue on top of Nile which protects against concurrent access. However, #nextPutAll: is not protected and I would like a test that shows it's not.
My current test passes, but it should not.
testUsingNextPutAll
	"This test should not pass. Unfortunately, I'm not skilled enough in process management to make this test fail. In fact, since #nextPutAll: is not protected, the numbers should not be sorted the way they are at the end."
	| queue writingBlock |
	queue := self newQueue.
	writingBlock := [queue nextPutAll: (1 to: 10000)].
	writingBlock fork; fork; fork.
	Processor yield.
	self assert: (queue next: 10000) asArray = (1 to: 10000) asArray.
	self assert: (queue next: 10000) asArray = (1 to: 10000) asArray.
	self assert: (queue next: 10000) asArray = (1 to: 10000) asArray.
	self assert: queue atEnd
I assume it's because when I fork, the work is done before the following fork starts.
Can somebody help me write this test?
On Friday 25 May 2007 10:00 pm, Damien Cassou wrote:
Damien,
I notice two issues with your code. First, if the code in the block runs fast, it is quite possible for one fork to finish its execution before the next one gets scheduled. Interleaving is required only if a process hogs the processor for too long. So it is quite possible for the array to contain three runs of 1 to 10000 in proper sequence. This is not an error.
Secondly, yielding just lets background processes run, but does not guarantee that they have terminated. These processes have to signal the main process waiting in the foreground that they are done, and then you can proceed to test the queue.
Hope this helps .. Subbu
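Subbu's signalling suggestion might be sketched like this (a sketch, assuming the suite's #newQueue helper from Damien's original test; the rest is stock Squeak):

```smalltalk
"Each writer signals a shared semaphore when done; the test then waits
 once per writer instead of hoping a single Processor yield is enough."
| queue done writingBlock |
queue := self newQueue.
done := Semaphore new.
writingBlock := [queue nextPutAll: (1 to: 10000). done signal].
writingBlock fork; fork; fork.
3 timesRepeat: [done wait].
"Only now is it certain that all 30000 elements have been written."
```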
2007/5/26, subbukk subbukk@gmail.com:
Thank you for these comments. That was what I was afraid of. Is it possible to have a test that shows #nextPutAll: is not protected?
Maybe with code simulation you can enforce that there are task switches in between.
Lukas
2007/5/26, Lukas Renggli renggli@gmail.com:
What is that?
On Sat, 26 May 2007 14:12:59 +0200, Damien Cassou wrote:
Let the three forks wait on their own semaphores in a circular manner (but would this show anything except that three processes do indeed wait on their own semaphores in a circular manner?).
Wasn't something similar posted here recently, together with aha's? (Cannot find it; damned buzzword-based search engines ;-)
/Klaus
On Sat, 26 May 2007 14:12:59 +0200, Damien Cassou wrote:
I'm not convinced #nextPutAll: should be atomic. It would mean a consumer cannot start processing queued items before all elements are written. Right now, #nextPutAll: uses #nextPut: which is "protected". Nothing to fix IMHO.
The test to find out whether #nextPutAll: is atomic or not would be to have a higher-priority reader process that reads one element and peeks for the next. If that succeeds, the lower-priority writer process managed to put in more than one element, meaning #nextPutAll: is not atomic. With the current behavior, however, the reader process would be activated as soon as the first element was written and the peek would fail (I mean, answer nil).
- Bert -
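Bert's probe could be sketched along these lines (a sketch, not a tested implementation; it assumes SharedQueue's #peek answers nil when the queue is empty, as Bert describes):

```smalltalk
"Higher-priority reader: blocks on #next, wakes as soon as one element
 arrives, then peeks for a second one."
| queue sawSecondElement |
queue := SharedQueue new.
sawSecondElement := false.
[queue next.
 sawSecondElement := queue peek notNil] forkAt: Processor activePriority + 1.
queue nextPutAll: #(1 2 3).
"If #nextPutAll: were atomic, the reader would wake only after all three
 elements were queued and the peek would succeed. With the current
 per-element #nextPut:, the reader runs right after the first element
 and sawSecondElement stays false."
```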
Hi Bert,
on Sat, 26 May 2007 16:22:30 +0200, you wrote:
I'm not convinced #nextPutAll: should be atomic. It would mean a consumer cannot start processing queued items before all elements are written. Right now, #nextPutAll: uses #nextPut:
In the example Damien used in this thread, and in the implementations of #nextPutAll: in Nile and in the classic WriteStream on String, #nextPut: is not used by #nextPutAll:.
Instead, #nextPutAll: updates one or more instance variables (besides appending the elements of the argument). All of this together must be atomic, otherwise the system will be blown up.
/Klaus
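To illustrate why, here is the rough shape of such a #nextPutAll: (a simplified sketch, not the actual Nile or WriteStream source; the instance-variable names are illustrative):

```smalltalk
nextPutAll: aCollection
	"Two separate updates: copy the elements into the buffer, then move
	 the position. A process switch between the two (or between the size
	 check and the copy) leaves the stream in an inconsistent state
	 unless the whole method runs atomically."
	| newPosition |
	newPosition := position + aCollection size.
	newPosition > writeLimit ifTrue: [self growTo: newPosition].
	collection replaceFrom: position + 1 to: newPosition with: aCollection startingAt: 1.
	position := newPosition
```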
On Sat, 26 May 2007 16:37:44 +0200, I wrote:
Sorry, where I wrote "In the example Damien used in this thread and in the implementation of ...", that should read: "Say Damien used String in his examples ..." The rest stands: in the implementations of #nextPutAll: in Nile and the classic WriteStream on String, #nextPut: is not used by #nextPutAll:. Instead, #nextPutAll: updates one or more instance variables (besides appending the elements of the argument). All of this together must be atomic, otherwise the system will be blown up.
/Klaus
On May 26, 2007, at 16:45 , Klaus D. Witzel wrote:
#nextPutAll: in Nile and the classic WriteStream on String, #nextPut: is not used by #nextPutAll:.
Instead, #nextPutAll: updates at least one or more instance variables (besides appending the elements of the argument). This all together must be atomic, otherwise the system will be blown up.
I thought we were talking about SharedQueue. Both SharedQueue in 3.8 and SharedQueue2 in 3.9 use #nextPut:.
- Bert -
Bert Freudenberg wrote:
I'm not convinced #nextPutAll: should be atomic. It would mean a consumer cannot start processing queued items before all elements are written. Right now, #nextPutAll: uses #nextPut: which is "protected". Nothing to fix IMHO.
- Bert -
I think Damien was trying to prove that nextPutAll: IS NOT atomic.
It can be pre-empted:
- if the process explicitly yields (not possible here: there is no yield call under #nextPutAll:)
- if the process blocks on a Semaphore (which could occur when writing to external streams)
- if a higher-priority process is unblocked.
But I do not remember: would the Processor hand control to a process of the same priority if none of the above conditions happens (round-robin or something like it)?
If not, Damien will have to imagine trickier code that creates one of the three conditions above to demonstrate the behaviour.
Nicolas
Well, Tim and John had mentioned that a given process gets pre-empted after around 250 ms, so maybe the easiest thing is to just ensure the writes take longer, like:
(1 to: 10000) asOrderedCollection do: [ "..."] inBetweenDo: [ "something slow" ]
On May 26, 2007, at 18:17 , J J wrote:
Well, Tim and John had mentioned that a given process gets pre-empted after around 250 ms
You must have misinterpreted that part. A given process continues to run indefinitely until one of the conditions listed by Nicolas below happens.
- Bert -
On May 26, 2007, at 18:16 , nicolas cellier wrote:
It can be pre-empted
- if the process explicitely yield
- if the process is blocked on a Semaphore
- if a higher priority process is unblocked.
Aha, ok, maybe the VM just checks every 250ms to see if a higher priority is ready to run.
But then there probably will be one every time (e.g. UI handling), so after the VM lets the UI handling routines and whatever else run, does it put back the last running process?
On May 26, 2007, at 18:26 , J J wrote:
Aha, ok, maybe the VM just checks every 250ms to see if a higher priority is ready to run.
That would mean you could only have 4 process switches per second which obviously is not true.
But then there probably will be one every time (e.g. UI handling), so after the VM lets the UI handling routines and whatever else run, does it put back the last running process?
Only if there is a single process at that priority. It's as if the process had called #yield voluntarily - the next runnable process of the same priority will be resumed once all higher-priority processes stopped.
At all times, the highest-priority runnable process will be executed. There always is a runnable process - if nothing else, it's the "idle process", which just gives up some processing time to the operating system (if there is one).
- Bert -
Bert Freudenberg wrote:
That would mean you could only have 4 process switches per second which obviously is not true.
Oh, I'm confused again. Normal OSes usually have a 250 ms quantum. I think they said Squeak's was 40 or so.
Only if there is a single process at that priority. It's as if the process had called #yield voluntarily - the next runnable process of the same priority will be resumed once all higher-priority processes stopped.
Yes, much like how modern OSes work. It's just that I was under the impression that once the current process is interrupted, another at the same priority would be given a chance to run.
On May 26, 2007, at 20:31 , J J wrote:
That would mean you could only have 4 process switches per second which obviously is not true.
Oh, I'm confused again. Normal OS'es usually have a 250ms quantum. I think they said Squeak was 40 or so.
Perhaps we're talking past each other. Anyway, this shouldn't matter for the problem at hand.
Only if there is a single process at that priority. It's as if the process had called #yield voluntarily - the next runnable process of the same priority will be resumed once all higher-priority processes stopped.
Yes, much like how modern OS'es work. It's just that I was under the impression that once the current process is interrupted that another at that same priority would be given a chance to run.
Yes, that's what I wrote.
- Bert -
From: Bert Freudenberg bert@freudenbergs.de Reply-To: The general-purpose Squeak developers listsqueak-dev@lists.squeakfoundation.org To: The general-purpose Squeak developers listsqueak-dev@lists.squeakfoundation.org Subject: Re: Multiple processes using #nextPutAll: Date: Sat, 26 May 2007 21:55:10 +0200
Perhaps we're talking past each other. Anyway, this shouldn't matter for the problem at hand.
Or I'm not being clear enough. :)
Yes, much like how modern OS'es work. It's just that I was under the impression that once the current process is interrupted that another at that same priority would be given a chance to run.
Yes, that's what I wrote.
What I meant here is (and why this is relevant for the problem at hand):
If he forks the first one, it runs at some priority. Then he forks the next at the same priority. If the first one takes enough time (presumably around 40 ms), it will get pre-empted by the UI handlers, timer handlers, or something. Now, when the higher-priority processes are finished: if the scheduler goes back to the one it was running before (i.e. the first one that was forked), then yes, you're right that it won't matter. But if it picks another from that list, then he can cause the second process to run while the first is still in the loop. This is what I was trying to say. :)
On May 26, 2007, at 22:00 , J J wrote:
Okay - I was trying to say that it will indeed pick the next process, not the first one.
Btw, if you just #fork, it will run at the same priority as the current process (which will be the UI process if you do this in a workspace). And at least in the default image there is no higher-priority process that loops at 40 ms intervals. There is an "event tickler" at 500 ms (see your process browser).
- Bert -
So Damien just has to fork a process at a higher priority first, one that will repeatedly wake up.
[100 timesRepeat: [(Delay forMilliseconds: 50) wait]] forkAt: etc...
If an individual #nextPutAll: lasts longer than 50 ms, then he will get what he wanted to see. Otherwise, reduce the delay.
But, question: what is the minimum delay (delay resolution)?
Nicolas
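Nicolas' idea, put together with Damien's original test, might look like this (a sketch; #forkAt: and Processor activePriority are standard Squeak, the rest follows the test):

```smalltalk
"A higher-priority 'tickler' wakes every 50 ms. Each time it pre-empts a
 writer and goes back to sleep, the scheduler resumes the next runnable
 writer at the lower priority, so the three #nextPutAll: calls can get
 interleaved."
| queue writingBlock |
queue := self newQueue.
[100 timesRepeat: [(Delay forMilliseconds: 50) wait]]
	forkAt: Processor activePriority + 1.
writingBlock := [queue nextPutAll: (1 to: 10000)].
writingBlock fork; fork; fork.
```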
Hello Damien, Back in 2005 I asked the list why no one was talking about implementing native threads in Squeak. I ranted a little, and got seriously beaten up by some heavyweights of the community. The important thing is that, in spite of the passionate discussion around the subject, VERY illustrative stuff about Squeak's threading model came up in that thread. So if you care to read a little about the problems of native threads and green threads in Squeak and on different platforms as well, my original post is here:
http://lists.squeakfoundation.org/pipermail/squeak-dev/2005-April/090791.htm...
You can probably follow the thread from there. Cheers.
r.
"Ramiro Diaz Trepat" ramirodt@gmail.com writes:
Also, some good wiki pages have emerged after mailing-list discussions. See:
http://wiki.squeak.org/squeak/382
Lex
On 26-May-07, at 1:51 PM, nicolas cellier wrote:
But, question: what is the minimum delay (delay resolution)?
It *tries* to be 1 ms. No guarantees though; a long-running blocking primitive (a complex BitBlt copying and transforming a 10 MP 32 bpp image would probably count, or a large GC run, for example) will almost certainly cause a longer delay. We spent a lot of effort about three years ago trying to get the system to be reasonably reliable wrt the millisecond timer.
If you fork several processes at the same priority as the forker, they will get added to the quiescent queue for that priority and will only get to run when
a) some higher-priority process has pre-empted the scheduler, OR the active process waits on a semaphore, OR otherwise yields
b) they get to the front of the queue.
In practice you will often find that they run sequentially. As mentioned, it is possible to fork a tickler process to stir things up on a regular (or even irregular) basis if you want.
And I'm sure I will have forgotten some detail or other.
tim -- tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim Strange OpCodes: NNI: Neglect Next Instruction
What is that?
c1 := writingBlock asContext reentrant.
c2 := writingBlock asContext reentrant.
c3 := writingBlock asContext reentrant.

100 timesRepeat: [ c1 := c1 step ].
200 timesRepeat: [ c2 := c2 step ].
300 timesRepeat: [ c3 := c3 step ].
Now you have 3 block contexts c1, c2 and c3 that are each suspended inside the execution of #nextPutAll:. Think of having clicked in the debugger 100, 200 and 300 times on the button step. You can inspect these contexts by opening a dedicated inspector:
(Debugger context: c1) openFullNoSuspendLabel: ''
Note: #step is a bit buggy in certain cases, so take care ;-)
Lukas
I assume it's because when I fork, the work is done before the following fork starts.
Can somebody help me writing this test?
I assume that #nextPutAll: relies on #do: to fetch all elements of the collection. So you could define a BlockingCollection, which contains a collection and a semaphore as instance variables. Then, #do: would be:
do: aBlock
	collection do: [:each |
		semaphore wait.
		aBlock value: each.
		semaphore signal]
Alternatively, if it relies on #at:, also implement:
at: index
	| result |
	semaphore wait.
	result := collection at: index.
	semaphore signal.
	^result
Then, in the test, you do
sem := Semaphore new.
writingBlock := [queue nextPutAll: (BlockingCollection on: (1 to: 10000) with: sem)].
sem signal.
HTH, Martin
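Fleshed out, Martin's BlockingCollection might look like this (a sketch; the class name and the #on:with: constructor come from his snippet, while the accessors and category are assumptions):

```smalltalk
Object subclass: #BlockingCollection
	instanceVariableNames: 'collection semaphore'
	classVariableNames: ''
	poolDictionaries: ''
	category: 'Tests-SharedQueue'

BlockingCollection class>>on: aCollection with: aSemaphore
	^self new setCollection: aCollection semaphore: aSemaphore

BlockingCollection>>setCollection: aCollection semaphore: aSemaphore
	collection := aCollection.
	semaphore := aSemaphore

BlockingCollection>>do: aBlock
	"Every element costs a wait/signal round trip on the shared
	 semaphore, giving other writers a chance to be scheduled in the
	 middle of #nextPutAll:."
	collection do: [:each |
		semaphore wait.
		aBlock value: each.
		semaphore signal]

BlockingCollection>>size
	^collection size
```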
Yes, that is a very clever solution. You could tidy this up a little by using:
Array subclass: #YieldingArray
	instanceVariableNames: ''
	classVariableNames: ''
	poolDictionaries: ''
	category: 'Temp'
YieldingArray>>do: aBlock
	"Evaluate aBlock with all the elements. Yield in the middle."
	1 to: self size do: [:i |
		aBlock value: (self at: i).
		Processor yield]
#yield will take care of things without a semaphore due to Squeak's process scheduling rules.
Cheers, - Andreas
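Plugged into Damien's test, the YieldingArray might be used like this (a sketch; #withAll: is the standard Array constructor inherited by the subclass, and the deny: replaces the original assert: since interleaving is now the expected outcome):

```smalltalk
| queue writingBlock |
queue := self newQueue.
writingBlock := [queue nextPutAll: (YieldingArray withAll: (1 to: 10000))].
writingBlock fork; fork; fork.
"With a yield after every element, the three writers interleave, so the
 first 10000 elements read back should no longer be in sorted order."
self deny: (queue next: 10000) asArray = (1 to: 10000) asArray
```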
On 29-May-07, at 12:40 AM, Andreas Raab wrote:
Yes, that is a very clever solution. You could tidy this up a little by using:
Even better, #yield is a very small, fast primitive, so using it would have little chance of significantly affecting the overall performance of such an algorithm. Much better than the rather long-winded version of yield that used to exist.
tim -- tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim Useful random insult:- A gross ignoramus -- 144 times worse than an ordinary ignoramus.