native threads

Ned Konz ned at squeakland.org
Mon Apr 18 15:25:40 UTC 2005


On Monday 18 April 2005 6:55 am, Ramiro Diaz Trepat wrote:
> I don't think it is a proper attitude to consider such an important
> issue (whether Squeak's threading model will ever use native threads)
> as closed.

> Even if your employer does not use them, I'm sure native threads will
> be needed in some contexts.

I don't think it's a closed issue. It's just that no one has yet felt that 
having native threads is important enough to bother doing all the work 
required to get them. After all, this is something that would probably impact 
everything from the VM up through the image.

And I suppose I should mention that the VMs actually *do* use native threads 
(on some platforms) for convenience; it's just that the Squeak interpreter 
only uses a single thread.

Native threads at the VM level can ease interfacing with external libraries.

> The most judicious answer (I think it was from Göran) suggested that
> the best model would probably be a pool of native threads with a
> larger pool of green threads running on them.

Best for what?

> And if you remember, I reacted because I did not like the idea of
> sprinkling my code with Processor yield statements. Then, Ned and
> others clarified that I should never ever write that statement.

I think that my feeling is more that "you shouldn't really need to call yield 
in a 'real world' application". Your toy example (two tight loops filling a 
single shared queue) is not a model that would often be useful in real code.
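
Roughly this sort of thing (my own sketch, not your exact code):

    | queue |
    queue := SharedQueue new.
    [1 to: 100 do: [:i | queue nextPut: i]] fork.
    [101 to: 200 do: [:i | queue nextPut: i]] fork.
    "Both processes run at the same priority and never block or yield,
    so each will typically finish its entire loop before the other
    gets a turn; the queue is filled in two contiguous runs."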

And your expectation (that the filling of the queue would be interleaved 
somehow) was out of line with the way things actually worked.

In fact, even if we *had* native threads, you'd have no guarantee that the 
filling of the queue wouldn't happen the way that you saw it happen in 
Squeak. This is because nothing in the original code was acting to force that 
interleaving. A typical OS-level timeslicing multitasker will let a single 
thread run for a slice before preempting it; in your case, one of the threads 
could easily run for long enough to write all of its results to the queue 
before being preempted.

If you have an application that *requires* strict interleaving, then the best 
way to do it would probably be to make a subclass of SharedQueue that does a 
yield after each object is written to it. That way, the yield would not have 
to be in the individual worker processes. But this is a response to a special 
need, not something that needs to be provided more generally.
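
Sketched very roughly (untested; the class name and category are just 
for illustration):

    SharedQueue subclass: #YieldingSharedQueue
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Collections-Example'

    YieldingSharedQueue>>nextPut: anObject
        "Add anObject to the queue, then give other processes at the
        same priority a chance to run."
        | value |
        value := super nextPut: anObject.
        Processor yield.
        ^value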

> But I
> think I got confused because the only way to get that famous
> SharedQueue filled in interleaved order was to include the yields in
> the blocks or else... rewrite the scheduler! :)

I think we've pointed out that there are other alternatives.

I showed that having a higher-priority process would force timeslicing by 
preempting lower-priority processes.
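
That trick looks something like this (the delay length and priority are 
arbitrary):

    "Fork a process at a higher priority that wakes up periodically.
    Each time its delay expires it preempts whatever lower-priority
    process is running, and a preempted process goes to the back of
    its run queue, so processes at the same priority end up sharing
    the processor."
    [[true] whileTrue: [(Delay forMilliseconds: 50) wait]]
        forkAt: Processor userInterruptPriority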

And my suggestion above (of adding a yield inside the SharedQueue) would also 
do what you want.

-- 
Ned Konz
http://bike-nomad.com/squeak/


