Multi-core CPUs

Peter William Lount peter at smalltalk.org
Sun Oct 21 04:33:39 UTC 2007


Hi Ralph,

It's good to converse again with you. It's been many years.

>>  I've not yet seen any serious discussion of the case for your point of view
>> which bridges the complexity gap in concurrency the way automatic memory
>> management magically does. Please illuminate us with specific and complete
>> details of your proposal for such a breakthrough in concurrency complexity.
>>     
>
> Peter, Jason is not saying that eliminating shared memory will make
> concurrent programming as easy as automatic memory management. 

That's good, because that claim makes no sense given real-world experience 
with systems that don't use shared memory as the basis for their concurrency 
control.


>  What he said is that, just like a system that mostly uses automatic memory
> management might use manual memory management in a few places, so a
> system that mostly uses message passing for concurrency might use
> threads and semaphores in a few places.
>   

Ok, that sounds nice and rosy, but so far that's all it is. Can someone 
please explain, fully and in detail, how it would actually work? Thanks.
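
To make sure we're discussing the same thing, here is the kind of hybrid I 
take Jason to mean, as a throwaway Squeak workspace sketch (the names and 
numbers are mine, purely illustrative): a worker that communicates only 
through SharedQueues, plus one explicitly guarded counter as the rare 
"manual" exception.

    | inbox results counter lock |
    inbox := SharedQueue new.
    results := SharedQueue new.

    "Message passing only: the worker blocks on its own inbox and never
     touches the caller's objects directly. (The forked process is left
     blocked on its inbox afterwards; this is only a sketch.)"
    [ [ results nextPut: inbox next * 2 ] repeat ] fork.
    inbox nextPut: 21.
    Transcript show: results next printString; cr.  "prints '42'"

    "The rare 'manual' spot the hybrid allows for: one shared counter,
     explicitly guarded with a Semaphore."
    counter := 0.
    lock := Semaphore forMutualExclusion.
    1 to: 10 do: [:i | [ lock critical: [ counter := counter + 1 ] ] fork ]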


>>  Making the Squeak VM fully multi-threaded (natively) is going to be a lot
>> of pain and hard to get right. Just ask the Java VM team.
>>
>>
>>  Then either the hard work needs to be done, or the VM needs to be
>> completely rethought.
>>     
>
> What Jason said was that, for any VM design, making the VM fully
> multi-threaded is hard.  It has nothing to do with Squeak or with the
> Squeak VM.
>   

Yes, I'm clear about that. That's why I said that the hard work needs to 
be done by those of us who are more knowledgeable and experienced than 
the typical programmer using Smalltalk. We are supposed to be systems 
people, aren't we? We are supposed to do the hard work so that others 
have an easier time, aren't we? Of course, if we can avoid the hard work 
then I'm all for it. However, when it comes to concurrency control and 
program consistency, the hard work just can't be avoided.

>>  The payback of adding this obsolete (except in the lowest-level
>> cases) method of dealing with threading just isn't going to be worth
>> the pain to implement it.
>>
>>
>>  What are you going on about? What techniques are you saying are obsolete
>> exactly? How are they obsolete?
>>     
>
> He is saying that shared-memory parallel programming is obsolete.  It
> doesn't scale.  By the time we get to thousands of processors (which
> is only a decade away) it won't work at all.  Experience shows that
> it doesn't work very well even now, when hardware can support it,
> because it is just too hard to write correct programs using that model.
>   

So, in your view, two processes sharing a chunk of RAM across their 
protected memory spaces is an obsolete technique?

What about two or N lightweight threads (a.k.a. Smalltalk processes) 
sharing objects within a single memory space? Is that obsolete as well?
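
For concreteness, this is the style I understand to be under attack -- 
two lightweight Squeak processes mutating one collection in a single 
image, kept consistent with a Semaphore (again, just a workspace sketch):

    | shared guard |
    shared := OrderedCollection new.
    guard := Semaphore forMutualExclusion.

    "Two lightweight Squeak processes mutating one object in the same
     image; the Semaphore is what keeps them consistent."
    [ 1 to: 5 do: [:i | guard critical: [ shared add: i ]] ] fork.
    [ 1 to: 5 do: [:i | guard critical: [ shared add: i * 10 ]] ] fork.

    "Crude wait so both workers finish before we look at the result."
    (Delay forMilliseconds: 200) wait.
    Transcript show: shared printString; cr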


> Jason's point, which I agree with, is that programming with threads in
> shared memory and using semaphores (or monitors, or critical sections)
> to eliminate interference is a bad idea.  

Ok. I get that it's complex and that some of the techniques have scaling 
issues when the number of cores is very large. I don't see how it's a bad 
idea, though - I don't see how it's any worse than the alternative being 
suggested.

> Parallel programming with no
> shared memory, i.e. by having processes communicate only by passing
> messages, is much easier to program.
>   

So you mean one thread per protected memory space? No lightweight 
threads (since they use shared memory by definition)? Not more than 
one Smalltalk process per protected memory space? Just one thread of 
execution for each operating system process/task? So if I want one 
hundred Smalltalk processes running in my application I will need one 
hundred operating system processes?
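
For contrast, today a hundred lightweight Smalltalk processes inside one 
image are trivial to fork precisely because they share a single object 
memory (an illustrative snippet):

    "A hundred lightweight processes in one image, all sharing the same
     object memory (and here, the same Transcript)."
    1 to: 100 do: [:i |
        [ Transcript show: i printString; cr ]
            forkAt: Processor userBackgroundPriority ]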

Are all objects to be copied across memory-space boundaries, either as 
serialized copies or as references (to be copied later, or used for reply 
messages back to the originating node)?
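
If copying is the rule, I assume you mean something along these lines: 
serialize on one side, materialize a distinct copy on the other. A sketch 
using Squeak's ReferenceStream; I'm assuming its streamedRepresentationOf:/
unStream: class-side conveniences here:

    | original bytes copy |
    original := Dictionary new.
    original at: #answer put: 42.

    "Serialize on one side, 'send' the bytes, materialize a copy on the
     other side."
    bytes := ReferenceStream streamedRepresentationOf: original.
    copy := ReferenceStream unStream: bytes.

    Transcript show: (copy at: #answer) printString; cr.   "42"
    Transcript show: (copy == original) printString; cr    "false -- a copy"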

No "active" object running in it's own operating system process can 
respond to more than one inbound message at once? Since it only has one 
thread/Smalltalk process to avoid shared memory it must complete all the 
work that the current message send caused. What about deadlock avoidance 
in your model?
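
Concretely, the "active object" I have in mind looks roughly like this -- 
one inbox, one process, one message run to completion at a time -- and my 
deadlock question is about what happens when two of them wait synchronously 
on each other (a sketch, not a proposal; messages are modelled as blocks 
purely for illustration):

    | inbox |
    inbox := SharedQueue new.

    "One inbox, one process, one inbound message handled to completion
     at a time."
    [ [ inbox next value ] repeat ] fork.

    inbox nextPut: [ Transcript show: 'first'; cr ].
    inbox nextPut: [ Transcript show: 'second'; cr ].

    "If 'first' were to wait synchronously for a reply from another such
     single-threaded object that is in turn waiting on this one, both
     loops would block forever -- hence my question about deadlock."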

Maybe I'm misunderstanding your definitions, but that seems to be what 
your proposal implies.

To ensure clarity on this complex topic, please provide definitions and 
full explanations with examples. Please be very detailed. Thanks very much.

All the best,

Peter




