Multi-core CPUs

tim Rowledge tim at rowledge.org
Sun Oct 21 05:25:59 UTC 2007


I'm sure I'm going to regret saying anything in this thread but what  
the hell...
>
> Sure you could have N threads - native or green - each running ONE  
> (Squeak) Smalltalk process running in one protected memory space as  
> long as NONE of these threads share ANY memory between them.
I don't think that was actually specified, but yes, that would be one  
way of doing it.


> That means no objects being shared.

No, it doesn't. Objects can be shared via communication; that's what  
messages are for. It just happens that the Smalltalk systems most of  
us are used to share objects by sharing memory.
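
To make that concrete, here's a minimal sketch of the idea in Go,  
used purely as an illustration of the CSP style (none of this is  
Squeak, and the Point type is invented for the example). The value is  
'shared' by sending a copy over a channel, so the receiver never  
aliases the sender's memory:

    package main

    import "fmt"

    // Point stands in for an arbitrary object; the name is made up.
    type Point struct{ X, Y int }

    func main() {
        ch := make(chan Point)
        done := make(chan struct{})

        go func() {
            p := <-ch // the receiver gets its own private copy
            p.X = 99  // mutating that copy leaves the sender's value alone
            fmt.Println("receiver:", p)
            close(done)
        }()

        p := Point{X: 1, Y: 2}
        ch <- p // the object travels by value, not as a shared pointer
        <-done
        fmt.Println("sender:  ", p) // still {1 2}
    }

Within one OS process the runtime still shares an address space under  
the hood, of course; the point is the programming model, in which  
nothing is reachable from both sides at once.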

> That means each Smalltalk process is really its own image running  
> independently with only one user-level Smalltalk process.

Works for me. Like an ecology of cells. Now where did I hear that  
analogy before? Oh, yes, I think it might have been either Alan Kay's  
doctoral thesis or an early talk on the idea of objects.


> Any communication between them with objects is serialized and  
> transmitted in some manner that - oh dear - avoids using shared  
> memory or a shared communications "buffer" between two or more  
> threads.

Transputer.
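
For flavour, here is a minimal sketch of that serialize-and-transmit  
discipline, again in Go as a stand-in for the Occam/CSP model the  
Transputer ran (the Msg type and its field names are invented for the  
example). Two goroutines play the two images; the only thing that  
crosses between them is an encoded byte stream:

    package main

    import (
        "encoding/gob"
        "fmt"
        "io"
        "log"
    )

    // Msg stands in for an inter-image message; the names are made up.
    type Msg struct {
        Selector string
        Arg      int
    }

    func main() {
        r, w := io.Pipe() // stand-in for a wire between two isolated images

        // "Image A": encode an object onto the wire, then hang up.
        go func() {
            enc := gob.NewEncoder(w)
            if err := enc.Encode(Msg{Selector: "add:", Arg: 42}); err != nil {
                log.Println("encode:", err)
            }
            w.Close()
        }()

        // "Image B": decode a fresh copy; no memory is shared with the sender.
        var m Msg
        if err := gob.NewDecoder(r).Decode(&m); err != nil {
            log.Fatal("decode: ", err)
        }
        fmt.Printf("received %q with argument %d\n", m.Selector, m.Arg)
    }

Swap the io.Pipe for a socket and the two ends could be separate  
images on separate machines; nothing about the discipline changes.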

> This is done even though sending messages via shared memory buffers  
> in one protected memory space is very efficient or sending messages  
> via a shared memory space when more than one protected memory space  
> is in use is also very efficient.
That only applies in the limited sphere of the typical  
single-processor systems we've grown used to over the last few years.  
And for the purposes of any discussion about real parallelism,  
current 2/4/8-core systems are really a poor-quality patch on the  
single-CPU idea.

A key idea that people are going to have to get used to is, oddly  
enough, just like the one they had to get used to in order to accept  
late binding and dynamic languages. That is, to paraphrase an old  
quotation from Dan Ingalls (I'm pretty certain):  
"we've got past the stage of worrying about the number of  
computational cycles; we need to start worrying about the quality"
The nominal inefficiency of having message passing across processes  
done by something 'slower' than shared memory may be a key to  
allowing massively spread computation. Trading the 'efficiency' of  
hand-coded assembler for higher-level languages made it more  
practical to build bigger programs that could do more. Trading the  
'efficiency' of C for a decent late-bound language allows more  
conceptually complex problems to be tackled. Trading the 'efficiency'  
of shared memory as a medium for sharing information for some other  
transmission method may be the lever that unlocks really complex  
systems.

I think it's time people read up a bit on some computational history.  
Quite a bit of this stuff was worked on in the old days, before the  
x86 started to dominate the world with its saurian single-core  
brutishness. Learn about Occam, for example.

And Peter, before I "explain in full detail and completely how it  
would actually work" how about you explain in full detail and  
completely how you're going to fund my research? ;-)


tim
PS: another 'randomly chosen' sigline that manages to be eerily  
appropriate.
--
tim Rowledge; tim at rowledge.org; http://www.rowledge.org/tim
Try not to let implementation details sneak into design documents.




