Multi-core CPUs

Igor Stasenko siguctua at gmail.com
Thu Oct 18 15:36:00 UTC 2007


On 18/10/2007, Sebastian Sastre <ssastre at seaswork.com> wrote:
> Hey, this sounds like an interesting path to me. If we think of nature and its
> design, these images could be analogous to the cells of a larger body. Fragmentation
> keeps things simple without compromising scalability. Nature concluded
> that it is more efficient not to develop a few super-complex brain cells but to
> develop zillions of far simpler ones, that is, cells that are just
> complex enough, and to let them assemble into an unimaginably complex
> network: a brain.
>
> Another observation that makes me find this interesting: we
> know that an object that is too smart smells bad. I mean it easily starts
> to become less flexible, so less scalable in complexity, less intuitive (you
> have to learn more about how to use it), and more to memorize, maintain,
> document, etc. It is smarter, but it may turn into
> a bad deal because it is too costly. That said, if we think of those
> flexible mini-images as objects, each one using a core, we can scale
> enormously, and almost trivially, in this whole multi-core business, and in a way
> we know works.
>
> Another interesting point is fault tolerance. If one of those images happens to
> suffer downtime (because of a power failure on the host where it was
> running, or whatever other reason), the system might feel it somehow, but
> it would not be a complete failure, because there are other images to handle the
> demand. A small (and therefore efficient), well-protected critical system can
> coordinate containment measures for the "crisis", and hopefully the system
> never really lets its users feel the crisis at all.
>
> Again, I find this is a trade-off about when to scale horizontally or
> vertically. For hardware, Intel and friends scaled vertically (more
> bits and more Hz, for instance) for years, as far as they were physically able to.
> Now they have reached a kind of barrier and have started to scale horizontally
> (adding cores). Please don't fall into endless discussions, like the ones I
> have seen out there, comparing apples with bananas: both are fruit,
> but they are not comparable. I mean, both are about scaling, but they are two different
> axes of a multidimensional scaling (complexity, load, performance, etc.).
>
> Here I am thinking of vertical as making one Squeak smart enough to be
> thread safe, and horizontal as making one smart network of N Squeaks.
>
> Sometimes one choice will be good business and sometimes the
> other will. I feel that the time for horizontal has come. If that's true, investing
> (time, $, effort) now in vertical scaling may turn out to have a lower
> cost/benefit ratio than the same investment in
> horizontal scaling.
>
> The truth is that this is all speculative and I don't know. But I do trust
> in nature.
>

I have often thought myself about making Smalltalk 'vertical' (by making it
multithreaded with a single shared memory). Now, after reading this post,
I think your approach is much better.
I also think it would be good to take some steps towards supporting
multiple images in a single executable:
- make a single executable capable of running a number of images in
separate native threads.
This would save memory and could also help make inter-image messaging
less costly.
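
To make this a bit more concrete, here is roughly the shape I have in mind,
as a minimal sketch in plain C with pthreads. Note that image_load() and
image_interpret() are invented placeholders, not the real Squeak VM entry
points, and the names and sizes are arbitrary:

/* Sketch only: one executable, several images, one native thread each. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_IMAGES 4

typedef struct {
    int         id;         /* which image slot this thread runs      */
    const char *image_path; /* each image keeps its own object memory */
} ImageSlot;

/* Stand-in for "read the image file into a private object memory". */
static void *image_load(const char *path)
{
    printf("loading %s\n", path);
    return malloc(16 * 1024 * 1024);
}

/* Stand-in for "run the bytecode interpreter over that object memory". */
static void image_interpret(void *heap, int id)
{
    printf("image %d interpreting on its private heap %p\n", id, heap);
}

static void *run_image(void *arg)
{
    ImageSlot *slot = (ImageSlot *)arg;
    void *heap = image_load(slot->image_path);
    image_interpret(heap, slot->id);  /* no heap is shared between images */
    free(heap);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_IMAGES];
    ImageSlot slots[NUM_IMAGES];
    char      names[NUM_IMAGES][32];

    /* One native thread per image, all inside a single executable, so the
       interpreter code and C runtime are mapped into memory only once. */
    for (int i = 0; i < NUM_IMAGES; i++) {
        snprintf(names[i], sizeof names[i], "worker-%d.image", i);
        slots[i].id = i;
        slots[i].image_path = names[i];
        pthread_create(&threads[i], NULL, run_image, &slots[i]);
    }
    for (int i = 0; i < NUM_IMAGES; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

Since all the threads live in one address space, an inter-image message could
be handed over as a copied buffer in a shared mailbox instead of being
serialized over a socket, which is where I would expect the cost saving to
come from.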


>         Cheers,
>
> Sebastian Sastre
>
> > -----Original Message-----
> > From: squeak-dev-bounces at lists.squeakfoundation.org
> > [mailto:squeak-dev-bounces at lists.squeakfoundation.org] On
> > behalf of Ralph Johnson
> > Sent: Thursday, 18 October 2007 08:09
> > To: The general-purpose Squeak developers list
> > Subject: Re: Multi-core CPUs
> >
> > On 10/17/07, Steve Wart <steve.wart at gmail.com> wrote:
> > > I don't know if mapping Smalltalk processes to native
> > threads is the
> > > way to go, given the pain I've seen in the Java and C# space.
> >
> > Shared-memory parallelism has always been difficult.  People
> > claimed it was the language, the environment, or they needed
> > better training.
> > They always thought that with one more thing, they could "fix"
> > shared-memory parallelism and make it usable.  But Java has
> > done a good job with providing reasonable language
> > primitives.  There has been a lot of work on making threads
> > efficient, and plenty of people have learned to write
> > multi-threaded Java.  But it is still way too hard.
> >
> > I think that shared-memory parallelism, with explicit
> > synchronization, is a bad idea.  Transactional memory might
> > be a solution, because it eliminates explicit synchronization.  I
> > think the most likely solution is to avoid shared memory
> > altogether, and go with message passing.
> > Erlang is a perfect example of this.  We could take this
> > approach in Smalltalk by making minimal images like Spoon,
> > making images that are designed to be used by other images
> > (again, like Spoon), and then implementing our systems as
> > hundreds or thousands of separate images.
> > Image startup would have to be very fast.  I think that this
> > is more likely to be useful than rewriting garbage collectors
> > to support parallelism.
> >
> > -Ralph Johnson
> >
>
>
>
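
To make the "no shared memory, only message passing" point above a bit more
concrete, here is another minimal sketch, again in plain C rather than
Smalltalk: two ordinary OS processes stand in for two small images and only
ever exchange copied bytes over a pipe. Spoon is not involved here; this is
just the shape of the idea:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int to_child[2], to_parent[2];
    pipe(to_child);   /* "parent image" -> "child image"  */
    pipe(to_parent);  /* "child image"  -> "parent image" */

    if (fork() == 0) {
        /* The "child image": it sees only the bytes it is sent. */
        char request[64] = {0};
        read(to_child[0], request, sizeof request - 1);

        char reply[96];
        snprintf(reply, sizeof reply, "handled: %s", request);
        write(to_parent[1], reply, strlen(reply) + 1);
        _exit(0);
    }

    /* The "parent image": the only coupling between the two images is the
       message itself, so there is nothing to lock and nothing to share. */
    const char *msg = "3 + 4";
    write(to_child[1], msg, strlen(msg) + 1);

    char answer[96] = {0};
    read(to_parent[0], answer, sizeof answer - 1);
    printf("parent got: %s\n", answer);

    wait(NULL);
    return 0;
}

The same shape would have to scale to hundreds or thousands of such
processes, which is why minimal images and very fast image startup matter so
much for this approach.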


-- 
Best regards,
Igor Stasenko AKA sig.


