Concurrent Futures

Igor Stasenko siguctua at gmail.com
Wed Oct 31 13:09:06 UTC 2007


On 31/10/2007, Andreas Raab <andreas.raab at gmx.de> wrote:
> Igor Stasenko wrote:
> > Then I wonder why they don't drop the idea of having shared memory at all?
>
> The major reason is cost, not performance. With a single shared memory
> subsystem you can allocate memory dynamically to the cores as you need
> it. Not using shared memory at all means you need to pre-allocate memory
> for each core. Which leaves you with two options: Either over-allocate
> memory for each core (expensive) or assume that the programmer can keep
> relatively small caches utilized effectively. The PS2 had that approach
> and failed miserably (this is one of the reasons why it took so long
> before the games could actually utilize its full power - keeping those
> caches filled was a major pain in the neck despite the bandwidth and
> computational power available).
>
> The same effect can be seen with GPUs - the cheapest (usually Intel)
> GPUs utilize shared (main) memory to drive cost down. But that's all. It
> doesn't mean that just because Intel likes cheap graphics they're the
> fastest (in fact, the precise opposite is true - lots of VRAM and a fast
> bus outperforms shared memory by far).
>
> > Each CPU then could have its own memory, and they could interact by
> > sending messages in network-style fashion. We would then write code
> > which uses such an architecture in the best way. But while this is
> > not the case, should we assume that such code will work faster than
> > code which 'knows' that there is a single shared memory for all CPUs
> > and uses that knowledge in the best way?
>
> No. But the opposite isn't necessarily true either. We shouldn't assume
> either way, we should measure and compare. And not only cycles but also
> programming effort, correctness and robustness.
>
> > I thought the goals were pretty clear. We have a single image, and
> > we want to run multiple native threads upon it to utilize all the
> > cores of multi-core CPUs.
> > What we currently have is a VM which can't do that. So, I think, any
> > other VM, even a naively implemented one, which can, is better than
> > nothing.
> > If you have any ideas what such a VM would look like, I'm glad to
> > hear them.
>
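
To make the pre-allocation point above concrete, here is a small,
purely illustrative C sketch (the core count, pool size and per-core
demand figures are my own assumptions, nothing from the Squeak VM):
with per-core private memory the total has to be partitioned up front,
so one core can starve while the others sit on unused space, whereas a
shared pool serves whichever core actually asks.

/* Illustrative only: fixed per-core partitions vs. one shared pool. */
#include <stdio.h>

#define CORES    4
#define TOTAL_MB 1024

int main(void) {
    /* Imagined per-core demand, in MB - assumed numbers, 1000 MB total. */
    int demand[CORES] = { 700, 100, 100, 100 };

    /* Share-nothing: each core owns a fixed 1/CORES slice up front. */
    int slice = TOTAL_MB / CORES;   /* 256 MB each */
    for (int c = 0; c < CORES; c++) {
        if (demand[c] > slice)
            printf("core %d: needs %d MB but owns only %d MB -> fails\n",
                   c, demand[c], slice);
    }

    /* Shared memory: one pool, handed out as cores actually ask for it. */
    int used = 0;
    for (int c = 0; c < CORES; c++) used += demand[c];
    printf("shared pool: %d of %d MB used, every core satisfied\n",
           used, TOTAL_MB);
    return 0;
}

Here the partitioned layout fails core 0 even though the machine as a
whole has memory to spare, which is exactly the waste Andreas describes.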

If we look at the multi-core problem as a networking problem then, as
far as I can see, shared memory helps us minimize the traffic between
cores: we don't need to spend time serializing data and transferring
it between cores if it is located in shared memory and can be accessed
directly from both ends.
The share-nothing model, in contrast, proposes not to use shared
memory at all, which in turn means much higher traffic between cores
compared to a model which uses shared memory.
So a balance should be found between network load and the use of
shared resources. We can't win by choosing either extreme, only
something in the middle.
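
To illustrate the serialization cost, here is a hypothetical pthreads
sketch (names like Message, consume_shared and PAYLOAD_WORDS are made
up for the example; this is not VM code): handing a buffer to another
thread through shared memory costs one pointer, while a share-nothing
hand-off must first copy the whole payload into memory the receiver
owns.

/* Compile with: cc -pthread handoff.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAYLOAD_WORDS (1 << 20)

typedef struct {
    long *data;   /* producer's buffer */
    long  sum;    /* result computed by the consumer */
} Message;

/* Shared-memory hand-off: the consumer reads the producer's buffer
 * in place; the "message" is just a pointer. */
static void *consume_shared(void *arg) {
    Message *m = arg;
    long sum = 0;
    for (size_t i = 0; i < PAYLOAD_WORDS; i++) sum += m->data[i];
    m->sum = sum;
    return NULL;
}

/* Share-nothing hand-off: the payload must be copied into memory the
 * consumer owns before use - O(n) extra traffic for every message. */
static void *consume_copied(void *arg) {
    Message *m = arg;
    long *private_copy = malloc(PAYLOAD_WORDS * sizeof(long));
    memcpy(private_copy, m->data, PAYLOAD_WORDS * sizeof(long));
    long sum = 0;
    for (size_t i = 0; i < PAYLOAD_WORDS; i++) sum += private_copy[i];
    m->sum = sum;
    free(private_copy);
    return NULL;
}

int main(void) {
    Message m;
    m.data = malloc(PAYLOAD_WORDS * sizeof(long));
    for (size_t i = 0; i < PAYLOAD_WORDS; i++) m.data[i] = 1;

    pthread_t t;
    pthread_create(&t, NULL, consume_shared, &m);
    pthread_join(t, NULL);
    printf("shared-memory hand-off: sum = %ld\n", m.sum);

    pthread_create(&t, NULL, consume_copied, &m);
    pthread_join(t, NULL);
    printf("share-nothing hand-off: sum = %ld (copied %zu bytes first)\n",
           m.sum, PAYLOAD_WORDS * sizeof(long));

    free(m.data);
    return 0;
}

The memcpy in consume_copied is the per-message traffic the
share-nothing model adds; with real serialization (encoding object
graphs to bytes, not just a flat copy) it would only get worse.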
Am I still wrong here?

-- 
Best regards,
Igor Stasenko AKA sig.


