Concurrent Futures

Joshua Gargus schwa at fastmail.us
Wed Oct 31 17:07:35 UTC 2007


On Oct 31, 2007, at 6:09 AM, Igor Stasenko wrote:
>
> If we look at the multi-core problem as a networking problem then, as
> far as I can see, shared memory helps us minimize traffic between cores.

Shared memory is an abstraction that pretends there is no traffic
between cores, but of course there really is: every write to a cache
line that another core has touched must be propagated by the
cache-coherence protocol.  Letting hardware threads access objects
"at random" (i.e. with no regard to their location in memory) will
certainly not help us minimize traffic between cores; why do you
think it will?
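
To make that hidden traffic concrete, here's a toy C program you can
try (plain pthreads, nothing Squeak-specific, and it assumes 64-byte
cache lines).  Two threads increment independent counters; the only
difference between the two runs is whether the counters share a cache
line.  On typical hardware the padded version is several times faster,
and all of that difference is coherence traffic that the shared-memory
abstraction never shows you.

/* Build with: cc -O2 falseshare.c -lpthread
   Two threads bump independent counters.  When the counters sit on
   the same cache line, every write forces the line to bounce between
   cores; padding them onto separate lines removes that traffic. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000L

/* a and b very likely on the same 64-byte line */
static struct { volatile long a, b; } same_line;
/* a and b forced onto different lines by the padding */
static struct { volatile long a; char pad[64]; volatile long b; } padded;

static void *inc(void *p)
{
    volatile long *c = p;
    for (long i = 0; i < ITERS; i++)
        (*c)++;                       /* volatile: each ++ hits memory */
    return NULL;
}

static void run(volatile long *x, volatile long *y, const char *label)
{
    struct timespec t0, t1;
    pthread_t ta, tb;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_create(&ta, NULL, inc, (void *)x);
    pthread_create(&tb, NULL, inc, (void *)y);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("%-11s %.2fs\n", label,
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
}

int main(void)
{
    run(&same_line.a, &same_line.b, "same line:");  /* line ping-pongs */
    run(&padded.a, &padded.b, "padded:");           /* no shared line  */
    return 0;
}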

> Because we don't need to spend time serializing data and transferring
> it between cores if it's located in shared memory and can easily be
> accessed from both ends.
> But the share-nothing model proposes not to use shared memory, which
> in turn means that there will be much higher traffic between cores
> compared to a model which uses shared memory.

It implies nothing of the sort.  The shared-nothing model gives you
control over this traffic.  The model that you propose gives you no
control; I think it will probably give degenerate results in
practice, with lots of needless cache-coherence overhead.  Do you
think that performance will scale linearly with each processor
added?  It seems unlikely to me.  If you disagree, please explain why.

BTW, the time spent serializing data is completely irrelevant when
considering traffic between cores.  I also think it will be a small
overhead on overall performance, because in practice the amount of
data sent between cores/images will be small.  It will be trivial
for the application programmer to measure the number and size of
messages sent between images, and to design the computation so that
the overhead is low (i.e. lots of computation happens in-image for
each message between images).
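
For what I mean by "trivial to measure", here is a sketch in C; the
transport and all the names are made up for illustration, but the real
thing would be a few lines of the same shape in the image-side send
code:

/* Sketch: metering inter-image traffic.  transport_send() is a
   stand-in for whatever primitive actually moves bytes between
   images; the wrapper tallies message count and volume so the
   programmer can verify that computation dominates communication. */
#include <stddef.h>
#include <stdio.h>

static unsigned long messages_sent, bytes_sent;

static void transport_send(int image, const void *buf, size_t len)
{
    (void)image; (void)buf; (void)len;  /* stub: real code moves the bytes */
}

static void image_send(int image, const void *buf, size_t len)
{
    messages_sent += 1;
    bytes_sent += len;
    transport_send(image, buf, len);
}

int main(void)
{
    for (int i = 0; i < 1000; i++)
        image_send(1, "work-item", 9);  /* pretend workload */
    printf("%lu messages, %lu bytes sent between images\n",
           messages_sent, bytes_sent);
    return 0;
}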

> So a balance should be found between network load and the use of
> shared resources.  We can't win by choosing either extreme, only
> something in the middle.

There are some cases where it doesn't make sense to serialize data
into a message.  If I have a large video "file" in a ByteArray in one
image, and I want to play it (decode, upload to OpenGL, etc.), I
don't want to serialize the whole thing.  It would be much more
efficient to ensure that the GC won't move it, and then just pass a
pointer to the data.  I don't think that this sort of thing should be
disallowed.
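
Sketched in C, the idea looks like this.  A malloc'd block stands in
for the pinned ByteArray, since malloc'd memory never moves, and the
"decoding" is just a checksum; none of these names exist in the Squeak
VM today.

/* Sketch: handing a large video buffer to a decoder thread by
   reference instead of by copy.  Passing the pointer costs the same
   whether the buffer is a kilobyte or a gigabyte; serializing it
   would cost time and memory proportional to its size. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct video_buffer {
    unsigned char *data;  /* pinned: never moves while the decoder runs */
    size_t         len;
};

static void *decode(void *arg)
{
    struct video_buffer *vb = arg;      /* no copy, no serialization */
    unsigned long sum = 0;
    for (size_t i = 0; i < vb->len; i++)  /* stand-in for real decoding */
        sum += vb->data[i];
    printf("decoded %zu bytes (checksum %lu)\n", vb->len, sum);
    return NULL;
}

int main(void)
{
    struct video_buffer vb;
    vb.len = 64 * 1024 * 1024;          /* a "large video file" */
    vb.data = malloc(vb.len);
    memset(vb.data, 7, vb.len);

    pthread_t decoder;
    /* Pass only a pointer: constant cost, independent of vb.len. */
    pthread_create(&decoder, NULL, decode, &vb);
    pthread_join(decoder, NULL);
    free(vb.data);
    return 0;
}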

I think we agree on this point.

Thanks,
Josh

> Am I still wrong here?
>
> -- 
> Best regards,
> Igor Stasenko AKA sig.
>
