Concurrent Futures

Igor Stasenko siguctua at gmail.com
Wed Oct 31 00:26:45 UTC 2007


On 31/10/2007, Joshua Gargus <schwa at fastmail.us> wrote:
>
> On Oct 30, 2007, at 3:28 PM, Igor Stasenko wrote:
>
> > On 30/10/2007, Jason Johnson <jason.johnson.081 at gmail.com> wrote:
> >> On 10/30/07, Igor Stasenko <siguctua at gmail.com> wrote:
> >>> which is _NOT_ concurrent
> >>> computing anymore, simply because it's not using shared memory, and in
> >>> fact there is no sharing at all, only a glimpse of it.
> >>
> >> Huh?  What does sharing have to do with concurrency?  The one and
> >> only
> >> thing shared state has to do with concurrency is the desire to speed
> >> it up, i.e. a premature optimization.  That's it.
> >>
> > Look. Current multi-core architectures use shared memory. So the
> > logical way to utilize such an architecture at maximum power is
> > to build on top of it.
> > Any other approach (such as share-nothing) introduces too much noise
> > on such architectures.
>
> It is unreasonable to assume that ad-hoc, fine-grained sharing of
> objects between processors will give you the fastest performance on
> the upcoming machines with 100s and 1000s of cores.  What about
> memory locality and cache coherency?  It is not cheap to juggle an
> object between processors now, and it will become more expensive as
> the number of cores increases.
>
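
A minimal sketch of that cost in plain C with pthreads (the loop count,
the 64-byte cache line and the padding size are illustrative assumptions
only): two threads hammering counters that share a cache line force the
line to bounce between cores, while padded counters avoid the coherency
traffic entirely.

/* build: cc -O2 -pthread false_sharing.c */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERS 50000000UL

/* Two counters packed together: they almost certainly share one
   64-byte cache line, so writes from different cores keep
   invalidating each other's copy of that line. */
static struct { volatile uint64_t a, b; } same_line;

/* The same counters, each padded out to its own cache line. */
struct padded { volatile uint64_t v; char pad[56]; };
static struct padded own_line[2];

static void *bump(void *arg) {
    volatile uint64_t *counter = arg;
    for (uint64_t i = 0; i < ITERS; i++)
        (*counter)++;
    return NULL;
}

/* Run two threads on the given pair of counters and time them. */
static double run_pair(volatile uint64_t *x, volatile uint64_t *y) {
    struct timespec t0, t1;
    pthread_t ta, tb;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_create(&ta, NULL, bump, (void *)x);
    pthread_create(&tb, NULL, bump, (void *)y);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    printf("same cache line: %.2f s\n",
           run_pair(&same_line.a, &same_line.b));
    printf("separate lines:  %.2f s\n",
           run_pair(&own_line[0].v, &own_line[1].v));
    return 0;
}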

> In a different email in the thread, you made it clear that you
> consider distributed computing to be a fundamentally different beast
> from concurrency.  Intel's chip designers don't see it this way.  In
> fact, they explicitly formulate inter-core communication as a
> networking problem.  For example, see
> http://www.intel.com/technology/itj/2007/v11i3/1-integration/4-on-die.htm
> (I've seen better links in the past, but this is the best that I could
> quickly find now).
>
Then I wonder why they don't drop the idea of having shared memory at all.
Each CPU could then have its own memory, and they could interact by
sending messages in a network-style fashion. We would then write code
which uses such an architecture in the best way. But as long as this is
not the case, should we assume that such code will work faster than code
which 'knows' that there is a single shared memory for all CPUs and uses
that knowledge in the best way?
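
A minimal sketch of that message-sending style in plain C with pthreads
(the one-slot channel and all names are illustrative assumptions only):
each worker keeps its own private memory, and the only interaction is
copying a message through a channel, much like a packet over a network.

/* build: cc -O2 -pthread share_nothing_sketch.c */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* A one-slot channel: messages are copied in and out, so the two
   workers never touch each other's memory directly. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  changed;
    int             full;
    char            payload[64];
} channel_t;

static channel_t chan = { PTHREAD_MUTEX_INITIALIZER,
                          PTHREAD_COND_INITIALIZER, 0, "" };

static void channel_send(channel_t *c, const char *msg) {
    pthread_mutex_lock(&c->lock);
    while (c->full)                      /* wait until the slot is free */
        pthread_cond_wait(&c->changed, &c->lock);
    strncpy(c->payload, msg, sizeof c->payload - 1);
    c->full = 1;
    pthread_cond_signal(&c->changed);
    pthread_mutex_unlock(&c->lock);
}

static void channel_receive(channel_t *c, char *out, size_t len) {
    pthread_mutex_lock(&c->lock);
    while (!c->full)                     /* wait until a message arrives */
        pthread_cond_wait(&c->changed, &c->lock);
    strncpy(out, c->payload, len - 1);
    out[len - 1] = '\0';
    c->full = 0;
    pthread_cond_signal(&c->changed);
    pthread_mutex_unlock(&c->lock);
}

static void *producer(void *arg) {
    (void)arg;
    channel_send(&chan, "hello from worker 0");  /* copied out */
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    char local[64];                              /* private memory */
    channel_receive(&chan, local, sizeof local); /* copied in */
    printf("worker 1 received: %s\n", local);
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}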

> I think that your proposal is very "clever", elegant, and fun to
> think about.
Thanks :)

> But I don't see what real problem it solves.  It
> doesn't help the application programmer write correct programs (you
> delegate this responsibility to the language/libraries).  It doesn't
> make code run at maximum speed, since it doesn't handle memory locality.
> In short, it seems like too much work to do for such uncertain
> gains... I think that we can get farther by examining some of our
> assumptions before we start, and revising our goals accordingly.
>
I thought the goals were pretty clear. We have a single image, and we
want to run multiple native threads upon it to utilize all cores of
multi-core CPUs.
What we currently have is a VM which can't do that. So, I think, any
other VM, even a naively implemented one, which can do that is better
than nothing.
If you have any ideas of how such a VM would look, I'm glad to hear them.
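
As a rough sketch of that goal in plain C with pthreads (ObjectMemory,
interpret_loop and the single heap lock are hypothetical placeholders,
not the real Squeak VM API): one native thread per core, all of them
working against the same shared image, with one naive global lock
standing in for whatever synchronization a real implementation needs.

/* build: cc -O2 -pthread threaded_vm_sketch.c */
#include <pthread.h>
#include <stdio.h>

#define NUM_CORES 4  /* pretend we asked the OS how many cores exist */

/* Placeholder for the single shared object memory (the image). */
typedef struct { long work_done; } ObjectMemory;

static ObjectMemory shared_image;
static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

/* Placeholder for "run the Smalltalk processes scheduled to this
   core for a while"; a naive single heap lock guards the image. */
static void interpret_loop(ObjectMemory *image, int core) {
    pthread_mutex_lock(&heap_lock);
    image->work_done += core + 1;
    pthread_mutex_unlock(&heap_lock);
}

static void *vm_thread(void *arg) {
    int core = (int)(long)arg;
    for (int i = 0; i < 1000; i++)
        interpret_loop(&shared_image, core);
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_CORES];
    for (long c = 0; c < NUM_CORES; c++)
        pthread_create(&threads[c], NULL, vm_thread, (void *)c);
    for (int c = 0; c < NUM_CORES; c++)
        pthread_join(threads[c], NULL);
    printf("work done by all interpreter threads: %ld\n",
           shared_image.work_done);
    return 0;
}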

-- 
Best regards,
Igor Stasenko AKA sig.


