Multi-core CPUs

Peter William Lount peter at smalltalk.org
Thu Oct 25 06:12:13 UTC 2007


Hi,

Slicing up the data objects and shipping them in one deep-copied parcel 
from one Smalltalk image to another isn't a general solution. It may be 
a solution for some problems that can be simplified that way, but it 
will NOT work for the majority of problems. In fact, I'd find a "deep 
copy and split up the data objects" approach, on its own, quite useless 
for most of the parallel problems that I'm working on solving.

In the general large-scale case that I gave as an example (in an earlier 
email), the one million data objects could be retrieved or accessed by 
any of the 10,000 processes used in the example. While shipping them all 
in a deep-copied parcel in one message is possible, it's not always the 
wisest move. If the compute nodes are going offline then it may be 
required, but otherwise the general approach of shipping references plus 
a core set of objects is better. In the example it was the "search 
patterns" that were sliced up across the processes; slicing up the data 
objects across the processes instead means the example as given won't 
even work! That alters the example in a dramatic way. It might still 
succeed for a particular group of problems, such as rendering, where the 
pieces are independent.

A key characteristic of the general problems is that the data objects 
can and must be accessible from ANY of the forked-off processes, with 
ANY of them able to alter the objects at any point in time. Those 
changes are propagated back to the central node (assuming there is just 
one central node) when a commit occurs, and then on to the other 
forked-off processes that have an interest in seeing updates to objects 
in mid-transaction. Some of these changes will of course nullify the 
work of other processes, requiring them to abort and possibly start over 
with the newest changes. Too many interacting changes between the 
processes will of course cause too many aborts and retries (assuming 
abort-and-retry is the chosen mechanism for dealing with overlapping, 
mutually exclusive changes that would otherwise leave the data objects 
inconsistent).
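
To make the abort-and-retry idea concrete, here is a minimal 
workspace-style sketch in Squeak Smalltalk of optimistic commits against 
a single shared value guarded by a version counter. The names (guard, 
version, seenVersion, and so on) are purely illustrative; a real system 
would version the individual data objects and propagate commits between 
images rather than within one:

    | guard version value done |
    guard := Semaphore forMutualExclusion.
    version := 0.
    value := 0.
    done := Semaphore new.
    5 timesRepeat: [
        [ | seenVersion newValue committed |
          committed := false.
          [ committed ] whileFalse: [
              "take a consistent snapshot of the value and its version"
              guard critical: [ seenVersion := version. newValue := value ].
              "do the real computation outside the lock (here just +1)"
              newValue := newValue + 1.
              "commit only if nobody else committed in the meantime;
               otherwise abort this attempt and retry with fresh state"
              guard critical: [
                  version = seenVersion
                      ifTrue: [ value := newValue.
                                version := version + 1.
                                committed := true ] ] ].
          done signal ] fork ].
    5 timesRepeat: [ done wait ].
    Transcript show: 'final value: ', value printString; cr

When the processes rarely collide, almost every attempt commits on the 
first try; as contention rises, more and more work is thrown away in 
aborts, which is exactly the failure mode described above.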

So while it's useful for some problems to simplify them down to 
splitting up the data and spreading it across N processing nodes, the 
set of problems for which that approach is not viable is much larger 
than the set for which it is. Solving 90% of the cases will thus require 
much more than what is being proposed by the 
simplify-concurrency-at-the-loss-of-capability proponents.

It should be noted that there isn't one solution for the general case. 
What is needed are solutions that cover various chunks of the solution 
space, plus a way of selecting the correct solution mechanism, either 
manually or automatically (preferred, if viable). Then the "deep copy 
and split up the data objects only" solution may do its part as one 
piece in a wider matrix of solutions. To dispense with the tools we need 
to solve the general problems is folly IMHV (In My Humble View).

An excellent book for learning the ins and outs of concurrency control - 
and most importantly the common mistakes - is the free PDF book, "The 
Little Book of Semaphores", by Allen B. Downey and his students: 
http://www.greenteapress.com/semaphores/downey05semaphores.pdf. Enjoy 
the full power of parallel programming.
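
For a flavour of what the book covers, here is the most basic semaphore 
pattern in Squeak Smalltalk: one process blocking until another signals 
that it has finished (the one-second delay just stands in for real 
work):

    | ready |
    ready := Semaphore new.
    [ Transcript show: 'worker computing...'; cr.
      (Delay forSeconds: 1) wait.    "stand-in for real work"
      ready signal ] fork.
    ready wait.    "blocks here until the worker signals"
    Transcript show: 'worker finished'; cr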

As an aside, one of the reasons that we don't have better object 
filing-out across all the Smalltalk versions is that the original 
Smalltalk only provided the extremes of shallow and deep copying of 
objects. Working the middle ground, where only portions of an object 
graph are copied, took a lot of work since you had to write it from 
scratch each time. What is needed is a general-purpose method of doing 
this important job for the widest range of use cases.
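
For instance, the standard copy/postCopy hook gives one point on that 
middle ground: copy makes a shallow copy and then sends postCopy, so a 
class can duplicate the parts of the graph it owns while continuing to 
share the rest. A sketch with a made-up Order class (the class and its 
variables are hypothetical, for illustration only):

    Object subclass: #Order
        instanceVariableNames: 'lineItems customer'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'CopyExample'.

    Order>>postCopy
        super postCopy.
        "duplicate the collection this object owns, but keep
         sharing the customer object with the original"
        lineItems := lineItems collect: [ :each | each copy ]

The trouble, as noted above, is that every class ends up hand-rolling 
its own policy; a general-purpose mechanism would let you describe, per 
use case, which edges of the object graph to copy and which to share.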

All the best,

Peter William Lount


