Distributed intelligence or not

Lothar Schenk lothar.schenk at gmx.de
Sat May 8 10:46:16 UTC 2004

Gary McGovern wrote:

> I'm struggling a bit at the moment, trying to justify distributed
> intelligence in program design. Or if centralised is better, or whether
> there should be a shift between centralised and distributed depending upon
> the nature or size of the program/system. Maybe it is because what I am
> mostly doing is small - writing centralised seems quicker and simpler to
> write. And distributing the intelligence I do because it is good form - my
> course texts say do it. Maybe in the long term it works better.

Doing something just because someone else says so is never satisfactory, even 
if it happens to work. Usually there is a reason why certain things are 
recommended, and that reason can be examined for validity. Often such an 
examination reveals the specific conditions under which a certain course of 
action is valid, and those under which it is not.

With regard to your question, you may find it worthwhile reading Kevin Kelly's 
book "Out Of Control", which deals with what he calls "vivisystems", artificial 
systems that are modeled after the way biological systems function. I found a 
reference to this book on Ted Kaehler's web site, and a little bit of web 
searching revealed that it is available online in its entirety here: 
(I just ordered my printed copy via amazon, too.)

One interesting point he makes (which apparently also gave the book its title) 
is that massively parallel self-organizing processing systems will, at the 
extreme end, be de facto uncontrollable, at least not controllable in the way 
we control the linear sequential systems we have known so far. Here's a quote:

"As our inventions shift from the linear, predictable, causal attributes of 
the mechanical motor, to the crisscrossing, unpredictable, and fuzzy 
attributes of living systems, we need to shift our sense of what we expect 
from our machines. A simple rule of thumb may help:

      For jobs where supreme control is demanded, good old clockware is the 
way to go.

      Where supreme adaptability is required, out-of-control swarmware is what 
you want.

For each step we push our machines toward the collective, we move them toward 
life. And with each step away from the clock, our contraptions lose the cold, 
fast optimal efficiency of machines. Most tasks will balance some control for 
some adaptability, and so the apparatus that best does the job will be some 
cyborgian hybrid of part clock, part swarm. The more we can discover about 
the mathematical properties of generic swarm processing, the better our 
understanding will be of both artificial complexity and biological complexity."

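Gary's centralised-vs-distributed question can be made concrete with a toy 
sketch (my own illustration, not from Kelly's book): the same task, getting a 
group of agents to agree on a shared value, solved clockware-style by one 
central controller that sees everything, and swarmware-style by agents that 
each only talk to their neighbours. In Python rather than Squeak, for brevity:

```python
# Toy contrast between "clockware" and "swarmware" (illustrative sketch only).

def centralized_consensus(values):
    """Clockware: a central controller reads every value and dictates the mean."""
    return sum(values) / len(values)

def distributed_consensus(values, rounds=50):
    """Swarmware: each agent repeatedly averages with its two ring neighbours.

    No agent ever sees the global state, yet because each round preserves the
    total and mixes information around the ring, all agents converge to the
    same global mean the central controller computed directly.
    """
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        vals = [(vals[(i - 1) % n] + vals[i] + vals[(i + 1) % n]) / 3
                for i in range(n)]
    return vals

if __name__ == "__main__":
    values = [1.0, 5.0, 9.0, 3.0]
    central = centralized_consensus(values)   # exact mean: 4.5
    swarm = distributed_consensus(values)
    print(central)
    print(all(abs(v - central) < 1e-6 for v in swarm))
```

Note the trade Kelly describes: the centralised version is shorter, exact, and 
trivially debuggable, while the distributed one tolerates any single agent's 
view being purely local, at the cost of only converging approximately and 
being harder to reason about.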
Regards, Lothar

More information about the Squeak-dev mailing list