David Chase wrote:
I don't know if this has much relevance to Squeak, but in my experience programmers, and I think people in general, are much happier thinking sequentially. The sort of bugs that occur with unstructured parallelism are just too Martian for most people to contemplate. On the other hand, if every method is "synchronized" (to use the Java phrase), it is generally too expensive, and also not enough (to refer back to someone else's transaction example -- books must balance).
Whether people are "happier thinking sequentially" is a very important question. [As an aside, people clearly don't think sequentially - the brain has huge amounts of internal parallelism. And I think it is just an illusion that this parallelism is only at a low level (e.g. neurons). See Minsky's Society of Mind theory ( http://www.media.mit.edu/people/minsky/ ), for example.]
Thinking sequentially about problems that are inherently concurrent is suboptimal. Here's an excerpt from "MultiLogo: A Study of Children and Concurrent Programming" ( http://el.www.media.mit.edu/groups/el/Papers/mres/MultiLogo/MultiLogo.html ) where a fourth-grade girl (F) who had programmed Logo for a year was asked to "program" a situation in which she had to sweep the floor and her brother had to set the table for dinner:
F: I'm going to tell him to start setting the table, which he won't do. Anyway, he starts on the table, then I go back and start sweeping.
M: So you're going to ask him to set the table, and then you're going to start sweeping.
F: But when I tell him to do it, he only does one plate. But there are five people that he has to set places for and he only sets one plate. Because he has to repeat this thing. And I have to repeat. I only make one stroke with my broom. But we have to repeat it forever. But there is no way both of us...
I think her problem is that her model of programming is sequential as a result of her year of Logo programming. And the world isn't sequential. Think of sports teams. Think of traffic. Think of the internal concurrency in walking. In games. Think about bank account transfers. The Incredible Machine. Cooking. An orchestra. And so on.
"Unstructured parallelism" is fine UNLESS the concurrent activities can interfere with each other. That's why I'm not excited about taking sequential languages and adding threads or the like. You need to redesign from scratch to deal well with concurrency.
There's been a lot of discussion here of transactions. I agree that programming transactions using locks is just "too Martian". [Maybe that's why the ToonTalk help character is Marty the Martian ;-)] Here's how I would program them in ToonTalk. I'd program a bank account robot to respond to a message asking for exclusive attention. The message contains a nest where subsequent messages will arrive (and probably a bird that is given an acknowledgement that the message was processed). The robot takes the nest where messages normally arrive, stores it somewhere in its box, and puts the incoming nest where that nest was. So that an aborted transaction can be backed out, the robot should also copy its box and store that. Now the sender of the message has exclusive control over this bank account: all of its messages go to the nest it sent, while messages from anyone else go to the stored nest. To end exclusive control, the sender sends a release message; a robot handles it by throwing away the copy of the old state, discarding the private nest, and putting the original nest back where it was. (Any messages that arrived in the meantime are still on top and are now processed.) This isn't very complicated - I bet a typical 10-year-old familiar with ToonTalk could master it. As with any code, bugs are possible, the most serious being to forget to end exclusive control when you are done.
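For readers who don't know ToonTalk, the nest-swapping protocol above can be sketched in ordinary Python, with a Queue standing in for a nest. This is only an illustration of the idea under my own assumptions (the class, method, and message names are invented, and real ToonTalk robots work by demonstration, not by code like this):

```python
import queue

class Account:
    """Sketch of the bank-account 'robot'. A 'nest' is modeled as a
    Queue of (verb, amount) messages; all names here are invented."""

    def __init__(self, balance=0):
        self.balance = balance
        self.public_nest = queue.Queue()   # where messages normally arrive
        self.active_nest = self.public_nest
        self._saved = None                 # (stored nest, copied state)

    def begin_exclusive(self, private_nest):
        # Store the normal nest and a copy of the state, then listen
        # only on the nest the client sent.
        self._saved = (self.active_nest, self.balance)
        self.active_nest = private_nest

    def end_exclusive(self, abort=False):
        # Put the original nest back; on abort, roll back to the copy.
        # Messages that arrived in the meantime are still queued and
        # get processed next.
        nest, state = self._saved
        if abort:
            self.balance = state
        self.active_nest = nest
        self._saved = None

    def step(self):
        # Process one message from whichever nest is currently active.
        verb, amount = self.active_nest.get_nowait()
        if verb == 'deposit':
            self.balance += amount
        elif verb == 'withdraw':
            self.balance -= amount
```

During exclusive access a bystander's deposit simply sits on the public nest and is processed only after the release, and `end_exclusive(abort=True)` restores the copied state, mirroring the abort case described above.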
I also think that truly asynchronous communication is a dubious idea. The worst parts of Java (the deprecated bits) are more or less asynchronous (Thread.stop,suspend,resume). We ended up (in our Java system) hiding almost all asynchrony; threads yield at well-defined points, garbage collection occurs at well-defined points, etc, all enforced by the compiler/VM (in the safe language, there are no infinite loops, period).
And in "Little Deadlocks and the Three Bears" ( http://www.lsi.usp.br/~jecel/stories/deadlocks.html ), Jecel Mattos de Assumpcao Jr. defines "asynchronous":
"asynchronous - the synchronous messages used in CSP couple the communication of information with the communication of events. With asynchronous messages the two are separated (communication of event is then done with semaphores and similar structures) allowing great flexibility in system design. This also means there are great opportunities for mistakes, so understanding and debugging asynchronous systems is not a trivial task. "
These problems with asynchronous messages don't match my experience. Maybe these problems are completely a consequence of mixing asynchrony and sequential programming.
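Jecel's distinction, that synchronous messages couple the communication of information with the communication of events while asynchronous messages separate the two, can be made concrete with a toy sketch. Python here is purely for illustration, and `async_send`, `sync_send`, and `receiver` are made-up names, not anyone's API:

```python
import queue
import threading

mailbox = queue.Queue()   # unbounded, so put() itself never blocks

def async_send(msg):
    # Information is communicated, but the *event* of its receipt is
    # not: the sender continues immediately and must learn of receipt
    # some other way (semaphores and the like, as Jecel says).
    mailbox.put(msg)

def sync_send(msg):
    # CSP-style coupling: do not proceed until the receiver has
    # acknowledged every queued message via task_done().
    mailbox.put(msg)
    mailbox.join()

def receiver(n, received):
    # Drain n messages, acknowledging each one.
    for _ in range(n):
        received.append(mailbox.get())
        mailbox.task_done()
```

After `sync_send` returns you know your message was taken; after `async_send` you know nothing yet, which is exactly where the extra flexibility, and the extra opportunity for mistakes, comes from.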
I am afraid that I am perhaps focusing on this at too low a level for this crowd, but the subject of the thread is "lots of concurrency", and I am not sure what that is really intended to mean. Normally, I think it implies performance.
To me concurrency means simultaneous activities (and if just running on a single processor then at least conceptually simultaneous). Parallelism (running on multiple CPUs under your control) and distributed programming (running on communicating computers typically controlled by different parties) are special cases. Simultaneous activities are a good idea when there is a conceptual match, when there are CPUs going idle, or when it is inherent (e.g. distributed computing).
Best,
-ken kahn ( www.toontalk.com )
We're getting into some of my favorite literature, so I wanted to jump in here.
And I think it is just an illusion that this parallelism is only at a low level (e.g. neurons). See Minsky's Society of Mind theory ( http://www.media.mit.edu/people/minsky/ ), for example.
But also consider Herb Simon's arguments in opposition -- and Simon has a lot more empirical evidence in his favor. I don't have an opinion on which is right yet, but I don't think that this is a settled point.
Here's an excerpt from "MultiLogo: A Study of Children and Concurrent Programming" ( http://el.www.media.mit.edu/groups/el/Papers/mres/MultiLogo/MultiLogo.html )
I really love Mitchel's MultiLogo work, but part of what I love about it is his honesty in how *confusing* students found all the concurrency. For example, he tells a great story about a student who kills the launching process, and can't understand why the launched process doesn't stop, too.
I think her problem is that her model of programming is sequential as a result of her year of Logo programming. And the world isn't sequential. Think of sports teams. Think of traffic. Think of the internal concurrency in walking. In games. Think about bank account transfers. The Incredible Machine. Cooking. An orchestra. And so on.
I don't find the "year of Logo programming" argument convincing. There are too many studies (most prominently the Pea and Kurland work, but even Idit Harel's and Yasmin Kafai's versions of ISDP) that show that not much gets learned in a year of programming. That deep mindsets about the universe get changed in a single year is a stretch. (For example, Idit's and Yasmin's studies have taken more than a full year.)
The other examples (sports teams, traffic, etc.) seem more an argument that students hold a centralized, sequential model of the universe -- consider Mitchel's work with StarLogo and how hesitant the students were to release the centralized models.
It should be noted that Mitchel's StarLogo work is a dissertation about MENTAL MODELS OF CONTROL, *NOT* programming. I asked Mitchel once about the interface that students used to StarLogo, and he told me that he was it. None of his subjects actually wrote any of those programs! Rather, they told Mitchel about their ideas, and he coded them -- explaining what he was doing -- and then worked with the kids to understand the results. It's important to note that the kids didn't write the code. They might have been able to, but that hasn't been tested. As far as I know, there have been no empirical studies of kids programming in StarLogo -- we don't know if it would work for the average kid. So, we can't use StarLogo as an example of a concurrent programming language that works for kids.
Mark
-------------------------- Mark Guzdial : Georgia Tech : College of Computing : Atlanta, GA 30332-0280 Associate Professor - Learning Sciences & Technologies. Collaborative Software Lab - http://coweb.cc.gatech.edu/csl/ (404) 894-5618 : Fax (404) 894-0673 : guzdial@cc.gatech.edu http://www.cc.gatech.edu/gvu/people/Faculty/Mark.Guzdial.html
Fascinating and important questions are being debated in this thread; I just wanted to ask Mark (or anyone else who knows) which Herb Simon arguments and evidence in opposition to the "naturalness" of parallelism/concurrency in programming languages he was referring to. Perhaps a brief textual summary of the arguments, and maybe a pointer to the evidence?
- Jerry Balzano
At 10:38 AM -0700 10/25/01, Mark Guzdial wrote:
We're getting into some of my favorite literature, so I wanted to jump in here.
And I think it is just an illusion that this parallelism is only at a low level (e.g. neurons). See Minsky's Society of Mind theory ( http://www.media.mit.edu/people/minsky/ ), for example.
But also consider Herb Simon's arguments in opposition -- and Simon has a lot more empirical evidence in his favor. I don't have an opinion on which is right yet, but I don't think that this is a settled point.
------------------------- Dr. Gerald J. Balzano Teacher Education Program Dept of Music Laboratory for Comparative Human Cognition Cognitive Science Program UC San Diego La Jolla, CA 92093 (619) 822-0092 gjbalzano@ucsd.edu
As far as I know, Herb Simon didn't argue about naturalness of parallelism in programming languages. Herb Simon was one of the pioneers of cognitive science, and he argued that the mind was single-threaded. (Simon was also one of the most amazing intellects of our time -- a Nobel prize winner in Economics and a Turing Award winner in computer science)
Mark
pointers?
david
At 09:46 PM 10/25/01 -0400, you wrote:
As far as I know, Herb Simon didn't argue about naturalness of parallelism in programming languages. Herb Simon was one of the pioneers of cognitive science, and he argued that the mind was single-threaded. (Simon was also one of the most amazing intellects of our time -- a Nobel prize winner in Economics and a Turing Award winner in computer science)
Mark
-- David Farber dfarber@numenor.com
But Herb didn't play Bach on the pipe organ *or* think about what he was doing when driving a car, or even just walking and talking and looking and feeling .... his intellect was "amazing" (in a very special sense of that word)...
Cheers,
Alan
-------
At 9:46 PM -0400 10/25/01, Mark Guzdial wrote:
As far as I know, Herb Simon didn't argue about naturalness of parallelism in programming languages. Herb Simon was one of the pioneers of cognitive science, and he argued that the mind was single-threaded. (Simon was also one of the most amazing intellects of our time -- a Nobel prize winner in Economics and a Turing Award winner in computer science)
Mark
Hello Alan, I have to pull you up on something here. A person playing keyboard, driving a car and so on doesn't always think about what they are doing. Often it is trained into the body or mind or whatever through repetition and practice and maybe in a linear fashion. Regards, Gary
----- Original Message ----- From: "Alan Kay" Alan.Kay@squeakland.org To: squeak-dev@lists.squeakfoundation.org Sent: Friday, October 26, 2001 4:13 AM Subject: RE: Lots of concurrency
But Herb didn't play Bach on the pipe organ *or* think about what he was doing when driving a car, or even just walking and talking and looking and feeling .... his intellect was "amazing" (in a very special sense of that word)...
Cheers,
Alan
I fear this debate is taking an odd twist into irrelevancy. While it may be true (indeed unsurprising) that human cognition is most appropriately modelled using concurrency, indeed concurrency of a promiscuous nature, I do not see how this indicates or counterindicates that a concurrent model is the best one for human cognition to use when specifying the execution of a system.
However self-aware we may be, awareness of the means by which we think does not make us think best using that means. As others have noted, concurrency is hard. It is hard not only because languages do not facilitate concurrent programming (though many do), but because concurrency (or its abstraction, indeterminacy) is inherently hard. It is trivial to write a plausible concurrent program that is broken, and very, very hard to find and diagnose the source of a bug.
This is not to say that promiscuous determinacy doesn't also add complexity -- it most certainly does. An example as trivial as swapping two values x and y, which is invariably modelled using a temporary variable
t := x; x := y; y := t
introduces complexity unrelated to the problem. Indeed, Edsger Dijkstra emphasized the problem of overdetermining code, therefore favoring the language construct of the concurrent assignment, so that swap can be written thus:
x,y := y,x
leaving the more detailed sequencing to the "system" to sort out. The point here is that such nondeterminacy is well-constrained and easily implemented even in the worst case. The sequencing and protection of shared resources in concurrent programs is far more subtle and problematic.
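(As it happens, Python's tuple assignment `x, y = y, x` is exactly this concurrent assignment.) The subtlety of shared resources shows up in even the smallest example, the classic lost-update race. A minimal sketch, assuming nothing beyond the standard threading module; the function and its thread/iteration counts are arbitrary choices of mine:

```python
import threading

def run(n_threads, n_incr, use_lock):
    """Increment a shared counter from several threads, with or
    without protecting the read-modify-write step."""
    counter = 0
    lock = threading.Lock()

    def bump():
        nonlocal counter
        for _ in range(n_incr):
            if use_lock:
                with lock:
                    counter += 1   # the whole read-modify-write is protected
            else:
                counter += 1       # unprotected read-modify-write: a race

    threads = [threading.Thread(target=bump) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

`run(4, 100_000, use_lock=True)` is always 400000; without the lock the result depends on the scheduler and may silently fall short, and the failing interleaving is timing-dependent, which is exactly why such bugs are so hard to find and diagnose.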
My point is this: while overdetermining code sequentially often introduces a level of detail that precludes a more elegant expression of correct code, and sometimes distracts from the best way to articulate it, expressing concurrency introduces subtle unstated bugs of its own, and requires stating degrees of sequencing that are often harder to express than simply specifying the sequencing in all its gory detail.
If one is actually playing *music* when playing Bach on the pipe organ, then one is indeed thinking deeply about all the parts at once, how they intertwine and what they might "mean", both separately and in combination. Same when one is improvising in counterpoint. Same, if one is sight reading and trying to have real music flow out. I'm afraid that one really does think about these in parallel combination (as I said, very much like watching a theatrical production with multiple actors on the stage, but sonically). It's learnable, and lots of people have learned how. I believe that many advanced thinking "skills" have quite a bit in common with all this. You can learn how to have multiple "thinkers" working on different aspects of ideas, all together.
Cheers,
Alan
---- At 11:05 PM +0000 10/28/01, Gary McGovern wrote:
Hello Alan, I have to pull you up on something here. A person playing keyboard, driving a car and so on doesn't always think about what they are doing. Often it is trained into the body or mind or whatever through repetition and practice and maybe in a linear fashion. Regards, Gary
As a footnote, it would seem that women have brains which multi-task much better than men's.
Any evidence for enhanced parallelism in women, or is this just a crude OS characteristic?
Cheers
John
It is true John. I have my own proof, but John Gray, the author of "Men Are from Mars, Women Are from Venus" (an ex-Buddhist monk of 28 years, I believe), describes it very well.
Justin Walsh wrote:
It is true John. I have my own proof, but John Gray, the author of "Men Are from Mars, Women Are from Venus" (an ex-Buddhist monk of 28 years, I believe), describes it very well.
Maybe there's another book in this: "Men are from Microsoft, Women are from Unix"?
I've not read the Mars/Venus book, but have seen some pretty convincing experiments demonstrating vastly enhanced multi-tasking in women over men. Different connections between right and left brain have been posited as explanations.
Of course, women joke about men's inability to do more than one thing at once all the time!
Cheers
John
----- Original Message ----- From: "John Hinsley" jhinsley@telinco.co.uk To: squeak-dev@lists.squeakfoundation.org Sent: Monday, October 29, 2001 8:45 PM Subject: Re: Lots of concurrency
Of course, women joke about men's inability to do more than one thing at once all the time!
And Russian women complain about their men being unable to do anything all at once, all the time. :-) Regards, Gary (The author of the email denies being the author of the joke)
Hi.
If one is actually playing *music* when playing Bach on the pipe organ, than one is indeed thinking deeply about all the parts at once, how they intertwine and what they might "mean", both separately and in combination. Same when one is improvising in counterpoint. Same, if one is sight reading and trying to have real music flow out. I'm afraid that one really does think about these in parallel combination (as I said, very much like watching a theatrical production with multiple actors on the stage, but sonically). It's learnable, and lots of people have learned how. I believe that many advanced thinking "skills" have quite a bit in common with all this. You can learn how to have multiple "thinkers" working on different aspects of ideas, all together.
Isn't a theatrical production an account of a long exchange of messages which are answered? If so, how many simultaneous "processes" would a typical theatrical production have? The same question for sitcoms. I suspect the answer, at least for sitcoms, is never greater than 2. Evidently there could be many more than 2 processes, as long as they don't execute at the same time. I don't think there would be more than 5 anyway (the lower bound of 7±2); otherwise the authors would be reducing their audience.
When I was learning how to type I noticed that I'd think of what to say, spell it out, and then that stuff would be mapped into a queue of things to write... think faster than you type and you lose track of what you wanted to write; think slower than you type and your hands wait until the queue fills up. When the queue is full enough, a daemon (now I can say I have a daemon for this, after ~20 years of typing on a qwerty keyboard) comes around and maps each letter into something one of my fingers has to do. Usually there's some sort of digram recognition as well, like "ll", shift+' -> ", and so on.
I wonder if this is how people read music when they are interpreting it... maybe they develop multiline decoders for each line of the staff, which are put into a queue, which then are processed by the daemon, which puts all that stuff in another queue, which is sent using an internal clock down to the hands and feet. Or maybe it's just a linear decoder which reads each beat of the staff top down?...
How many fingers are usually used simultaneously to depress keys when playing, say, the piano? When playing the organ with pedals and manuals, I'd be inclined to suspect that the magic number is ~7... two feet, 5 fingers. Therefore, a key to execution difficulty is the max number of simultaneous keystrokes (if you will). Music could even be scored for difficulty like that... choose a suitable number x>1. Then the difficulty involved in playing something without error would be:
Sum{0<=i<=12} n(i) * x^i
where n(i) = the number of times in the score in which you have to make i simultaneous depressions. Thus players could be scored. Of course there are other factors such as playing speed etc...
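For what it's worth, the proposed metric is a few lines of code (a toy sketch; the function name, the chord-size representation, and the default base x are my own choices):

```python
from collections import Counter

def difficulty(chord_sizes, x=2):
    """Andres's proposed metric: sum of n(i) * x**i, where n(i) is how
    many times the score demands i simultaneous key depressions.
    `chord_sizes` lists the number of keys struck at each moment."""
    n = Counter(chord_sizes)
    return sum(count * x**i for i, count in n.items())
```

With x=2, three lone notes give difficulty([1, 1, 1]) == 6 while a single 3-note chord gives difficulty([3]) == 8, so under this weighting one thick chord already outweighs three single notes, which is the intent of making the weight exponential in i.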
Andres.
How many fingers are usually used simultaneously to depress keys when playing say the piano? When playing the organ with pedals and manuals, I'd be inclined to suspect that the magic number is ~7... two feet, 5 fingers. Therefore, a key to execution difficulty is the max number of simultaneous keystrokes (if you will). Music could be even scored for difficulty like that... choose a suitable number x>1. Then the difficulty involved in playing something without error would be:
I've played pieces on piano that have 8-9 fingers at a time plus pedaling. I've played pieces that have just 2 fingers and no pedal that are much harder. So, the number of mental concepts doesn't map directly to the number of appendages in use. And let's not forget, hitting the right notes is a bare beginning; usually the goal is to make music, not to hit all the right notes and win some sort of points.
The muddying factor here is chunking. Familiar chords or riffs will seem like single mental chunks, not groups of individual notes. A good musician will have lots of chunks and so doesn't have to think as hard. Note that no musician has ever been good without a lot of training -- they must be learning *something* during that time!
Overall, the music examples aren't overwhelming. A consistent alternative view is that many things are automatic, that high-level thought is still restricted to a single stream, and in fact that most of what people do is automatic.
-Lex
Hi.
I've played pieces on piano that have 8-9 fingers at a time plus pedaling. I've played pieces that have just 2 fingers and no pedal that are much harder. So, the number of mental concepts doesn't map directly to the number of appendages in use. The muddying factor here is chunking. Familiar chords or riffs will seem like single mental chunks, not groups of individual notes. A good musician will have lots of chunks and so doesn't have to think as hard. Note that no musician has ever been good without a lot of training -- they must be learning *something* during that time!
It seems fair to say that music is harder to play when decoding the score generates more chunks. Then music is harder when the chunks you know are not enough to keep the decoder's chunk output in check, so you get finger-chunk buffer overruns, right?
Andres.
The muddying factor here is chunking. Familiar chords or riffs will seem like single mental chunks, not groups of individual notes. A good musician will have lots of chunks and so doesn't have to think as hard. Note that no musician has ever been good without a lot of training -- they must be learning *something* during that time!
It seems it would be fair to say that music is harder to play when the decoding of the score generates more chunks. Then, music would be harder when the chunks you know are not enough to keep the decoder's chunk output in check so there's no finger chunk buffer overruns, right?
Andres.
That's what I was getting at. I could be full of it, but it sounds good, I think. :) If I'm not mistaken, objects in Smalltalk are actually intended to match up with mental chunks. Certainly during the late 80's and early 90's that was a popular interpretation of OO.
I still think there's even more going on with music, by the way. Bigger chunking helps with all of them, though!
1. Decoding the music, which is entwined with:
   1a. predicting the music, based on general music knowledge
   1b. remembering the particular piece
2. Engineering the emotional landscape, for lack of a good description. Music is usually meant to move people, and it's not just a matter of "doing well".
3. Choosing which fingers to use (a multi-level plan in itself -- pick a general strategy, then assign individual fingers.)
4. Positioning the skeletal system (mainly wrists, elbows, and shoulders)
5. Oh yeah, actually pressing the keys down. There's more than one way.
This doesn't even touch on practice, either, which is *not* just playing the piece over and over.
And it still seems simplified. So much of it, even for an amateur, is just doing what is automatic, and pulling out all the individual pieces is tough. It's like talking, really -- it's mostly automatic, but there's clearly a *lot* going on. Try to explain how to give a rousing speech, or even just to chat with someone--there's a lot going on!
And to return to the topic, I'm not sure that one can do high-level problem solving with more than one of these activities at a time. Certainly you can *react* and probably *consider* more than one at a time, but higher-level stuff?
-Lex
On Thursday 25 October 2001 10:01 am, Ken Kahn wrote:
David Chase wrote:
I don't know if this has much relevance to Squeak, but in my experience programmers, and I think people in general, are much happier thinking sequentially. The sort of bugs that occur with unstructured parallelism are just too Martian for most people to contemplate. On the other hand, if every method is "synchronized" (to use the Java phrase), it is generally too expensive, and also not enough (to refer back to someone else's transaction example -- books must balance).
Whether people are "happier thinking sequentially" is a very important question. [As an aside, people clearly don't think sequentially - the brain has huge amounts of internal parallelism. And I think it is just an illusion that this parallelism is only at a low level (e.g. neurons). Read Minsky's Society Theory of Mind ( http://www.media.mit.edu/people/minsky/ ) for example.]
Thinking sequentially about problems that are inherently concurrent is suboptimal.
But ToonTalk itself still provides support for sequential programming (from my brief experience with the demo): you program a robot by having it record your (sequential) actions. Though by not allowing a robot to give a box to another robot, you're disallowing the equivalent of subroutine calls (right?).
Isn't it possible in ToonTalk to run into the same kind of problems with concurrency that you describe with other systems?
For instance, wouldn't it be possible to do the bank account example in ToonTalk in such a way that you could have inconsistent results, that is, the same as the problem with:
accountA balance: accountA balance - 50.
accountB balance: accountB balance + 50.
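To see the hazard concretely, here is a sketch in Python (my own code, not anything from ToonTalk or the thread; the Account class and function names are made up for illustration). The unsafe version transliterates the two statements above; the safe version shows the conventional lock-based fix being discussed:

```python
import threading

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

def unsafe_transfer(src, dst, amount):
    # The two statements above, transliterated: a read-modify-write on
    # each balance with no synchronization. A concurrent transfer can
    # interleave between the read and the write, silently losing an update.
    src.balance = src.balance - amount
    dst.balance = dst.balance + amount

def safe_transfer(src, dst, amount):
    # Take both locks in a fixed (id-based) order to avoid deadlock,
    # then update both balances atomically w.r.t. other transfers.
    first, second = sorted((src, dst), key=id)
    with first.lock, second.lock:
        src.balance -= amount
        dst.balance += amount

if __name__ == "__main__":
    a, b = Account(100), Account(100)
    threads = [threading.Thread(target=safe_transfer, args=(a, b, 1))
               for _ in range(50)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(a.balance, b.balance)  # money is conserved: 50 150
```

Whether a kid or a newbie would spontaneously reach for the safe version is exactly the question.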
You think about solutions from the point of view of a computer scientist who's familiar with all the potential problems of concurrency; would it even occur to a kid or programming newbie to _not_ do it unsafely in ToonTalk?
[K.Kahn:]
And I think it is just an illusion that this parallelism is only at a low level (e.g. neurons). Read Minsky's Society Theory of Mind ( http://www.media.mit.edu/people/minsky/ ) for example.
[M.Guzdial:]
But also consider Herb Simon's arguments in opposition -- and Simon has a lot more empirical evidence in his favor. I don't have an opinion on which is right yet, but I don't think that this is a settled point.
[J.Balzano:]
Perhaps a brief textual summary of the arguments, and maybe a pointer to the evidence?
Allow me to bug Mark again for what the "lot more empirical evidence in his [Simon's] favor" he's referring to. I took down my copy of Simon's "Sciences of the Artificial" book (first edition, from my grad student days), where in Ch.2 on "The Psychology of Thinking", Simon says in his Conclusion that "the evidence is overwhelming that the system is basically serial in its operation", but paging through the chapter I find very little evidence at all, much less anything that could be considered "overwhelming"! And even if I'm missing something here, the fact remains that Simon is talking in this chapter about how people do solo problem-solving like the famous cryptarithmetic problem "DONALD+GERALD=ROBERT". This is a very different context from that of (a) people's (mental) models of how the world works (as I think Andres V points out), or for that matter (b) how they would even do GOFPS (good old fashioned problem-solving) in a distributed, multi-agent situation.
-Jerry B
-------------------------
Dr. Gerald J. Balzano
Teacher Education Program / Dept of Music
Laboratory for Comparative Human Cognition
Cognitive Science Program
UC San Diego, La Jolla, CA 92093
(619) 822-0092
gjbalzano@ucsd.edu
[J.Balzano:]
Perhaps a brief textual summary of the arguments, and maybe a pointer to the evidence?
Allow me to bug Mark again for what the "lot more empirical evidence in his [Simon's] favor" he's referring to. I took down my copy of Simon's "Sciences of the Artificial" book (first edition, from my grad student days), where in Ch.2 on "The Psychology of Thinking", Simon says in his Conclusion that "the evidence is overwhelming that the system is basically serial in its operation"
"Sciences of the Artificial" is more on design than psychology. "Protocol Analysis" with Ericsson is where I've heard more of the evidence that the mind is single-threaded (e.g., similar to the current work showing that driving while using a cell phone reduces attention to each), but I'd bet that "Models of Bounded Rationality" has it, too.
Mark
--------------------------
Mark Guzdial : Georgia Tech : College of Computing : Atlanta, GA 30332-0280
Associate Professor - Learning Sciences & Technologies
Collaborative Software Lab - http://coweb.cc.gatech.edu/csl/
(404) 894-5618 : Fax (404) 894-0673 : guzdial@cc.gatech.edu
http://www.cc.gatech.edu/gvu/people/Faculty/Mark.Guzdial.html
There's been a lot of discussion here of transactions. I agree that programming transactions using locks is just "too Martian". [Maybe that's why the ToonTalk help character is Marty the Martian ;-)] Here's how I would program them in ToonTalk.

I'd program a bank account robot to respond to a message asking for exclusive attention. The message contains a nest where subsequent messages will arrive. (And probably a bird that is given an acknowledgement that the message was processed.) The robot takes the nest where messages normally arrive, stores it somewhere in its box, and puts the incoming nest where that nest was. In case you want to back out of an aborted transaction, the robot should have copied its box and stored that as well. Now the sender of the message has exclusive control over this bank account: all of its messages go to the nest it sent, and messages from anyone else go to the stored nest. To end exclusive control, a message is sent releasing it; a robot handles it by throwing away the copy of the old state, throwing away the nest it was given, and putting the original nest back where it was. (Any messages that were sent in the meanwhile are still on top and are now processed.)

This isn't very complicated - I bet a typical 10-year-old familiar with ToonTalk can master this. As with any code, bugs are possible, the most serious one being to forget to end exclusive control when you are done.
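In more conventional terms, the nest-swapping protocol might look like the following Python sketch (my translation, not ToonTalk code; a Queue stands in for the nest, a dict for the box, and all the names are mine). Outsiders keep a reference to the original nest, so their messages park there until the transaction releases it:

```python
from queue import Queue, Empty
import copy

class Account:
    """Sketch of the ToonTalk nest-swapping protocol for transactions."""
    def __init__(self, balance):
        self.state = {"balance": balance}
        self.inbox = Queue()        # the nest where messages normally arrive
        self.saved_inbox = None     # the stored nest, during a transaction
        self.saved_state = None     # copied box, in case of an abort

    def begin_exclusive(self, private_inbox):
        # Store the normal nest and a copy of the box, and put the
        # incoming nest where the normal one was.
        self.saved_inbox = self.inbox
        self.saved_state = copy.deepcopy(self.state)
        self.inbox = private_inbox

    def end_exclusive(self, abort=False):
        # Restore (or discard) the copied state and put the original nest
        # back; messages that arrived in the meanwhile are still on it.
        if abort:
            self.state = self.saved_state
        self.inbox = self.saved_inbox
        self.saved_inbox = self.saved_state = None

    def run_pending(self):
        # The robot processes whatever is on its current nest
        # (messages here are just balance deltas, for simplicity).
        try:
            while True:
                self.state["balance"] += self.inbox.get_nowait()
        except Empty:
            pass
```

During a transaction, only messages on the private queue are processed; an outsider's deposit sits on the stored nest and is handled after release, and an abort restores the copied state.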
Nice example! This is exactly how Kats works. Each object maintains separate state for each transaction in which it is involved. When the transaction is complete, all objects involved in the transaction are told to commit their new state. This is useful in concurrent processing because different processes can work simultaneously in different transactions and not step on each other (note: multiple processes can also work in the same transaction concurrently). However, transactions are just one way of dealing with concurrent processing.
Transactions are all about being able to move a system from one state to another, while being able to easily get out of a bad situation that could leave the system in an inconsistent state. Concurrency is handled by creating a sort of "parallel universe" where work can be discarded if it goes bad, or if it conflicts with something that was done in another universe.
But this is not appropriate for all situations. For example, you could use transactions to allow concurrent processes to manipulate the positions of a couple of morphs on screen. A conflict would be defined as the two morphs occupying the same position on screen. Both processes do their work, commit their changes, then the screen gets updated. If ever the two morphs move into the same space, one of the transactions would fail to commit. However, another solution is simply to check the space into which the morph is about to move for the presence of another morph and avoid the conflict. The complexity cannot be eliminated, it can only be shifted from one place to another.
If we had infinite computing power, we wouldn't need either solution...we could just establish the constraints and have them enforced at all times. It would be physically impossible for the two morphs to occupy the same space at the same time, just as it is physically impossible for two people to occupy the same space at the same time.
- Stephen
On Thursday 25 October 2001 13:01, Ken Kahn wrote:
I think her problem is that her model of programming is sequential as a result of her year of Logo programming. [...]
I agree 100%. There is no way that she would have come up with a time slicing solution if she hadn't been taught to use it in these kinds of situations. Check out the cover of the Logo issue of Byte magazine (August 1982): it is a listing for a program where the single turtle is time sliced to simulate four turtles.
[3 bears definition of asynchronous]
These problems with asynchronous messages don't match my experience. Maybe these problems are completely a consequence of mixing asynchrony and sequential programming.
Exactly - these happened in the context of people using C tools on the Transputer (I used Occam or assembler instead). With synchronous messages you get an automatic send/receive order on the client (since sending implies waiting for a reply) and a receive/send order on the server (in the case where receiving is implicit and follows automatically after sending the reply to the previous message).
With asynchronous messages I saw a lot of send/send or receive/receive bugs in other people's code, though it was never a problem for me. Note that you can have the exact same problems with Occam's kind of synchronous messages. Hmmm... perhaps I should have talked about "one way" and "two way" messages instead?
-- Jecel
[ a near-rant follows ]
At 10:01 AM 10/25/2001 -0700, Ken Kahn wrote:
Thinking sequentially about problems that are inherently concurrent is suboptimal.
I disagree. My favorite examples, not necessarily relevant to Squeak, are:
1) network (web) servers.
The task is inherently parallel; there can be many connections in progress at one time. However, the task is greatly simplified by organizing it as a bunch of sequential threads.
2) finite element models (game of life, cellular automata).
The problem is *described* from the point-of-view of a single cell, and the parallelism is then added. The differential equation itself describes the behavior *of a point*. There are cases where it is more efficient to think of the aggregate acting in parallel (for instance, the physical properties of an I-beam) but often the most interesting problems are those where we *do not know the aggregate behavior* -- there, we must work from first principles. (It depends, very much, on what we are using the computer/program for -- designing a house, we work with standard aggregate-behavior-understood materials, but designing a scramjet or a high-performance propeller, we must work from finite elements.)
And, if you consider how these are usually programmed, the time step and grid size are manipulated so that they really are very sequential and step-at-a-time. Just for example, the time step is chosen small enough (usually) so that the rate-of-change in the neighbors can be ignored (and in the cases with which I am vaguely familiar, when the rate of change in neighbors is considered, the time step is still chosen to be small enough that the "2nd derivative" is zero). One of the problems I recall (part of Thom Potempa's dissertation, I think) was dealing with cases where the change was inherently discontinuous -- in the case of oil reservoir steam injection, boiling and condensation of water at critical pressure/temperature combinations.
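To make the per-cell description concrete, here is a minimal Game of Life in Python (my own sketch, not code from the thread): the rule is stated for a single cell, and the "parallelism" is one synchronized time step applied against a frozen copy of the grid, exactly the step-at-a-time structure described above:

```python
def next_cell(alive, live_neighbors):
    # The rule for a single cell -- the behavior "of a point":
    # a dead cell with exactly 3 live neighbors is born; a live cell
    # with 2 or 3 live neighbors survives; everything else is dead.
    return 1 if live_neighbors == 3 or (alive and live_neighbors == 2) else 0

def step(grid):
    # The parallelism is added afterwards: apply the rule to every cell
    # against the old state -- one synchronized time step, with toroidal
    # wrap-around at the edges.
    rows, cols = len(grid), len(grid[0])
    def live_neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[next_cell(grid[r][c], live_neighbors(r, c))
             for c in range(cols)]
            for r in range(rows)]
```

A blinker (three live cells in a row) oscillates with period 2 under this step, which is a handy sanity check that the synchronous update is right.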
The use of (synchronized) time steps also helps avoid the non-uniform progress problem (which is the one that most often catches me when I write parallel code).
In particular, I suspect that T.I.M. uses tiny time steps, and treats each object as a sequential process. It may not be coded that way (for efficiency) but I am almost certain that is how it was modeled.
In contrast, working on parallel programs, I have seen/heard of programmers across a wide range of skill levels (everyone from the more academic members of the Modula-3 design team, to a tenured professor at MIT, to Los Alamos rocket scientists, to just ordinary programmers) get wedged numerous times with mistaken assumptions about atomicity, progress of computing, progress of communication, and equivalence of different concurrent programs. Things like:
WHILE boolean_flag DO (* busy wait *) ;
which may never, ever see an updated value of boolean_flag (there are efficiency-related reasons for this to be so), to the possibility of a compiler transforming
while ... { synchronized (foo) { single_character_output } }
into
synchronized (foo) { while ... { single_character_output } }
(This is not equivalent -- anyone else wanting to synchronize on foo cannot do so till the loop exits)
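The busy-wait on boolean_flag above is the easiest of these to repair: go through a real synchronization primitive instead of spinning on an unannotated flag. A sketch in Python (my code and names; note the visibility bug itself belongs to memory models like Java's or C's, and Python's global interpreter lock hides it, but the structural fix reads the same):

```python
import threading

done = threading.Event()   # a real primitive, not a bare boolean flag
results = []

def worker():
    results.append(42)     # produce the result first...
    done.set()             # ...then publish completion through the Event

def wait_for_result():
    # Blocks without spinning, and (unlike the busy wait) is guaranteed
    # to observe the flag once it is set. The timeout bounds the wait.
    if not done.wait(timeout=5):
        raise TimeoutError("worker never signalled")
    return results[0]
```

The Event both delivers the "flag is set" fact reliably and lets the waiter sleep rather than burn cycles.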
The Los Alamos guys apparently spent something like two months trying to find a data dependence bug in a Fortran loop; I remember this because it was The Demo That Worked by my advisor, long ago (as in, he foolishly offered to try a research tool on a real program, and it actually worked!)
Even something as simple as producers-and-consumers is easy to get wrong, if the wrong thread does the updates.
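One way to keep the updates with the right thread is the classic bounded buffer: the producer only appends, the consumer only removes, and a condition variable guards the shared count. A Python sketch (mine, not from the thread):

```python
import threading
from collections import deque

class BoundedBuffer:
    """Producer-consumer where each side performs only its own updates."""
    def __init__(self, capacity):
        self.items = deque()
        self.capacity = capacity
        self.cond = threading.Condition()

    def put(self, item):
        # Only the producer appends; it waits (not spins) while full.
        with self.cond:
            while len(self.items) >= self.capacity:
                self.cond.wait()
            self.items.append(item)
            self.cond.notify_all()

    def get(self):
        # Only the consumer removes; it waits while empty.
        with self.cond:
            while not self.items:
                self.cond.wait()
            item = self.items.popleft()
            self.cond.notify_all()
            return item
```

The easy mistakes are exactly the ones named above: letting the wrong thread touch the other end of the deque, or testing the condition with `if` instead of `while` and missing a wakeup race.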
Now, on the one hand, this is all simple stuff, much lower level than the sort of concurrency that we'd like to be thinking about; on the other hand, this is the sort of thing that skilled people should surely be able to get right, and they don't always.
I think her problem is that her model of programming is sequential as a result of her year of Logo programming. And the world isn't sequential. Think of sports teams. Think of traffic. Think of the internal concurrency in walking. In games. Think about bank account transfers. The Incredible Machine. Cooking. An orchestra. And so on.
Think, in general, of how we massage these things to make them more sequential. Traffic -- we have lanes, we have rules, we have signals. In places like Massachusetts this is less true, but I find the driving in Massachusetts is less efficient and more stressful than in other places (and, to judge by the dents and dings on the cars I see, more error-inducing). If you look at how an orchestra is organized, it is designed to present the problem as a collection of sequential tasks; only the conductor is responsible for the "big picture", and the most difficult music is that which lacks familiar timing cues (like a regular beat -- tiny time steps, again). In addition, the musicians all generally have their own copies of the music (replication -- a common trick for reducing interdependence in concurrent problems) even when playing the same part. Or, consider a marching band -- with certain exceptions (Stanford, Rice) it's time-step city, with each actor running his own program, taking cues from his neighbor (he turns left, two ticks, I turn left). I've seen buggy marching bands, too :-).
You need to redesign from scratch to deal well with concurrency.
It is certainly worth a try, because there are a fair number of problems that are inherently concurrent and are not dealt with well, but I would not bet on success. It might be an excellent place for a somewhat domain-specific language.
"asynchronous - the synchronous messages used in CSP couple the communication of information with the communication of events. With asynchronous messages the two are separated (communication of event is then done with semaphores and similar structures) allowing great flexibility in system design. This also means there are great opportunities for mistakes, so understanding and debugging asynchronous systems is not a trivial task. "
These problems with asynchronous messages don't match my experience. Maybe these problems are completely a consequence of mixing asynchrony and sequential programming.
My working definition for "asynchronous" is "at any time" -- in particular, an "asynchronous interrupt" (or "asynchronous method invocation") could occur at any time during the execution of any other method. This is incredibly problematic from the POV of data structure consistency, even with locks/critical sections. The worst parts of Java can occur "at any time" -- Thread.stop() delivers an exception (in theory) at any point in another thread's execution, and Thread.suspend() can stop a thread from making progress no matter what it is executing, no matter what locks it holds, etc.
Asynchrony presents a particular problem if you wish to test a program for the presence of bugs (never mind proving the damn thing correct in that sort of a world); in practice, you can never know that you have tested for every possible asynchronous interrupt/invocation that might occur. In a (compiled) Java system, we dealt with this problem (for garbage collection) by defining compiler-inserted safe points where threads are preempted, including by another thread's need for GC. By reducing (drastically) the number of places where the asynchronous event (GC) could occur, we made it actually possible to test that the correct GC bookkeeping was maintained for every single place where a GC could occur. Testing was still tedious and slow, but orders of magnitude less so than testing at every instruction.
There is a further problem in all of this if you are the least bit worried about performance. Communication is expensive. Synchronization is expensive. Adding layers of abstraction can become incredibly expensive if each layer adds its own separate layer of locking -- no longer is it a matter of "just dynamically inline it to make it fast" -- the synchronization itself (on a multiprocessor) can cost an order of magnitude more than the procedure call/return that was inlined. It's interesting to look at how unsynchronized versions of various library classes were added between Java 1.0 and Java 1.1; even with the fastest possible implementations, synchronization is too slow to make it the default.
I realize that I am talking about this at a completely different level, but as a practical matter, we have so far made very little connection between human methods of "computation" and machine methods of "computation". Even if there were a connection, I do not see that just because our brains work in a certain way, it should therefore be good for us to write programs that worked in that same way -- we've got no clear evidence that brains are easy to program, after all, nor is it clear that brain-programs are especially good at solving anything but "human" tasks.
I am much more pragmatic in my goals -- writing parallel programs is generally a very hard thing, especially when those parallel programs are solving large problems, especially when they are supposed to run quickly (for instance, finishing the prediction for tomorrow's weather before tomorrow arrives). There are fundamental performance problems at the machine level (communication/synchronization is expensive) and people do crazy things to get around those fundamental performance problems. A 1000-processor pocket computer does me no good if the cost of communication remains high, and (and here is where the language research would be interesting) we do not find a good (clear, natural, intuitive) way to write programs that "communicate" only when they actually need to. If message-sending is always communication, and communication is always expensive, that's a problem.
And, as a more Squeak-like aside, I would love to play with finite-elements. I am not entirely sure, but I think that FE analysis of a kite is still very hard (the fabric moves, sometimes rapidly). But it would be lots of fun.
David Chase
chase@naturalbridge.com
drchase@alumni.rice.edu
chase@world.std.com
At 21:39 27.10.2001 -0400, David Chase wrote:
At 10:01 AM 10/25/2001 -0700, Ken Kahn wrote:
Thinking sequentially about problems that are inherently concurrent is suboptimal.
I disagree. My favorite examples, not necessarily relevant to Squeak, are: [...] 2) finite element models (game of life, cellular automata).
The problem is *described* from the point-of-view of a single cell, and the parallelism is then added. The differential equation itself describes the behavior *of a point*.
Co-in-ci-dence (I can't believe I'm joining this thread). I have a couple of changesets looming here which describe and implement Conway's Game of Life in terms of StarSqueak (with its inherent if simulated parallelism). Recently (finally) having read Resnick's book, I set out to implement Life with cells that actually have state, such as age. It's one of the slowest Lifes that I've come across (max. 7 gen/s here), but also one of the most interesting ones, because of cell identity. Ever watched a Glider Gun with the cells actually aging in technicolor?
I'll see if I can get it posted over the weekend, Helge
squeak-dev@lists.squeakfoundation.org