Roger Whitney whitney@cs.sdsu.edu wrote: Many places in USA, including where I teach, teach Java without classes. ... Try to do the following simple beginner assignment in Java: Prompt the user for two numbers and print out their sum. OK.
    import java.lang.*;
    import java.io.*;

    public class MyFirstProgram {
        public static void main(String argument[]) {
            try {
                Reader r = new BufferedReader(new InputStreamReader(System.in));
                StreamTokenizer t = new StreamTokenizer(r);
                System.out.print("Enter two numbers: ");
                if (t.nextToken() != StreamTokenizer.TT_NUMBER) throw new IOException();
                double first_number = t.nval;
                if (t.nextToken() != StreamTokenizer.TT_NUMBER) throw new IOException();
                double second_number = t.nval;
                System.out.println(first_number + second_number);
            } catch (IOException e) {
                System.out.println("Oops!");
            }
        }
    }
You will discover that using the command line you have to read & parse the input character by character and manually convert the input to a number.
This claim at least is totally false. (Amongst other things, Double.valueOf("<some number>") can be used for the conversion.) For my purposes, at any rate, StreamTokenizer is *the* answer to elementary text input in Java. It isn't ideal, but it is _there_.
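For what it's worth, the conversion route mentioned here does collapse the program considerably. A sketch (the class and method names `SumTwoNumbers` and `sum` are my own invention; `Double.parseDouble` and `BufferedReader.readLine` are the standard-library pieces being relied on):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class SumTwoNumbers {
    // The string-to-number conversion is a single library call;
    // no character-by-character parsing is needed.
    static double sum(String a, String b) {
        return Double.parseDouble(a.trim()) + Double.parseDouble(b.trim());
    }

    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        System.out.print("Enter two numbers (one per line): ");
        System.out.println(sum(in.readLine(), in.readLine()));
    }
}
```

Whether `throws IOException` in `main` is acceptable pedagogy is, of course, exactly the point under debate.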
You also have to deal with exceptions, streams, static etc. Java input is really too complex for raw beginners. This of course I completely agree with. This is why several Java CS1 books come with their own simple I/O package.
None of this explains why it is better to avoid classes. Don't students _wonder_ what Reader is (it's a class) and what kind of thing they find string->number conversion in (they're classes)? Or are you vehemently agreeing that "Java without classes" is like joining the group on the bandwagon so you can ride underneath?
Now try the same problem in Squeak. How many weeks (months) earlier could you assign the problem in a Squeak class than in a Java class? Depends on how much luck you have getting Squeak to run headless, I suppose. The thing _I_ find objectionable about this example is that on a 250MHz UltraSparc I it takes fully three seconds from the time I type RETURN at the end of "java MyFirstProgram" to the time that the prompt appears. (Any Lisp programmers who remember how Lisp was reviled as too big and too slow? Feh!)
Here is a Squeak version:
    firstResponse := FillInTheBlankMorph request: 'Type a number'.
    secondResponse := FillInTheBlankMorph request: 'Type a number'.
    result := firstResponse asNumber + secondResponse asNumber.
    ^result
No need to run headless. One can use GUI components fairly easily.
I missed the convert methods in Java, so your solution is shorter than mine.
I think the reason objects and classes may be avoided in intro classes is that the people who teach them think objects are too complex for students.
On Thursday, June 27, 2002, at 07:24 PM, Richard A. O'Keefe wrote:
---- Roger Whitney Mathematical & Computer Sciences Department whitney@cs.sdsu.edu San Diego State University http://www.eli.sdsu.edu/ San Diego, CA 92182-7720 (619) 583-1978 (619) 594-3535 (office) (619) 594-6746 (fax)
On Wednesday, July 3, 2002, at 11:41 PM, Roger Whitney wrote:
I think the reason objects and classes may be avoided in intro classes is that the people who teach them think objects are too complex for students.
I once asked Alan Kay what kind of language/programming should be taught in a first course in a University. (Alan, please do correct me if I'm mis-remembering your answer.) He suggested that students should NOT first be taught object-oriented programming. Object-oriented programming is hard, on purpose, he said. It's meant to produce good, reusable code. That takes some discipline.
That doesn't mean that first-time students shouldn't program WITH objects. E-toys are all about objects, and that makes life easier. But E-toys-using students aren't dealing with class definitions and instance/class distinctions and all those other things that make object-oriented programs high-quality, but are fairly hard things to learn.
Alan's suggestion (which he re-iterated at the MM in CS Ed workshop, http://coweb.cc.gatech.edu/mmworkshop) is that a first programming course should be in multiple languages so that students don't latch on to only one way of thinking. I've heard him propose assembly language and LISP as giving students two ends of a spectrum.
Mark
Mark --
At 9:46 AM -0400 7/4/02, Mark Guzdial wrote:
On Wednesday, July 3, 2002, at 11:41 PM, Roger Whitney wrote:
I think the reason objects and classes may be avoided in intro classes is that the people who teach them think objects are too complex for students.
I once asked Alan Kay what kind of language/programming should be taught in a first course in a University. (Alan, please do correct me if I'm mis-remembering your answer.) He suggested that students should NOT first be taught object-oriented programming. Object-oriented programming is hard, on purpose, he said. It's meant to produce good, reusable code. That takes some discipline.
It must be someone else you are thinking of. I've never taught children anything but OOP. I don't think adults or children who have *never* programmed are challenged in the least by OOP. But the first paradigm that one learns seems to have quite a lasting effect these days. It was easier in the early sixties when I learned to program because there were no orthodox machine or language architectures, and one had to learn at least 20 or so. This helped quite a bit when a new idea came along (it was just another new idea, why not learn it too?) .... By the end of the sixties, all had changed, and data structures and procedures had quite taken over.
That doesn't mean that first-time students shouldn't program WITH objects. E-toys are all about objects, and that makes life easier. But E-toys-using students aren't dealing with class definitions and instance/class distinctions and all those other things that make object-oriented programs high-quality, but are fairly hard things to learn.
Well, I think this is a confusion between "objects" and some of today's "object-oriented systems". In the "Early History of Smalltalk" paper I wrote back in '93, I seem to remember pointing out that the first three principles of objects that came to me in '66:
* Everything is an object
* Objects communicate by sending messages (which thus must be objects)
* Objects can remember (memory thus via objects)
had stood the test of time. In fact the most annoying properties of today's Smalltalk/Squeak stem from the parts that haven't been completely objectified. The second comment was that the generalizations of descriptions were constantly in flux, and were thus not yet in some fundamental form. In other words, the various ways to classify were not deemed to be in any above-threshold form, and this constituted a real problem yet to be solved for OOP.
For example, ST-72 didn't have subclassing because I didn't like the nonmath way that Simula did it. However, various such schemes could be done by a form of delegation. I think the slight generalization of Simula's class inheritance that ST-76 adopted solved a short term problem pretty well, but ultimately introduced more potential (and now actual) complexity than it was worth. Henry Lieberman wrote a very nice paper years ago about delegation and prototyping. I think this is a better path than today's Smalltalk. But, Henry's solution was also too slippery with regards to how meaning was abstracted. SELF also had some good ideas, but, to me, was too "LISPY" with regard to dynamic changes of meaning. The etoys has its own object model -- basically testing a "universal object multiple perspective" design -- and will someday find a better notion of generalization than Smalltalk, Actors, SELF, etc. The main thing to learn here is that you can do a hell of a lot with a suitable conception of a universal object made from multiple perspectives, certainly more than enough to get through a one semester course for nonmajors.
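Delegation of the kind Lieberman described can be sketched in a handful of lines. The following is a hypothetical illustration (all names invented; Java used only because it is the other language in this thread), not a rendering of Lieberman's system, SELF, or the etoys model: an object answers a message from its own slots if it can, and otherwise forwards the send to its parent prototype.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A minimal prototype object: a slot table plus a parent to delegate to.
class Proto {
    private final Map<String, Function<Object[], Object>> slots = new HashMap<>();
    private final Proto parent;  // null for a root object

    Proto(Proto parent) { this.parent = parent; }

    void addSlot(String selector, Function<Object[], Object> method) {
        slots.put(selector, method);
    }

    // Message send: look locally first, then delegate up the parent chain.
    Object send(String selector, Object... args) {
        Function<Object[], Object> m = slots.get(selector);
        if (m != null) return m.apply(args);
        if (parent != null) return parent.send(selector, args);
        throw new RuntimeException("doesNotUnderstand: " + selector);
    }
}

public class DelegationDemo {
    public static void main(String[] args) {
        Proto point = new Proto(null);
        point.addSlot("describe", a -> "a point");

        // A "child" shares behavior by delegation, not by class inheritance,
        // and can shadow a slot without any class machinery.
        Proto colorPoint = new Proto(point);
        colorPoint.addSlot("color", a -> "red");

        System.out.println(colorPoint.send("describe"));  // found via delegation
        System.out.println(colorPoint.send("color"));     // found locally
    }
}
```

The point of the sketch is how little mechanism "various such schemes" need once sends can be forwarded.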
So, here's what I think today. If you are going to teach objects at all, then you should teach them right from the beginning, or pretty much forget about it. And the objects should not remotely be abstract data types. Every effort should be made to model things that are more interesting than data (that's pretty much everything in the universe! -- so there is quite a large selection to choose from). If your OOP system shows too much mechanism for beginners, then use it to reshape itself to show beginners what they need to see: a nice combination of simplicity and power. If your object system doesn't readily allow this reshaping, then you should abandon it and find one that does (why teach something that is ultimately not sound?).

BTW, this is a good way for students, especially grad students, to learn deep ideas about language and systems. They should readily be helping to make forms of the language for the beginning classes. For example, I am astounded that folks who teach Squeak in college haven't done a lot more to make an introductory environment that gets beginners quickly into the many media objects in Squeak. This would be analogous to what we did with the etoys for children, but with more range. I don't think that the whole apparatus of the tile scripting is required, but making viewers that really show powerful abstractions of the media objects is really important. Making some kind of extended scripting syntax -- maybe even Python or Lingo-like, if that is the taste -- is a very important part of this. Making methods that easily run in parallel (as in the etoys) is a very good idea, etc.

All the apparatus is there in Squeak. Why can't the grad students learn to use it and make some scripting environments that help college students? (Where's the beef here? I mean, this is what grad students used to have to learn. Do they no longer have to know how to make a language that compiles into polish byte codes?)
It's not a good excuse to say that the existing Squeak compiler is an overly complicated tangle, because there is no reason not to make a simpler compiler -- even a teaching compiler -- for the purposes of helping beginners that doesn't have as many optimizations -- and in fact, most of the optimizations in Squeak come after the simple parse tree is constructed, etc.
Alan's suggestion (which he re-iterated at the MM in CS Ed workshop, http://coweb.cc.gatech.edu/mmworkshop) is that a first programming course should be in multiple languages so that students don't latch on to only one way of thinking. I've heard him propose assembly language and LISP as giving students two ends of a spectrum.
I do think learning multiple ways to program over one's first year is really important. I don't think any of the current paradigms (including ours) is strong or comprehensive enough to serve solely as the first imprint. But timing is everything here. One way to do this in college would be to have a two semester sequence over a year. Perhaps the idea of the first semester is to get fluent in something pretty darn powerful -- such as a suitably prepared version of Squeak as alluded to above. In the second semester other styles could be explored -- and many of them could be in Squeak as well (but perhaps not all). Perhaps the mix of other styles should start earlier. It would be a very good and important experiment to find out. This experiment would be good for NSFers interested in making things better to fund.

Here is a partial list of styles:
* The mapping of simple OOP programming to massively distributed systems is really important. For example, Squeak has a nice implementation of Mitchel Resnick's StarLogo that allows a lot of "heavy zen" to be experienced pretty easily.
* The APL "mapping style", which we also find in a recursive form in LISP and in functional languages, has some nice benefits. All of these styles are either in Squeak now, or are close to being there.
* "Logical variable" programming, as in Prolog, etc., is very important.
* Also, the relationship of forward inferencing to event-driven programming is important, and of these to retrieval programming, and of these to certain kinds of coordination matching (such as LINDA). There have been many different versions of this in Squeak, but none smooth enough for college beginners. But what students need is within the range of what a good grad student can do.
* "Massively distributed systems futures programming" such as the Dave Reed stuff we have been experimenting with is tremendously interesting. The relationship of this to certain kinds of "possible worlds" AI programming is very interesting. We will put this out to the Squeak list in the Fall, and I think most will find it to be interesting, if not astounding.
* I think at the end of the year, it could be very interesting to bootstrap up a simple LISP or Smalltalk from raw machine code and a "fast-track" tool. The whole point of what John McCarthy and Steve Russell did is that great power could come from a rudimentary CPU + a small amount of code with a powerful mathematical architecture. The same goes for Smalltalk, which had a somewhat similar history, with Dan Ingalls playing the central "make the BS real" role. (P.S. 30th anniversary of this in September this year; surely everyone knows how to do this by now ...?)
This stuff is too important to be left to the vendors. I don't think settling for a compromise is good enough in any computer science department that is proud enough to confer degrees. The latter should mean there are real computer scientists among the profs and grad students -- and one of the measures of this is whether they can make the environments they want and need, especially with today's fabulous resources.
Cheers,
Alan
--
Alan:
On Thu, Jul 04, 2002 at 10:23:55AM -0800, Alan Kay wrote: [big snip]
What do you think about the following method:
In my first year of university, our "programming fundamentals" course was taught with an imaginary language. There was no compiler for it, it only existed in the professor's mind (and our nightmares, but that's another story). We'd first design everything in this imaginary language, and THEN we'd translate it to another language (In our case, FORTRAN-77...yes, this was in 1990 believe it or not).
I can safely say that NO ONE was attached to either the imaginary language (no one wanted to make a real-world compiler for it) or FORTRAN-77 (especially!). However, the 'fundamentals' were imparted to us, without any kind of language-favouritism. This was a class that was open to freshmen of all shades, not just CS majors. The second half of this course replaced FORTRAN with Pascal.
I am quite sad that absolutely no form of object-oriented design was taught at my particular university; that was something I had to pursue on my own.
Kevin --
There is a lot to be said for "desk-checking" as it used to be called back in the days when one got 5 minutes a day to try a program but only the "operator" could touch the switches. But a lot of the charm of an actual computer comes from it being able to expose bugs quickly. People who survive the early stages of this quickly get the idea that they aren't perfect, and it is OK to simply aspire to and approach perfection incrementally. This seems to be a very good side effect. So I advocate using real computers for beginners right from the word go with an emphasis on the beneficial side-effects of bugs and debugging.
The first real interactive debuggers I know of happened at Lincoln Labs in the 50s. One was called "Flit" after the flyspray, and perhaps was the first. "Programming in the debugger" quickly became a desired activity, and a lot of the early debuggers in the ARPA community could do various versions of this, safe and unsafe. The PILOT system of Teitelman for LISP in the 60s was the forerunner of the DWIM system he did for INTERLISP (running into the 70s at Xerox PARC). Dan Swinehart, at Stanford AI, and later at PARC, worked on quite a few interactive debuggers you could program in over the years. An early one was RAID, and his later thesis was a very nice design for an Algolic language inspired by Teitelman's PILOT.
All of which goes to say -- especially today -- that one should be able to program more in Squeak's debugger than we can today. I have heard Andreas Raab mumbling about this more than once, so perhaps we shall see something from him one of these days that is much more like what we need.
Cheers,
Alan
---------
At 2:03 PM -0400 7/4/02, Kevin Fisher wrote:
--
On Thursday, July 4, 2002, at 02:03 PM, Kevin Fisher wrote:
In my first year of university, our "programming fundamentals" course was taught with an imaginary language. There was no compiler for it, it only existed in the professor's mind (and our nightmares, but that's another story). We'd first design everything in this imaginary language, and THEN we'd translate it to another language (In our case, FORTRAN-77...yes, this was in 1990 believe it or not).
We did this for several (many) years at Georgia Tech. The problem was motivation. How motivating is it to write lines of code that no one ever sees execute? How can one leverage students' interest in computational media (including video games) if the programs are always imaginary?
Mark
Well, I always thought of this method as kind of "reverse-psychology" in a way...although the original design was "imaginary", they had us implement the design in FORTRAN-77. Either way, the "frustration" of wanting to do more actually drove students to re-implement their assignments in other languages.
I think a typical lab assignment was usually filled with "hey, I can do it THIS way in Pascal," or "Blasted FORTRAN! This is how I could do it in C instead!" I'm not sure if this was the true goal of the instructor, however...
On Fri, Jul 05, 2002 at 10:55:18AM -0400, Mark Guzdial wrote: [snip]
Hi Alan,
<snip>
Well, I think this is a confusion with "objects" and some of today's "object-oriented systems". In the "Early History of Smalltalk" paper I wrote back in '93, I seem to remember pointing out that the first three principles of objects that came to me in '66:
- Everything is an object
- Objects communicate by sending messages (which thus must be objects)
- Objects can remember (memory thus via objects)
had stood the test of time. In fact the most annoying properties of today's Smalltalk/Squeak stem from the parts that haven't been completely objectified.
I was just wondering which bits you find the most annoying/least objectified in today's Smalltalk/Squeak?
Cheers
Mike
p.s.
I find your comments on the lasting effect of the first paradigm one learns, and your early experience with learning so many different languages, very interesting.
There was a thread recently discussing Python and it became quite thought provoking but I didn't pipe up at the time -
I find it interesting that different language communities each define, implement, and to different degrees defend, their own versions of 'OO'. I find that some people who learn OO through one language can be left with a lasting understanding that is at odds with others who have learnt in a different language/environment. I find this particularly with respect to paradigms and the implementation of design abstractions.
I don't wish to appear critical with any particular language but I never manage to resolve some of my following thoughts when I follow discussions on how best to teach objects, and software more generally, to people :-
Python, for example, lets you write (and think) in three paradigms - procedural, functional and object oriented. I had no understanding of functional programming before I met Python and it provided a neat way for me to learn totally new ideas. Deep within a complex systems programming task, however, I found it difficult to understand code that constantly switched between all three paradigms. It evolved into this state because this was the favoured style. It's not necessarily right or wrong but prompted me to think about the differences between languages that provide power through simplicity and languages that provide power through features; and how a paradigm, or a number of paradigms, is presented and offered for use.
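The paradigm-switching observation isn't Python-specific; the same small task reads quite differently in each style even within one language. A sketch (names invented; Java used only because it is the other language in this thread) of "sum the squares of the even numbers" three ways:

```java
import java.util.List;

public class ParadigmDemo {
    // Procedural style: explicit loop and a mutable accumulator.
    static int procedural(List<Integer> xs) {
        int total = 0;
        for (int x : xs) {
            if (x % 2 == 0) total += x * x;
        }
        return total;
    }

    // Functional style: a filter/map/reduce pipeline with no mutation.
    static int functional(List<Integer> xs) {
        return xs.stream().filter(x -> x % 2 == 0).mapToInt(x -> x * x).sum();
    }

    // Object style: state and behavior bundled in a small object.
    static class SquareSummer {
        private int total = 0;
        void add(int x) { if (x % 2 == 0) total += x * x; }
        int total() { return total; }
    }

    static int objectStyle(List<Integer> xs) {
        SquareSummer s = new SquareSummer();
        for (int x : xs) s.add(x);
        return s.total();
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4);
        System.out.println(procedural(xs));   // 20
        System.out.println(functional(xs));   // 20
        System.out.println(objectStyle(xs));  // 20
    }
}
```

All three compute the same value; the cost Michael describes comes from a reader having to shift mental models mid-codebase, not from any one style being wrong.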
Moving from the paradigm to its implementation in a specific language I find lots more juicy things to think about. Reuse through templates in C++, for example. Also, different languages' provision of object references and how they can be passed around. The difference between static and dynamic typing. Casting!!! I won't go on.
I think I'm just trying to illustrate that the lasting impression of the object paradigm, if you could only experience one of (for example) Java, C++, Python, Delphi, Smalltalk is going to be different, possibly not the best impression, and thus undesirable. It's only by taking a meandering path through these languages (and more) that I'm maybe beginning to appreciate the qualities of the paradigm rather than that of the language. I started thinking down these lines when you mentioned that you had experienced lots of different ideas early on and after all, they're just ideas. I think that maybe these days, with many languages, a lot gets in the way of the fundamental 'ideas' and to see them you have to experience many languages to extract the gems. I wouldn't wish some of my language experiences on anyone :-)
Am I right in thinking that one could provide an object environment to learn in that removes the emphasis from learning the language to squarely learning the paradigm - and thus not leave an inappropriate lasting impression? I guess this is what you have done with E-toys but I hadn't thought about extrapolating that style of environment to (i'm not sure of the right word) an 'older' or maybe more 'involved' audience until you expressed the ideas for media-rich learning environments.
Is the problem therefore that by teaching the object paradigm through a specific language you essentially distort the paradigm to its expression and realisation in the language? Only by providing an environment that allows the understanding of the difference between the paradigm and the technique(s) of modeling in the paradigm can you fully appreciate the difference and thus benefit the most from the experience?
One can then, if desired, go on to play in many different languages - and to some extent discover the realities of how some communities view 'objects'!
Mike
Michael --
At 1:47 AM +0000 7/5/02, Michael Roberts wrote:
Hi Alan,
<snip>
> Well, I think this is a confusion with "objects" and some of today's
> "object-oriented systems". In the "Early History of Smalltalk" paper
> I wrote back in '93, I seem to remember pointing out that the first
> three principles of objects that came to me in '66:
> * Everything is an object
> * Objects communicate by sending messages (which thus must be objects)
> * Objects can remember (memory thus via objects)
> had stood the test of time. In fact the most annoying properties of
> today's Smalltalk/Squeak stem from the parts that haven't been
> completely objectified.
I was just wondering which bits you find the most annoying/least objectified in todays Smalltalk/Squeak?
This is about today's version of Smalltalk/Squeak -- some past versions have done some of these ...

On this list, I've mentioned several times that variables and instances should be objectified. This has been done in several other systems to provide much better leverage on a number of important problems. Messages and message sending could be more objectified. This would allow a much more flexible approach to trying delegation schemes, finding alternatives to inheritance, etc.

Another larger area that I'm astounded that no grad student has done a thesis on would be to take the "Smalltalk VM in itself" that we use for bootstrapping, and make it into a real OOP model itself. A large part of this is just to restructure the messy workable thing we have now.

Methods could stand to be more objectified. I'd like to see methods that have a vertical bar down the middle. On the left would be the "reference code": the simplest code that does what the method should do. On the right would be the "pragmatic code": a bunch of cases for running the meaning efficiently. You should be able to run both sides when debugging, etc. Another method objectification would be to have many polymorphic methods be instances of a class that guards the meaning of the polymorphism. This is part of having a model for both variables and "slots", etc.
One way to play with this would be to just take the three "principles" above and make a complete model. This is pretty easy. The important thing that CLOS (e.g.) did was to make the additions to the models compilable so that metachanges could be as efficient as the kernel. We kind of have this with SLANG, but the whole apparatus could be much much cleaner.
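As a rough sketch of what a complete model built from just the three principles might look like (hypothetical names, Java used purely for illustration, and nothing here compiled or optimized the way CLOS or SLANG would do it): messages are reified as objects, behavior and memory live in ordinary tables, and every interaction goes through a single receive point.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Principle 2: a message send is itself an object (selector plus arguments),
// so sends can be inspected, logged, or forwarded like any other value.
class Message {
    final String selector;
    final Object[] args;
    Message(String selector, Object... args) { this.selector = selector; this.args = args; }
}

// Principles 1 and 3: behavior lives in a method table, and an object
// "remembers" through its own memory table -- both ordinary objects here.
class Obj {
    private final Map<String, Function<Message, Object>> methods = new HashMap<>();
    final Map<String, Object> memory = new HashMap<>();

    void define(String selector, Function<Message, Object> body) {
        methods.put(selector, body);
    }

    // Every interaction goes through a single message-receive point.
    Object receive(Message m) {
        Function<Message, Object> body = methods.get(m.selector);
        if (body == null) throw new RuntimeException("doesNotUnderstand: " + m.selector);
        return body.apply(m);
    }
}

public class KernelDemo {
    public static void main(String[] args) {
        Obj counter = new Obj();
        counter.memory.put("count", 0);
        counter.define("increment",
            m -> counter.memory.merge("count", 1, (a, b) -> (Integer) a + (Integer) b));
        counter.define("value", m -> counter.memory.get("count"));

        counter.receive(new Message("increment"));
        counter.receive(new Message("increment"));
        System.out.println(counter.receive(new Message("value")));  // 2
    }
}
```

The interesting step, which this sketch does not take, is the one named above: making such metalevel additions compilable so they run as efficiently as the kernel.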
Cheers,
Alan
Cheers
Mike
p.s.
I think your comments on the lasting effect of the first paradigm one learns and your early experience with learning so many different languages very interesting.
There was a thread recently discussing Python and it became quite thought provoking but I didn't pipe up at the time -
I find it interesting that different language communities each define, implement, and to different degrees defend, their own versions of 'OO'. I find that some people who learn OO through one language can be left with a lasting understanding that is at odds with others who have learnt in a different language/environment. I find this particularly with respect to paradigms and the implementation of design abstractions.
I don't wish to appear critical with any particular language but I never manage to resolve some of my following thoughts when I follow discussions on how best to teach objects, and software more generally, to people :-
Python, for example, lets you write (and think) in three paradigms - procedural, functional and object oriented. I had no understanding of functional programming before I met Python and it provided a neat way for me to learn totally new ideas. Deep within a complex systems programming task, however, I found it difficult to understand code that constantly switched between all three paradigms. It evolved in this state, because this was the favoured style. It's not necessarily right or wrong but prompted me to think about the differences between languaes that provide power through simplicity and languages that provide power through features; and how a paradigm, or a number of paradigms is presented and offered for use.
Moving from the paradigm to its implementation in a specific language, I find lots more juicy things to think about. Reuse through templates in C++, for example. Also, different languages' provision of object references and how they can be passed around. The difference between static and dynamic typing. Casting!!! I won't go on.
I think I'm just trying to illustrate that the lasting impression of the object paradigm, if you could only experience one of (for example) Java, C++, Python, Delphi, Smalltalk, is going to be different, possibly not the best impression, and thus undesirable. It's only by taking a meandering path through these languages (and more) that I'm maybe beginning to appreciate the qualities of the paradigm rather than those of the language. I started thinking down these lines when you mentioned that you had experienced lots of different ideas early on and after all, they're just ideas. I think that maybe these days, with many languages, a lot gets in the way of the fundamental 'ideas' and to see them you have to experience many languages to extract the gems. I wouldn't wish some of my language experiences on anyone :-)
Am I right in thinking that one could provide an object environment to learn in that removes the emphasis from learning the language to squarely learning the paradigm - and thus not leave an inappropriate lasting impression? I guess this is what you have done with E-toys, but I hadn't thought about extrapolating that style of environment to (I'm not sure of the right word) an 'older' or maybe more 'involved' audience until you expressed the ideas for media-rich learning environments.
Is the problem therefore that by teaching the object paradigm through a specific language you essentially distort the paradigm to its expression and realisation in the language? Only by providing an environment that allows the understanding of the difference between the paradigm and the technique(s) of modeling in the paradigm can you fully appreciate the difference and thus benefit the most from the experience?
One can then, if desired, go on to play in many different languages
- and to some extent discover the realities of how some communities
view 'objects'!
Mike
--
Alan Kay:
On this list, I've mentioned several times that variables and
instances should be objectified. This has been done in several other systems to provide much better leverage on a number of important problems.
Could you say more on those systems? What does it mean to objectify (reify?) a variable? In a sense it is already reified if we understand it as an *association* name-value. Pragmatically it is very different from a "typical object", and its status depends very strongly on the programming language. A logical variable in Prolog which can exist as 'uninstantiated' is a meaningful example. A variable in FORTH is an object in the sense that the user has to demand explicitly (send a message if you wish) that it be dereferenced. But this is rather cumbersome... So, please, what do you really want, could you be more specific?
Messages and message sending could be more objectified. This would allow a much more flexible approach to trying delegation schemes, finding alternatives to inheritance, etc.
Trivially, messages are objects, but it is difficult to do something with them apart from sending them... In what sense should the message *sending* be objectified?
From time to time I read here and there (and I say it myself, albeit without much conviction...) that OO subsumes - in a sense - functional programming. A closure is an object which 'can be called', or can be 'applied' to other objects (and - of course - this is practically done in C++ or Python by overloading the () operator). But: what about iteration as tail recursion? Can it be formulated seriously in an OO framework? How can we understand such a phrase as "the *execution* of a program should be objectified"?
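Both halves of Jerzy's question can be sketched in Python (my sketch, with hypothetical names; the trampoline is one of several possible answers, not *the* answer):

```python
# A closure "is" an object that can be called: the two forms below are
# interchangeable. (In C++ this is operator(); in Python, __call__.)
def make_adder(n):
    def add(x):        # closure capturing n
        return x + n
    return add

class Adder:           # the same thing, objectified
    def __init__(self, n):
        self.n = n
    def __call__(self, x):
        return x + self.n

assert make_adder(3)(4) == Adder(3)(4) == 7

# Iteration as tail recursion, phrased with objects: each "state" object
# answers its successor, and a trampoline loop drives it to a fixed point.
class Countdown:
    def __init__(self, n, acc=0):
        self.n, self.acc = n, acc
    def step(self):
        return self if self.n == 0 else Countdown(self.n - 1, self.acc + self.n)

s = Countdown(4)
while s.step() is not s:   # the "tail call" becomes an ordinary send
    s = s.step()
assert s.acc == 10         # 4 + 3 + 2 + 1
```

The trampoline makes the *execution* itself a sequence of first-class state objects, which is one reading of "the execution of a program should be objectified".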
Finding alternatives to inheritance is something which has bothered me for several years... What I find particularly refreshing is the decoupling of classes and types, as formulated through the 'type class' systems of functional languages such as Haskell or Clean. (BTW, the classes therein are much closer to mathematical categories than classes in any other OO system.) We see now the appearance of "polytypic programs", the 'generic Haskell' project, etc. - all of which considerably enhances polymorphism and the reuse of common programming patterns. Perhaps the Smalltalking people could have a look at all that?
It is obvious that in many programming domains we need much more than classical inheritance. Computer algebra (in a modern framework - not as a mangling of symbolic, syntactic terms, but as processing of complex mathematical structures) is a typical example. We need *subsumptions*: e.g., while we can define objects belonging to well known mathematical hierarchies - AdditiveGroups, Rings, Fields, VectorSpaces, etc. - it is not so easy to implement gracefully such properties as that all additive groups are modules over the integers (because if you can add x+x, you have 2*x "for free"...), or that a ring of congruences automatically becomes a field when the modulus is a prime number, etc. All this requires (IMHO!) that the Meta level be treated very, very seriously...
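The "modules over the integers for free" subsumption can at least be gestured at with a mixin (my sketch; class names are hypothetical, and a real computer algebra system would do this at the meta level rather than by inheritance):

```python
# Once a class can add, integer scalar multiplication n*x is derivable
# from repeated addition, so the Z-module structure can live in a mixin
# instead of being rewritten for every additive group.
class ZModuleMixin:
    def scale(self, n):
        if n < 0:
            return self.negate().scale(-n)
        result = self.zero()
        acc = self
        while n:                      # double-and-add: O(log n) additions
            if n & 1:
                result = result + acc
            acc = acc + acc
            n >>= 1
        return result

class Vec(ZModuleMixin):              # an additive group of 2-D vectors
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):
        return Vec(self.x + other.x, self.y + other.y)
    def zero(self):
        return Vec(0, 0)
    def negate(self):
        return Vec(-self.x, -self.y)

v = Vec(2, 3).scale(5)
assert (v.x, v.y) == (10, 15)
```

What this sketch cannot express is Jerzy's harder case - a ring of congruences *becoming* a field when the modulus is prime - which changes an object's classification at runtime and really does need meta-level support.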
Another method objectification would be to have many polymorphic
methods be instances of a class that guards the meaning of the polymorphism. This is part of having a model for both variables and "slots", etc.
Once more, in the functional programming realm it is a major issue. But there, classes and types are animals of different races. How to choose the best from both worlds?
Thank you.
Jerzy Karczmarczuk Caen, France
On Friday 05 July 2002 13:27, Alan Kay wrote:
On this list, I've mentioned several times that variables and
instances should be objectified. This has been done in several other systems to provide much better leverage on a number of important problems.
I don't know what you mean by objectified instances, but I had an interesting idea on how to deal with variables. I borrowed the term "slices" from ZX81 Basic, but the inspiration was mostly from APL.
A slice would be related to a part of an object (or parts of several objects) in a similar way that a stream is related to its collection. So if I create
s := ArraySlice on: #(6 5 4 3 2 1 0) from: 2 to: 4
then I could do something like
s put: 9
and have #(6 9 9 9 2 1 0) as a result. This might also be interesting
v := VariableSlice on: 7@8 named: #y
And we should be able to combine slices in various neat ways. The original idea was more of a GUI-level device, and David Ungar has recently partly implemented this in Self. There is a little more about slices at http://www.lsi.usp.br/~jecel/tech.html
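A rough Python transliteration of Jecel's ArraySlice (my sketch, names mine): the slice remembers its source collection, so writing through it mutates the original, unlike an ordinary Python slice, which copies.

```python
class ArraySlice:
    """A writable view over part of a mutable sequence."""
    def __init__(self, source, start, stop):   # 1-based, inclusive, Smalltalk-style
        self.source, self.start, self.stop = source, start, stop
    def put(self, value):
        # Write the value through to every covered cell of the source.
        for i in range(self.start - 1, self.stop):
            self.source[i] = value

a = [6, 5, 4, 3, 2, 1, 0]
s = ArraySlice(a, 2, 4)    # like: ArraySlice on: #(6 5 4 3 2 1 0) from: 2 to: 4
s.put(9)                   # like: s put: 9
assert a == [6, 9, 9, 9, 2, 1, 0]
```

The point of objectifying the variable this way is that `s` can be stored, passed around, and combined long after it was created, independently of `a`.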
Messages and message sending could be more objectified. This
would allow a much more flexible approach to trying delegation schemes, finding alternatives to inheritance, etc. Another larger area that I'm astounded that no grad student has done a thesis on, would be to take the "Smalltalk VM in itself" that we use for bootstrapping, and make it into a real OOP model itself. A large part of this is just to restructure the messy workable thing we have now.
My Ecoop95 paper (http://www.lsi.usp.br/~jecel/jabs7.html with a later version published as a chapter in a book) was exactly about this. If you find that grad student in the next few months, have him (is it ever a "her" in this field?) talk to me.
Methods could stand to be more objectified. I'd like to see
methods that have a vertical bar down the middle. On the left would be the "reference code": the simplest code that does what the method should do. On the right would be the "pragmatic code": a bunch of cases for running the meaning efficiently. You should be able to run both sides when debugging, etc.
One big difference between the reference code and the pragmatic code is the caching of results. Up to a certain point it would be possible to transform the former into the latter by adding annotations for a "cache manager". I have more details about this in the same page that talks about slices.
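One way to read "annotate the reference code for a cache manager" (my sketch, using Python's stock memoizer as a stand-in for the cache manager): the reference definition stays the simplest possible statement of the meaning, a decorator supplies the pragmatic caching, and both sides can be run and compared while debugging, as Alan's two-column methods would allow.

```python
from functools import lru_cache

def fib_reference(n):
    """Reference code: the simplest definition of the meaning."""
    return n if n < 2 else fib_reference(n - 1) + fib_reference(n - 2)

@lru_cache(maxsize=None)       # the "cache manager" annotation
def fib_pragmatic(n):
    """Pragmatic code: same meaning, results cached."""
    return n if n < 2 else fib_pragmatic(n - 1) + fib_pragmatic(n - 2)

# Both sides agree; only the cost differs (exponential vs linear).
assert fib_reference(20) == fib_pragmatic(20) == 6765
```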
Another method objectification would be to have many polymorphic
methods be instances of a class that guards the meaning of the polymorphism. This is part of having a model for both variables and "slots", etc.
In the CoDA reflective Smalltalk system, this would be handled by replacing an object's default Protocol meta-object.
One way to play with this would be to just take the three
"principles" above and make a complete model. This is pretty easy. The important thing that CLOS (e.g.) did was to make the additions to the models compilable so that metachanges could be as efficient as the kernel. We kind of have this with SLANG, but the whole apparatus could be much much cleaner.
A Jitter in Smalltalk would do the job.
-- Jecel
On Mon, 8 Jul 2002, Jecel Assumpcao Jr wrote:
On Friday 05 July 2002 13:27, Alan Kay wrote:
The important thing that CLOS (e.g.) did was to make the additions to the models compilable so that metachanges could be as efficient as the kernel. We kind of have this with SLANG, but the whole apparatus could be much much cleaner.
A Jitter in Smalltalk would do the job.
I think two jitters would be simpler (and do a much better job: the right tool at the right level of abstraction for the particular problem in hand).
The first is just the regular Deutschian dynamic translation for Smalltalk (with tagged pointers, message sends, GC, and all the other stuff that makes the job tricky). This one isn't very interesting in a MOP context.
The second is intended only for MOP methods. It makes the same kinds of assumptions about the bytecode as does the existing Slang translator. In fact, it's just a Slang CompiledMethod (rather than MethodNode) -> C translator that goes one step further and generates binary off the translated method. No objects, no message sends, everything is an int and the generated code is blindingly fast. A runtime bytecode assembler. (Any half-serious application should be equipped with its own built-in dynamic C compiler. [And any half-serious OS will have one "on tap" for all applications, or the kernel, to use.])
In such a restricted context, beating the performance of an optimising C compiler isn't particularly difficult (aggressively optimising the Slang bytecode at the rate of several thousand Slang methods per second on modest hardware).
The mechanism is almost all in place already in Squeak (with the named primitives). All that's missing is the primitive lookup to notice that the method starts with <slang> (instead of primitive:module:) and to fire up the Slang compiler to generate the binary. Drop the entry point into the primitive table and off you go. Beyond that we only need a translator from Slang bytecode to binary, which is *way* simpler than a "real" jitter. A week, maybe two, tops.
(Didn't Johnson's mob at one time or other attempt something similar using some kind of horrendous RTL for expressing primitive methods? Claus does it in Smalltalk/X too, except he sticks real C source in his primitive methods and then spawns an external C compiler to build a .so. I suspect that makes self-modifying MOPs rather unrealistic in Smalltalk/X, and certainly less introspective given that their "source" format isn't formally structured.)
Objectifying the Interpreter is a great idea (I've thought for a long time that the Interpreter should really be generated out of the Context simulation code) but should be coded as carefully as the current Interpreter to ensure that the two universes (where in the MOP everything is an int and objects don't exist) remain compatible. If we had that then redefining the meaning of message send (or whatever) should not lose a single percentage point in performance compared to static (optimising) compilation of translated code (and in fact may well gain performance compared to overly-conservative C compilers).
Ian
PS: A language called Slang has existed for years, since long before Squeak came along. Stealing the name was maybe not very polite.
Hi Ian,
The important thing that CLOS (e.g.) did was to make the additions to the models compilable so that metachanges could be as efficient as the kernel. We kind of have this with SLANG, but the whole apparatus could be much much cleaner.
A Jitter in Smalltalk would do the job.
I think two jitters would be simpler (and do a much better job: the right tool at the right level of abstraction for the particular problem in hand).
... ... ...
Is this a hint that we are going to have 2 Jitter plugins for the 3.3 VM? ;-)
The mechanism is almost all in place already in Squeak (with the named primitives). All that's missing is the primitive lookup to notice that the method starts with <slang> (instead of primitive:module:) and to fire up the Slang compiler to generate the binary. Drop the entry point into the primitive table and off you go. Beyond that we only need a translator from Slang bytecode to binary, which is *way* simpler than a "real" jitter. A week, maybe two, tops.
Please go for it, Ian. I beg you. :-)
After I split the traditional 'VM' into 'InterpreterPlugin', 'ObjectMemoryPlugin' and 'PrimitivesPlugin' I lost somewhere around 5% in performance.
I am desperate for some faster cycles, hoping to get them from Anthony and Scott's optimisations and your Jitters.
Cheers,
PhiHo.
I wrote:
After I split the traditional 'VM' into 'InterpreterPlugin', 'ObjectMemoryPlugin' and 'PrimitivesPlugin' I lost somewhere around 5% in performance.
I am desperate for some faster cycles, hoping to get them from Anthony and Scott's optimisations and your Jitters.
I hope that Anthony, Scott and especially Ian don't take what I wrote to imply that all of their efforts just added up to a 5% gain.
I think it is obvious but, well, better safe than sorry. :-)
Cheers,
PhiHo.
On Monday 08 July 2002 23:31, Ian Piumarta wrote:
A Jitter in Smalltalk would do the job.
I think two jitters would be simpler (and do a much better job: the right tool at the right level of abstraction for the particular problem in hand).
The first is just the regular Deutschian dynamic translation for Smalltalk (with tagged pointers, message sends, GC, and all the other stuff that makes the job tricky). This one isn't very interesting in a MOP context.
The second is intended only for MOP methods. [...]
I don't get it - why isn't full Smalltalk the right level of abstraction for MOP methods? Yes, it is extremely tricky to bootstrap. But having a full Jitter at hand I would use it for everything.
Of course, I am supposing we have Self-style optimizations such as inlining and customized compilation. If we don't then a Slang-like MOP compiler as you suggested is the only practical option. But look at all the excitement that the StrongTalk release has generated - surely any new design would include everything we have learned?
-- Jecel
FYI:
SmallScript has array-slicing built into the language as a user-definable message form. It also supports interior pointers, to allow one object's structure to refer to some aspect of another object's structure.
In SmallScript you can just write:
Eval [
    |a| := {6,5,4,3,2,1,0}.
    "" whatever you want here

    "" later pull out a slice
    |s| := a[2:4].

    "" update a slice
    a[3:5] := 9.
]
You can directly define your own array-slice methods on any class, for any purpose, scoped to any namespace, you desire.
This includes creating multi-dimensional arrays, etc.
By default, using the intrinsic message aliasing services, the #[<>:<>]() array-slice form is mapped to copyFrom:to:.
Method [ [a:b]() "slice body here" ]
Method [ [a:b](setValue) "slice body here" ]
Method [ [dim1][dim2][dim3A:dim3B]() "slice body here" ]
Method [ [dim1][dim2][dim3A:dim3B](setValue) "slice body here" ]
You can combine this with the var-arg mechanism to provide arbitrary indices, etc., which enables you to write one method and have it handle many cases.
SmallScript also supports optional typing, multimethods, and value-type structs allowing you to create real typed c-struct arrays like arrays of floats, etc.
Class name: Foo fields: struct Float64 m[10].
Eval [ |f| := Foo new. f.m[2:7] := Float.pi. ]
-- Dave S. [SmallScript Corp]
SmallScript for the AOS & .NET Platforms David.Simmons@SmallScript.com | http://www.smallscript.org
On Tuesday 09 July 2002 04:33, David Simmons wrote:
[lots of neat slice examples in SmallScript]
That's great! It looks far better than what I had proposed, but isn't as "objectified". In a scripting environment (what I call a "blueprint system") syntax is a major consideration and you did a really nice job.
My own interest is in what I call "living systems" (which have persistent, hand crafted objects) and so I don't worry about syntax too much - the programming environment can hide a lot of the ugliness.
Cheers, -- Jecel
-----Original Message----- From: squeak-dev-admin@lists.squeakfoundation.org [mailto:squeak-dev-admin@lists.squeakfoundation.org] On Behalf Of Jecel Assumpcao Jr Sent: Tuesday, July 09, 2002 12:35 PM To: squeak-dev@lists.squeakfoundation.org Subject: slices (was: objectification)
On Tuesday 09 July 2002 04:33, David Simmons wrote:
[lots of neat slice examples in SmallScript]
That's great! It looks far better than what I had proposed, but isn't as "objectified". In a scripting environment (what I call a "blueprint system") syntax is a major consideration and you did a really nice job.
My own interest is in what I call "living systems" (which have persistent, hand crafted objects) and so I don't worry about syntax too much - the programming environment can hide a lot of the ugliness.
Ok. You've certainly piqued my curiosity. Let's establish a little common ground for talking about slice features in scripting applications here...
a) Are you familiar with the (Smalltalk-style-variant) F-Script [http://www.fscript.org/] on the MacOS?
b) Python (or) Ruby slices? ===
I thought/think that the SmallScript slice form is fully objectified, although I probably presented too little to make the point clear.
For slices to be objectified (to me) means that:
1. The slice operations need to be an ordinary message form. Where the dimensions and parameters can have any designed arity (arg-count) just like keyword-messages. Thus any class can define its own meaning for the slice message.
2. The language allows for var-arg (aka optional-args). So that we can define a generic slice message that is the "default" handler for all higher-arity (higher-arg) forms. But, since it is only a default, we can override it with explicit forms where we want to. This allows us to have just one method handle everything for a given class if we want to.
3. We have mixins/interfaces so we can define all the array-slice message behavior/forms in a single mixin with "concrete" methods that implement all the behavior. This allows us to mix that implementation into any class we want to support the array-slice protocol -- without needing to write any methods on that class.
4. We can define new classes at will. So we are free to define an <ArraySlice> class that represents the result of sending an array-slice message to a given class that chooses to construct one and return it.
5. We allow optional typing with multi-method overloading, which means that we can specialize messages with the "same" name by writing versions of them which differ "only" in their argument types. Thus we can write array-slice methods that accept other slices, or values, etc.
Class name: ArraySlice extends: IndexedCollection { ... "presume we have written appropriate protocol" }
Method behavior: IndexedCollection [ [a:b]() ^ArraySlice on: self start: a end: b ]
Method behavior: IndexedCollection [<'cloning'> [<ArraySlice>slice]() ^ArraySlice on: self start: slice.start end: slice.end ]
Eval [ |slice| := {3,4,5}[2:3].
|slice2| := someCollection[slice].
"or equivalently, since #::[<>]() is aliased to #at:" slice2 := someCollection at: slice. ] ==================
So, now the question.
Does this cover the "objectification" you were looking for?
Making slice facilities useful in SmallScript is important. Understanding what you are looking for helps.
P.S., SmallScript's selector namespace modularization allows more than one slice system to coexist transparently in a single running program/image.
-- Dave S. [SmallScript Corp]
SmallScript for the AOS & .NET Platforms David.Simmons@SmallScript.com | http://www.smallscript.org
On Wednesday 10 July 2002 04:51, David Simmons wrote:
Ok. You've certainly piqued my curiosity. Let's establish a little common ground for talking about slice features in scripting applications here...
a) Are you familiar with the (Smalltalk-style-variant) F-Script [http://www.fscript.org/] on the MacOS?
No.... wait... ok, the @ thing is exactly what I was talking about. I might want to send a message to all elements of a slice (color: 'blue', for example) and F-script can do this and far more.
One difference between my idea for slices and what they did is that I wanted to keep track of an object's origin like this
a := #(1 2 3 4).
b := #(5 6 7 8).
|s| := a[2:3],b[2:3]. "concatenation of two slices"
s := 9. "now a is #(1 9 9 4) and b is #(5 9 9 8)"
Perhaps this is overkill?
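Jecel's concatenated slice can be sketched in Python (my sketch, names hypothetical): the composite remembers each element's origin, so a single write updates both source arrays.

```python
class Slice:
    """A slice that records (source, index) pairs, so concatenated
    slices keep track of where each cell came from."""
    def __init__(self, source, start, stop):     # 1-based, inclusive
        self.cells = [(source, i) for i in range(start - 1, stop)]
    def __add__(self, other):                    # concatenation: a[2:3],b[2:3]
        joined = Slice.__new__(Slice)
        joined.cells = self.cells + other.cells
        return joined
    def put(self, value):
        for source, i in self.cells:             # write through to each origin
            source[i] = value

a = [1, 2, 3, 4]
b = [5, 6, 7, 8]
s = Slice(a, 2, 3) + Slice(b, 2, 3)
s.put(9)
assert a == [1, 9, 9, 4] and b == [5, 9, 9, 8]
```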
b) Python (or) Ruby slices?
Nope.... ok, Python looks like Lua in this regard.... Ruby's slice! to delete things is interesting. A little better than ZX81 Basic, of course, but F-Script is more like what I was looking for.
I thought/think that the SmallScript slice form is fully objectified. Although probably presented too little to clarify the point.
For slices to be objectified (to me) means that:
- The slice operations need to be an ordinary message form. Where
the dimensions and parameters can have any designed arity (arg-count) just like keyword-messages. Thus any class can define its own meaning for the slice message.
Yes, this is what you showed in your previous message.
- The language allows for var-arg (aka optional-args). So that we
can define a generic slice message that is the "default" handler for all higher-arity (higher-arg) forms. But, since it is only a default, we can override it with explicit forms where we want to. This allows us to have just one method handle everything for a given class if we want to.
Ok, lots of people hate #value:value:value: as well ;-)
- We have mixins/interfaces so we can define all the array-slice
message behavior/forms in a single mixin with "concrete" methods that implement all the behavior. This allows us to mix that implementation into any class we want to support the array-slice protocol -- without needing to write any methods on that class.
Great!
- We can define new classes at will. So we are free to define an
<ArraySlice> class that represents the result of sending an array-slice message to a given class that chooses to construct one and return it.
This was what I meant by "objectified". Since you only defined methods but not classes in your previous examples I thought you didn't have this capability.
- We allow optional typing with multi-method overloading. Which
means that we can specialize messages with the "same" name by writing versions of them which differ "only" in their argument types. Thus we can write array-slice methods that accept other slices, or values, etc.
There are some people trying to build a prototype-based language (slightly inspired by Cecil) with this as their central concept:
http://tunes.org/~eihrul/pmd.pdf
Class name: ArraySlice extends: IndexedCollection { ... "presume we have written appropriate protocol" }
Method behavior: IndexedCollection [ [a:b]() ^ArraySlice on: self start: a end: b ]
Right, if you can define a method for the notation there is no reason why the method can't create and return a new object. Certainly if my idea was possible in regular Smalltalk it would be even simpler in SmallScript. I simply didn't think that this was the style you wanted.
Method behavior: IndexedCollection [<'cloning'> [<ArraySlice>slice]() ^ArraySlice on: self start: slice.start end: slice.end ]
Eh? Ok, this makes sense unless you come from an APL (or F-script, it seems) background. In that case you might want instead
^ArraySlice on: self withIndexes: slice elements
Eval [
    |slice| := {3,4,5}[2:3].
    |slice2| := someCollection[slice].
    "or equivalently, since #::[<>]() is aliased to #at:"
    slice2 := someCollection at: slice.
]
This would be the same as someCollection[2:3], right? My idea would result in someCollection[4,5]. I am not sure which is less confusing.
Does this cover the "objectification" you were looking for?
Making slice facilities useful in SmallScript is important. Understanding what you are looking for helps.
By defining classes you can certainly do all that I imagined. The slice objects can also act as proxies and resend messages to all their elements, which is one thing that I want to do a lot.
My interest is exposing this at the GUI level, so a slice certainly must be an object if it is to have a visual representation. Then you can create a slice on all the even-numbered headers in a text and send the message #bold to them. Oh, I know that the "right" way to do this is to structure the objects so there would be one style object representing all even-numbered headers, and then we could send #bold to it. But in practice, no matter how we organize things, we will need to talk about collections of things that don't fit into our plans. So I want slices.
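The "slice as proxy that resends messages to all its elements" idea can be sketched like this (my sketch, names hypothetical):

```python
class BroadcastSlice:
    """A proxy slice: any message sent to it is re-sent to every element."""
    def __init__(self, elements):
        self.elements = elements
    def __getattr__(self, name):
        def broadcast(*args, **kwargs):
            return [getattr(e, name)(*args, **kwargs) for e in self.elements]
        return broadcast

class Header:
    def __init__(self):
        self.is_bold = False
    def bold(self):
        self.is_bold = True

headers = [Header() for _ in range(6)]
# "create a slice on all even-numbered headers ... then send #bold to them"
# (even-numbered counting from 1, i.e. Python indices 1, 3, 5)
BroadcastSlice(headers[1::2]).bold()
assert [h.is_bold for h in headers] == [False, True, False, True, False, True]
```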
P.S., SmallScript's selector namespace modularization allows more than one slice system to coexist transparently in a single running program/image.
I just looked at the squeak modules list archive and it seems that this kind of thing will have to be solved here as well.
-- Jecel
On Thursday, July 4, 2002, at 02:23 PM, Alan Kay wrote:
It must be someone else you are thinking of. I've never taught children anything but OOP. I don't think adults or children who have *never* programmed are challenged in the least by OOP.
Thanks, Alan -- I did misinterpret you. My apologies.
Well, I think this is a confusion with "objects" and some of today's "object-oriented systems".
That's probably where I stumbled -- it's not the "objects" but the current, common "systems" that are at the wrong level for an introductory University course. Thanks!
For example, I am astounded that folks who teach Squeak in college
haven't done a lot more to make an introductory environment that gets beginners quickly into the many media objects in Squeak. This would be analogous to what we did with the etoys for children, but with more range.
I think the problems are skills, range, and time. It's rare that one finds in the same person(s) the skills at developing classes/curricula, understanding novice programmers and the learning issues, AND implementing introductory programming environments. And even if one can assemble a team with the skills to do all of this, it's a large effort and takes a lot of time. It doesn't HAVE to be a huge effort, but I think our design processes for educational software are young yet.
Mark
Mark --
At 11:01 AM -0400 7/5/02, Mark Guzdial wrote:
These are very good points all, but I think I'm still unconvinced as to their necessity. It seems like this could be part of the training and dues for new grad students -- that is, to learn how to take care of and improve the programming environments, etc. This was a tradition in many of the 60s ARPA projects, especially at MIT, Stanford, CMU (then Carnegie Tech) and Berkeley. It still seems like a good idea, at least for special environments that reflect the particular interests and directions of teaching programming, etc.

In a similar vein, I'm on the "Engineer of 2020" committee of the NRC/NAE, and I think one of the recommendations we are going to make (I'm pushing hard for this) about training for young engineers is to go through a course of study which includes "3rd World Design" and/or "appropriate technology". The idea here is to learn how to do "human scale" engineering that really provides leverage, but avoids "superhightech" if it isn't necessary. An "Aladdin" kerosene lamp is such an example: the light of an 85 watt bulb from the simplest of materials. It could have been made by hand back in Greek and Roman times (but wasn't, because they didn't really understand how to make light from simple materials). No powerplant, no steel mills, etc., but great light! Another is E.F. Schumacher's solution to plowing efficiently with tiny amounts of power and without compacting the soil. A lot of us think that children need something similar to reappear in their toys: where the parts are sensible, instead of millions of transistors locked up in sealed plastic packages.

(Back to programming environments) I think it's time for departments who are interested in teaching both majors and nonmajors to make some of their own tools and environments for doing so. *And* to have this be part of the training of the majors, especially grad students.
Cheers,
Alan
---------
Mark