Symbiotic relationship of declarative and imperative systems (was: A declarative model of Smalltalk)

Alan Kay Alan.Kay at squeakland.org
Mon Feb 24 15:53:59 UTC 2003


I hesitate to comment here -- because I don't have the energy to 
handle lots of replies -- but ...

We first have to ask ourselves about "meaning" and "interpretation", 
and whether they are really separate concepts. For example, what 
meaning does "a declarative spec" actually have without some 
interpreter (perhaps *us*) being brought to bear? In practical terms, 
how does anyone know whether a declarative spec is consistent and 
means what it is purported to mean? IOW, representations of *any* 
kind can be at odds with their purpose, and this is why 
they have to be debugged. This is just as true with "proofs" in 
mathematics. For example, Euler was an incredible guesser of theorems 
but an indifferent prover (not by his choice), so generations of PhD 
theses in math have been produced by grad students taking an Euler 
theorem, finding what was wrong with Euler's proof, and then finding 
a better proof!

I think the real issues have to do with temporal scope of changes and 
"fences" for metalevels. All languages' meanings can be changed by 
changing their interpretation systems; the question is when this can 
be done, and how easy it is to do. The whole point of classes in 
early Smalltalk was to have a more flexible type system precisely to 
extend the range of meanings that could be counted on by the 
programmer. This implies that there should be fences around such 
metastructures, and that it should not be easy to change these 
meanings willy-nilly at runtime. Some languages make this easy for 
programmers simply by not allowing such changes to be intermingled with 
ordinary programs. Smalltalk is reflective and so it needs more 
perspective and wisdom on the programmer's part to deal with the 
increased power of expression. I also think that the system should 
have many more fences that warn about metaeffects.
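
To make the fence point concrete: in Squeak, any expression can 
recompile a method while the image is running. A minimal sketch, 
assuming a hypothetical class Account with an accessor #balance 
(the class and method are illustrative, not from any real image):

  "Assumes a hypothetical class Account with an accessor #balance."
  Account compile: 'balance ^ 0'.
  "From here on, every Account in the image answers 0 -- the
   meaning of already-written programs just changed at runtime,
   and nothing fenced the change off from ordinary code."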

However, I don't see anything new here. It was pretty clear in the 
60s that a Church-Rosser language was very safe with regard to 
meaning. If we think of variables as being simple functions, then it 
is manifest that putting assignment into a language is tantamount to 
allowing a kind of function definition and redefinition willy-nilly at 
runtime. IOW, assignment is "meta". All of a sudden there are real 
problems with understanding meanings and effects. Some of the early 
work I did with OOP was to try to confine and tame assignment so it 
could be used more safely. Ed Ashcroft's work on LUCID (growing from 
Strachey's and Landin's explication of "what LISP means") provided a 
very nice way to do extremely safe and understandable 
assignment-style programming. You have something very pretty when you 
combine these two approaches.
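
To make the "assignment is meta" reading concrete: think of a 
variable as a zero-argument function, and of each assignment as a 
redefinition of that function in mid-run. Below is a minimal sketch 
in Squeak, followed by a LUCID-flavored alternative that uses 
Squeak's Generator class to stand in for LUCID's "x = 1 fby x + 1" 
(the names and the use of Generator are illustrative assumptions, 
not anything from LUCID itself; evaluate each snippet separately 
in a Workspace):

  "Each assignment silently redefines the zero-argument 'function' x."
  | x |
  x := 3.
  x := x + 1.   "x now means something else than it did a moment ago"

  "LUCID-style: define x once, as its entire history (a stream)."
  | xs |
  xs := Generator on: [:g | | x |
      x := 1.
      [true] whileTrue: [g yield: x. x := x + 1]].
  (1 to: 5) collect: [:i | xs next]   "=> #(1 2 3 4 5)"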

If you want to write a debugger, etc., in the very language you are 
programming in *and* want things to be safe, then you have to deal 
with fences for metalevels. But if you are also a real 
designer, then you will want to think of these areas as having 
different privileges and different constraints.
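
Squeak itself illustrates both halves of this: the running stack is 
reified as ordinary objects, which is precisely what lets a debugger 
be written in Smalltalk -- and precisely where a fence is wanted. A 
small illustration, evaluated in a Workspace:

  "The current activation record is an ordinary, inspectable object."
  thisContext explore.
  "So is the caller's frame -- open to a debugger, or to mischief."
  thisContext sender explore.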

The bottom line, to me at least, is that you want to be able to look 
at a program and have some sense of its meaning -- via what the 
program can tell you directly and indirectly. This is a kind of 
"algebraic" problem. However, one should not be misled into thinking 
a paper spec that uses lots of Greek letters is necessarily any more 
consistent or has any more meaning than a one-page interpreter that 
can be run as a program.
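
In that spirit, here is a toy version of the "one-page interpreter" 
idea -- a sketch far smaller than a real one, assuming expressions 
written as Squeak literal arrays (the representation is an 
illustrative choice, not a historical artifact):

  "A tiny expression interpreter: an expression is a number or an
   array of the form #(op left right) with op = #+ or #*."
  | eval |
  eval := nil.
  eval := [:e |
      e isNumber
          ifTrue: [e]
          ifFalse: [| a b |
              a := eval value: (e at: 2).
              b := eval value: (e at: 3).
              (e at: 1) = #+
                  ifTrue: [a + b]
                  ifFalse: [a * b]]].
  eval value: #(+ 1 (* 2 3))   "answers 7"

Running it is the consistency check: the spec means exactly what 
it does.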

Cheers,

Alan

