Smalltalk = strongly typed?

lex at cc.gatech.edu
Fri Oct 15 20:24:35 UTC 2004



> [*1] Or at least I haven't seen any such meaningful interpretation.


There are proofs that static type checkers achieve some sort of type
safety property, but they rely on carefully defining what counts as a
type error.  IMHO, the definitions are overly strict.  If you say that
Java's checker is type safe, you must define casting errors as not being
type errors (as well as the obscure runtime error that Java's covariant
array types can produce).  If you say that ML's checker is type safe,
then you have to say that it is not a type error if your code expects
one variant of a variant record type but gets a different one, and the
program crashes.
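
To make that concrete, here is a minimal Java sketch (class and method
names are just made up for the example): both problem lines get past the
static checker, and both blow up at runtime.

    // Both assignments below satisfy Java's static type checker, yet both
    // fail at runtime -- which sure looks like a type error to me.
    public class NotCountedAsTypeErrors {
        public static void main(String[] args) {
            // Covariant arrays: statically fine to treat String[] as Object[]...
            Object[] objects = new String[1];
            try {
                objects[0] = Integer.valueOf(42);   // ...but this throws ArrayStoreException
            } catch (ArrayStoreException e) {
                System.out.println("array store failed at runtime: " + e);
            }

            // Downcasts are only checked at runtime, too.
            Object o = "a string";
            try {
                Integer i = (Integer) o;            // throws ClassCastException
                System.out.println(i);
            } catch (ClassCastException e) {
                System.out.println("cast failed at runtime: " + e);
            }
        }
    }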

People do this, but I don't find it very meaningful.  Both of these
cases look like type errors to me.  The game boils down to proving that
your type checker finds 100% of the errors that your type checker finds.
 Well duh!

For anyone keeping score, I don't think Java has a proof that its type
checker guarantees type safety, because I don't think full Java even has
a formal *semantics*.  ML, on the other hand, is famous for its proven
type safety, and I'd guess the proofs have been done for many of its
sister functional languages.  As well, there are tons of proofs of type
safety for made-up languages based on lambda calculus.  I am not up on
these latter ones; some of them may well have real type safety
guarantees, where "type safety" means something sensible, but I would be
surprised if anyone could maintain that property once it was built into
a full, practical language.

The problem is, once you start dabbling with higher-order operations
like message sending and blocks (i.e., lambdas), even something as
simple as type analysis becomes very hard.  It's hard even to connect
the dots and figure out which parts of the program are directly related
to each other, and how can you do type analysis (or any other analysis)
if you can't even do that first step?
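
Here is a rough sketch of what I mean, in Java rather than Smalltalk
(the names are invented for the example): once a block of code is just a
value that gets passed around, the call site alone doesn't tell you which
code will run, so an analysis has to trace values through the program
before it can say anything useful about types.

    import java.util.function.Function;

    public class ConnectTheDots {
        // All the checker sees here is "some function from Object to Object";
        // which code actually runs depends entirely on what was passed in.
        static Object perform(Function<Object, Object> block, Object argument) {
            return block.apply(argument);
        }

        public static void main(String[] args) {
            Function<Object, Object> block;
            if (args.length > 0) {
                block = x -> x.toString().length();  // answers an Integer
            } else {
                block = x -> x;                      // answers the argument itself
            }
            // Before any type analysis can say what this returns, it first has
            // to figure out which block flows into 'perform' -- the
            // connect-the-dots step.
            System.out.println(perform(block, "hello"));
        }
    }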

Now, don't get me wrong: you may as well do as much checking as is
practical, given a particular language and toolset.  The gray area is
when you start *rejecting* features in order to do more type checks.
The arguments are fierce, but IMHO it should be the other way around:
insist on a good language and good tools, and reject any type-checking
strategies that make them a pain to use.


-Lex


PS -- and why check it all statically, anyway?  It makes perfect sense
to investigate type checks that happen at runtime, like Common Lisp
does.
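
For instance, here is roughly what I have in mind, written as a
hand-rolled check in Java (again, the names are just illustrative);
Common Lisp's CHECK-TYPE packages this sort of thing up for you.

    public class RuntimeChecked {
        // A runtime type check: nothing is verified statically, but the
        // failure is caught immediately at the boundary, with a clear error.
        static int half(Object n) {
            if (!(n instanceof Integer)) {
                throw new IllegalArgumentException("expected an Integer, got: " + n);
            }
            return ((Integer) n) / 2;
        }

        public static void main(String[] args) {
            System.out.println(half(84));     // prints 42
            System.out.println(half("84"));   // fails the runtime check
        }
    }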


