Symbiotic relationship of declarative and imperative systems (was: A declarative model of Smalltalk)

Stephen Pair spair at
Mon Feb 24 18:15:02 UTC 2003


Thank you for taking the time to write that reply.  Your perspective on
this topic is really valuable (in advancing my own thinking and, I'm
sure, that of many others on this list).  The rest of this email is
simply an attempt to regurgitate what you've just stated in the hope
that someone might point out any flaws in my understanding.

> I think the real issues have to do with temporal scope of changes and 
> "fences" for metalevels. 

Definitely.  There are and have been a lot of solutions that attempt to
bring some measure of control over the temporal evolution of a system.
A "declarative spec" is but one of those solutions.  Transactional
systems, Croquet's T-Time, and even the distinction between "code" and
"data" are others.

> We first have to ask ourselves about "meaning" and "interpretation", 
> and whether they are really separate concepts. For example, what 
> meaning does "a declarative spec" actually have without some 
> interpreter (perhaps *us*) being brought to bear?

In my recent efforts at updating my "declarative spec", my
"interpreter" is the base Squeak image (version 3.4) and the Squeak VM.
My declarative spec is the script that brings all of the code packages
into that base image.

But it's also interesting to note that the scope of objects covered by
this declarative spec does not include instances of my domain model.
And that is true even of the "declarative" languages, where you
typically have a database and a program that exist independently of one
another.  This is painfully clear whenever you have to deliver an
upgrade to a program that must migrate data in an existing database to
a new schema.  A system that could address this problem in the context
of both "code" and "data" would certainly be useful.

In my Chango VM and DB, I find that I have to separate objects that
"live on disk" and objects that "live in memory" because it's easiest to
manage the evolution of objects that live in memory (which are typically
metamodel objects) using a declarative program specification model while
managing the evolution of objects on disk (typically domain objects) is
easiest using a transactional model.  I briefly thought of managing even
metamodel objects using the transactional system, but quickly vanquished
the idea out of fear of the gut-wrenching changes to Squeak that would
be required and the realization that having a transaction open while I
write code would result in ridiculously long running transactions and
impose a severe burden on the transaction system. ;)

In a database system, the declarative spec might be the full transaction
logs that could be used to recreate the data.  If the logs were
sufficiently general (i.e., they had no direct schema knowledge), then
you could conceivably upgrade a database by simply creating a new db
with the new schema and then applying the logs.
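To make that idea concrete, here is a minimal sketch (in Python rather
than Smalltalk, purely for illustration; `apply_log` and the shapes of
the log and schema are hypothetical, not from any real database) of a
schema-agnostic log being replayed into a fresh database under a new
schema:

```python
def apply_log(db, log, schema):
    """Replay generic (table, record) log entries into db, projecting
    each record through whatever field list the current schema defines.
    The log itself carries no schema knowledge."""
    for table, record in log:
        row = {field: record.get(field) for field in schema[table]}
        db.setdefault(table, []).append(row)
    return db

# The log only says what happened, not how tables are laid out.
log = [
    ("people", {"name": "Ada", "city": "London"}),
    ("people", {"name": "Alan", "city": "Wilmslow"}),
]

old_schema = {"people": ["name"]}           # original schema
new_schema = {"people": ["name", "city"]}   # upgraded schema

old_db = apply_log({}, log, old_schema)
# "Upgrading" is just: create a fresh db with the new schema, replay.
new_db = apply_log({}, log, new_schema)
```

The same log produces a valid database under either schema, which is
exactly the property that makes replay-based upgrades possible.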

A declarative spec is nothing more (or less) than a sequence of
operations that transition a system from one state to another (not
unlike an atomic update in the database world).  And, when given some
beginning state, applying those operations will always yield the same
final state.
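The paragraph above can be sketched in a few lines (again in Python for
brevity; every name here is made up for illustration): a spec is just an
ordered list of operations, and replaying it against the same beginning
state always yields the same final state.

```python
def apply_spec(state, spec):
    """Apply a sequence of operations, in order, to a starting state.
    Each operation returns a new state rather than mutating in place."""
    for op in spec:
        state = op(state)
    return state

# A tiny "declarative spec": load a package, then bump the version.
spec = [
    lambda s: {**s, "packages": s["packages"] + ["Chango-DB"]},
    lambda s: {**s, "version": s["version"] + 1},
]

base = {"version": 34, "packages": []}
final1 = apply_spec(dict(base), spec)
final2 = apply_spec(dict(base), spec)
assert final1 == final2  # same beginning state, same final state
```

The determinism falls out of the fact that each operation is a pure
function of the state it receives, much like an atomic update replayed
from a database log.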

Regarding "fences" for metalevels, I take this to mean that you want
some measure of control over temporal evolution at the metalevel
boundaries where these fences reside.  That control might take a form
similar to a declarative model, a transactional model, or something
else entirely.

Given the limitations of today's systems, I do see a need for
supporting a declarative model of program specification.  However,
looking forward, I can also envision systems that provide much more
general and powerful means for managing the temporal evolution of a
system.

- Stephen

More information about the Squeak-dev mailing list