SPrevayler

Stephen Pair spair at acm.org
Fri Feb 28 16:02:03 UTC 2003


Avi Bryant wrote:
> I've been watching the work on SPrevayler with a cocked 
> eyebrow, because I have to admit that I find it an extremely 
> clunky approach, at least in its Java incarnation.  Stephen's 
> suggestion to combine it with REPLServer is interesting, but 
> I think goes in the wrong direction - we don't want to be 
> logging doIts (isn't that what the .changes file does 
> anyway?), and we don't really want to be logging commands, we 
> want to be logging message sends.

[snip]

> At that point you'd have something halfway between Prevayler 
> and an OODB - no transactions, still a single bottleneck for 
> mutation, but much more flexibility in what your commands can 
> look like and much less overhead in implementing them.
> 
> Marco, Stephen, thoughts?

My comments were based solely on having read the docs (I haven't had the
time to actually load the code).  So, I'm sure that your critique of the
implementation is completely valid.

What I was getting at was a deployment scenario that would be easily
manageable.  I've found (with my Chango work) that you need a very clean
separation between the objects that comprise your "application" and
those that are your "data" (aka domain).  The temporal evolution
requirements of these two kinds of objects are very different (note:
this discussion is very much related to the one last weekend on
imperative vs declarative program construction).

Now, having said this, I do realize that in an ideal world, we'd have a
sufficiently robust system that would be capable of accommodating both
sets of needs in a single unified approach.  But, I'm speaking about the
here and now, not the ideal world of the future.

So, if I wanted to deploy a real-world system using a Prevayler-style
approach (and by the way, I've seen commercial Smalltalk systems that
use this very approach), this is probably how I would do it:

The transactional requirements for an application are as follows: we
need to start from some well-known base, the base VM and image, and we
need to apply transitions that add our application code to that base
(loading change sets, SARs, DVS packages, etc).  In this way, we have a
controlled approach to evolving from a base Squeak image to an image
that is designed to host our application (the application may or may not
need to include the meta-model for our data objects).  Sounds very much
like a traditional build process, right?

The transactional requirements for our data are as follows: we need to
start from some well-known base; we need to install the meta-model (aka
"schema") for our data objects; we need to seed our database with
initial state; we need to capture any instructions that transform our
data; and we need to capture any instructions that transform our data's
meta-model.  We need to log all such instructions to disk for replay in
the event that the system goes down.  On startup, we must first replay
all instructions logged since the last checkpoint, and probably
checkpoint (snapshot) the system.  We will also want to periodically
checkpoint the system.  Sounds very much like a traditional RDBMS,
right?
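To make the log-then-replay-then-checkpoint cycle concrete, here's a
minimal sketch (in Python rather than Squeak, purely for brevity).  The
command vocabulary ("put"/"delete"), class names, and file names are all
invented for illustration; the point is just the shape of the protocol:
log every mutating instruction before applying it, replay the log on
startup, and truncate it at each checkpoint.

```python
import json
import os

class DataImage:
    """Holds all domain state; mutated only via logged commands."""
    def __init__(self):
        self.objects = {}

    def execute(self, command):
        op, args = command["op"], command["args"]
        if op == "put":
            self.objects[args["key"]] = args["value"]
        elif op == "delete":
            self.objects.pop(args["key"], None)

class Prevalence:
    def __init__(self, image, log_path, snapshot_path):
        self.image = image
        self.log_path = log_path
        self.snapshot_path = snapshot_path
        # On startup: restore the last checkpoint, then replay every
        # instruction logged since that checkpoint.
        if os.path.exists(snapshot_path):
            with open(snapshot_path) as f:
                image.objects = json.load(f)
        if os.path.exists(log_path):
            with open(log_path) as f:
                for line in f:
                    image.execute(json.loads(line))
        self.log = open(log_path, "a")

    def apply(self, command):
        # Log first, then mutate -- the log is the source of truth.
        self.log.write(json.dumps(command) + "\n")
        self.log.flush()
        self.image.execute(command)

    def checkpoint(self):
        # Snapshot current state, then truncate the now-redundant log.
        with open(self.snapshot_path, "w") as f:
            json.dump(self.image.objects, f)
        self.log.close()
        open(self.log_path, "w").close()
        self.log = open(self.log_path, "a")
```

A restart is then just constructing a new Prevalence over the same two
files: the snapshot gives you the last checkpoint and the log replays
everything since.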

At runtime, I would have one Squeak process running my application, and
another hosting my data.  Communications between the two would happen
through something like REPLServer that would essentially enable me to
do anything to the data image (including applying changes to the
meta-model).  These commands are the ones that would be logged (and your
data image would not refer to the clock or any other external resource
directly, instead it would receive any such information through the REPL
interface).
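The clock rule above is the crucial one for replayability, so here's a
small Python sketch of it (the OrderBook receiver and function names are
invented for illustration): the data-image side never samples the clock;
the application side samples it once and ships the value inside the
command's arguments, so replaying the logged command reproduces the
identical state.

```python
import datetime

class OrderBook:
    """Lives in the data image; never reads the system clock."""
    def __init__(self):
        self.orders = []

    def place_order(self, item, timestamp):
        # 'timestamp' arrives as a command argument, so replaying the
        # logged command is fully deterministic.
        self.orders.append({"item": item, "placed_at": timestamp})

def send_place_order(book, item):
    # Application-image side: sample the clock *here*, outside the data
    # image, and make the value part of the command to be logged.
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    command = {"selector": "place_order",
               "args": {"item": item, "timestamp": now}}
    # (In the real scenario the command would be logged and sent over
    # the REPL interface; here we just invoke the receiver directly.)
    book.place_order(**command["args"])
    return command
```

The same treatment applies to any external resource (random numbers,
file contents, network replies): capture the value on the application
side and pass it in as an argument.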

In this scenario, I'm separating the application and data objects (and
data meta-model) into two distinct images based on their differing
requirements in terms of managing temporal evolution.  To deliver an
upgrade of this system, I would deliver a complete new build of the
application image, along with an upgrade script that applies a sequence
of commands (through the REPL interface) to the already deployed "data"
image.  That upgrade script might apply changes to the meta-model,
upgrade existing data objects to a new layout, etc.
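An upgrade script in this scheme is just another sequence of commands
sent through the same logged channel.  A Python sketch, with an entirely
hypothetical command vocabulary ("add_field" changes the meta-model,
"migrate" rewrites existing instances to the new layout):

```python
# Toy stand-ins for the data image's meta-model and instances.
schema = {"Person": ["name"]}
people = [{"name": "Ada"}, {"name": "Alan"}]

# The upgrade script shipped alongside a new application image: first
# change the meta-model, then migrate the already-deployed data.
upgrade_script = [
    {"op": "add_field", "args": {"class": "Person", "field": "email"}},
    {"op": "migrate",
     "args": {"class": "Person", "field": "email", "default": ""}},
]

def run_upgrade(script):
    for command in script:
        op, args = command["op"], command["args"]
        if op == "add_field":
            schema[args["class"]].append(args["field"])
        elif op == "migrate":
            for obj in people:
                obj.setdefault(args["field"], args["default"])
```

Because these commands flow through the same interface as ordinary
traffic, they get logged and replayed like everything else, so a crash
mid-upgrade is recoverable.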

On the application image side, in order to keep things more
"smalltalky", I would use remote proxy objects that stand in for real
domain objects and dispatch REPLServer commands to the data image upon
receiving messages.  On top of this basic interface, I would begin to
evolve a more sophisticated ORB-like capability that manages remote
object references, etc.  In fact, this dual-image approach would look
very much like many of the GemStone deployment scenarios (where you have
VW or VA connected to a GS server).
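The proxy idea can be sketched in a few lines.  In Squeak you'd likely
hook doesNotUnderstand: to reify the message send; here Python's
__getattr__ plays that role.  The RemoteProxy class and the dispatch
function are invented names, and the "wire" is faked:

```python
class RemoteProxy:
    """Stands in for a domain object living in the data image.  Any
    message sent to it is reified as a command and handed to a dispatch
    function (a REPLServer connection, in the scenario above)."""

    def __init__(self, object_id, dispatch):
        self._object_id = object_id
        self._dispatch = dispatch

    def __getattr__(self, selector):
        # Called only for unknown attributes, i.e. every "message send"
        # to the proxy.  Package receiver + selector + args as a command.
        def forward(*args):
            return self._dispatch({"receiver": self._object_id,
                                   "selector": selector,
                                   "args": list(args)})
        return forward
```

With a real transport behind dispatch, application code can say
person.rename("Ada") without knowing the object lives in another image;
a fuller ORB layer would then take over minting object ids and resolving
returned references back into proxies.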

- Stephen



More information about the Squeak-dev mailing list