Monticello status

Stephane Ducasse ducasse at
Tue Apr 8 07:27:56 UTC 2003

> But it's precisely to be able to load code atomically that I want to do
> that analysis.  Right now what Monticello does is this:
> - load all the code into memory (but not compile it)
> - find the prerequisites of the code
> - see if those prerequisites either are already met by the image, or
> will be met by the loaded code
> - iff all the prerequisites can be met, compile all the code

I imagine that you need to:
	- 1 have the globals
	- 2 have the pools
	- 3 sort all the classes to get linearized inheritance chains,
	populating instance variables and class variables
	- 4 compile the methods
	- 5 run the class-side initialize methods
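The ordering in step 3 above can be sketched in a few lines. This is a hypothetical Python analogue, not Monticello's actual code: a depth-first walk that emits each class after its superclass, treating any superclass not in the package as already provided by the image.

```python
def linearize(class_defs):
    """Order class definitions so every superclass is created first.

    class_defs maps class name -> superclass name; a superclass that is
    not in class_defs is assumed to exist in the image already.
    """
    ordered, seen = [], set()

    def visit(name):
        if name in seen or name not in class_defs:
            return                      # already placed, or image-provided
        seen.add(name)
        visit(class_defs[name])         # create the superclass first
        ordered.append(name)

    for name in class_defs:
        visit(name)
    return ordered


# 'Object' comes from the image; A < Object, B < A, C < B.
print(linearize({"C": "B", "B": "A", "A": "Object"}))
# -> ['A', 'B', 'C']
```

With the classes linearized this way, each class definition only ever refers to instance variables and superclasses that already exist when it is compiled.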

I was wondering whether compiling in a separate environment would not be
the solution that avoids more analysis. I have to check how this was done
in Ginsu or ...

> Right now, however, instance variables aren't considered prerequisites to
> methods, even though they will not get compiled properly without those
> inst vars existing (and the bugs introduced by that can be really subtle
> and hard to find).  I don't want to find out halfway through loading a
> patch to a package that some of the method changes depend on inst var
> additions that somehow haven't made their way into the patch; I want to
> figure that out through static analysis before I compile a line of code.

I see. In Moose I represent instance-variable accesses, but this is heavy
and bloats the model.

And you could have the same problem with method invocations: you load a
method containing
	self x y zz bar

and bar does not exist!

So I have the impression that the trick is something like this
compilation in a separate environment.
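The static check being discussed can be sketched as follows. This is a hypothetical Python sketch, not Monticello's implementation: each incoming method declares the names it references (instance variables, sent selectors, globals), and before compiling anything we verify that every reference is satisfied either by the image or by the package itself.

```python
def unmet_prerequisites(methods, image_provides, package_provides):
    """methods: list of (selector, set of referenced names) pairs.

    Returns, per method, the referenced names (ivars, selectors,
    globals) that neither the image nor the package itself provides.
    """
    available = image_provides | package_provides
    return {selector: refs - available
            for selector, refs in methods
            if refs - available}


methods = [("foo", {"x", "bar"}), ("baz", {"x"})]
print(unmet_prerequisites(methods, image_provides={"x"},
                          package_provides=set()))
# -> {'foo': {'bar'}}
```

Here #bar is referenced but provided by nobody, so the load can be refused up front instead of being discovered halfway through compilation.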

> I would also prefer (but this is just aesthetic, perhaps) not to enforce a
> strict ordering like first load all classes, then all inst vars, then all
> methods, etc.  I would rather have (and currently use) the general rule that
> all prereqs must be loaded before a code element can be loaded, and
> recursively load these prereqs if necessary.

But the load is one phase, and the treatment of the loaded entities is
another one. So the order could be irrelevant; still, the checks have to
be done.
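The two-phase separation above can be sketched like this (hypothetical names, not Monticello's API): phase one has already read all definitions into memory in whatever order they arrived; phase two checks prerequisites over the whole set at once, and only then touches the image.

```python
def load_atomically(definitions, image_provides, install):
    """Check prerequisites over the whole set of in-memory definitions,
    then install only if everything can be satisfied; a real loader
    would install in dependency order."""
    provided = image_provides | {d["provides"] for d in definitions}
    missing = [(d["provides"], sorted(d["requires"] - provided))
               for d in definitions if d["requires"] - provided]
    if missing:
        raise ValueError(f"unmet prerequisites: {missing}")
    for d in definitions:               # only now do we touch the image
        install(d)


installed = []
load_atomically(
    [{"provides": "B", "requires": {"A"}},  # arrival order is irrelevant
     {"provides": "A", "requires": set()}],
    image_provides=set(),
    install=lambda d: installed.append(d["provides"]))
print(sorted(installed))  # -> ['A', 'B']
```

If any requirement is unmet, the ValueError is raised before a single definition reaches the image, which is exactly the atomicity being asked for.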

>> About property annotations: in our system we can add any new information
>> using this approach: the loader catches errors, and when a message is not
>> understood it stores the values as tagged properties; this way the format
>> is open to extensions.
> Yes, but you're not actually going from the code model to live code,
> right?  Dealing with them at the model level is easy; where do I store
> these annotations in the Behaviors and CompiledMethods themselves?

Good question. I think that Joseph's trick was to keep in the dead
entities (MethodDefinition, ClassDefinition, ...) the information
extracted from the load file, and then to put back as much information as
possible into the living entities; but since the tools were working on the
dead entities (or via the dead entities (MethodDefinition) to the living
ones (CompiledMethod)), he could control exactly where the information
was kept.
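A rough Python analogue of that trick (hypothetical class and field names, not Joseph's actual code): the dead entity maps the fields it knows to real attributes, and anything the loader does not understand lands in an open properties dictionary, so the file format stays extensible.

```python
class MethodDefinition:
    """Dead entity: known fields become attributes; anything the loader
    does not understand is kept as a tagged property."""
    KNOWN = {"selector", "source", "category"}

    def __init__(self, **fields):
        self.properties = {}            # tagged extensions end up here
        for key, value in fields.items():
            if key in self.KNOWN:
                setattr(self, key, value)
            else:                       # "message not understood": keep it
                self.properties[key] = value


d = MethodDefinition(selector="foo", source="foo ^ 1", author="avi")
print(d.selector, d.properties)  # -> foo {'author': 'avi'}
```

Because the tools only ever go through the dead entity, it stays in full control of where each piece of information ends up in the living CompiledMethod.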

What is really interesting about having this distinction (even though
some people will certainly be afraid because we could have more entities
in memory) is that it makes clear the separation between tool-oriented
entities and run-time ones, and I guess that there are already some
classes that look like tool entities, such as MethodReference...

In VisualWorks, for the UI, they have MethodDefinition, which can be
browsed by the UI. So this was a clear move in that direction.


> Avi
Prof. Dr. Stéphane DUCASSE
  "if you knew today was your last day on earth, what would you do
different? ... especially if, by doing something different, today might
not be your last day on earth" Calvin&Hobbes

"The best way to predict the future is to invent it..." Alan Kay.

Open Source Smalltalks:
Free books for Universities at
Free Online Book at

More information about the Squeak-dev mailing list