[squeak-dev] [Election] ...is soon upon us! Last day info

Nicolas Cellier nicolas.cellier.aka.nice at gmail.com
Thu Mar 11 19:36:10 UTC 2010


2010/3/11 keith <keith_hodges at yahoo.co.uk>:
>
>
> Yes, Keith, that's too difficult, because you have to know that patchA
> works in Pharo 11231, Pharo 11232 etc...
> Then you also have to know if patchB works in Pharo 11231+patchA,
> Pharo 11232+patchB etc...
> Try the combinations of trunk patches and come back with real facts.
>
> Hi Nicolas,
> Why are you trying to patch a moving target, with fixed patches. Of course
> if the target moves the patches have to move with the target, in the lower
> layers. Higher architectural levels should be isolated from such changes.
> I am saying that you need to patch a fixed target with fixed patches in a
> prescribed order if an order is needed, spend time getting it working, and
> release a new fixed target. The result will be release of images at fixed
> points A and B, with a prescribed detailed process of exactly how you got
> from release A to release B. That is a useful and desirable result because
> it captures the knowledge as well as producing a new image. Capturing the
> knowledge is the important enabler for cross fork working, and for
> maintaining the map of load-able packages across forks.

Agreed.
That's more or less what happens in trunk.
It goes from fixed point to fixed point (each MCM is a fixed point as
Levente said).
The difference is about documenting the changes. It's a mix of
squeak-dev mail exchanges and MC package comments.
That's what makes development in trunk very efficient (e.g.
compared to Pharo, where you first open an issue, then publish in the
inbox and wait for peer review).
But of course it's harder for cousin forks. I agree, other forks are
forced to pay attention to Squeak trunk development, which has a cost.
But if the goal is to cherry-pick, the changes have to be re-analysed
and re-done anyway, since fixed points in lower levels probably
won't be compatible.
Unless the lower levels are shared (like composing different images
combination from a common kernel), but that's not the current model.

> Secondly you divide the target vertically into modules and subsystems, so
> that the problem is split up among different teams/individual experts.
> Thirdly you divide the target horizontally into architectural layers, and
> within each layer you can divide the modifications into a load order and
> group them according their function for clarity and knowledge capture.
> Fourthly you release different versions of the image for different purposes,
> e.g. "base-dev" as the starting point for most kernel innovators such as
> yourself, "~kph/stable" for the integration, "~nice/stable" for a
> demonstration of your chosen completed innovations, and
> "~whoever/unstable" for the rest.
> In our old bob process this was organised as a set of named tasks, in an
> order calculated from implicit and explicit dependencies. So the essential
> fixes task was loaded first, followed by package upgrades etc., and at the
> end deprecated methods were removed, the version number set, and the image
> cleanedUp and saved.
> In my new process, "grow", we forgo dependencies and instead use a
> semi-rigid framework which gives us a fixed load order.
> We split the image vertically by defining slices of Kernel (e.g. Mutex is a
> discrete slice), and System is divided into packages.
> Horizontally we have up to 9 layers, from 0 to 9, in which 0-3 would be for
> basic core/kernel image use, and for example, seaside would be loaded at
> somewhere around level 4/5.
> Within each layer (particularly the lower layers) there are 12/13 ordered
> load phases. So at layer 0, the lowest level in the kernel, the first phase
> is the bootstrap phase.
> The bootstrap always loads the essential code loading tools, and absolutely
> essential fixes in a primitive manner in the image commandline startUp
> script. This is essential because the code loading tools can never reliably
> load the code loading tools.

Maybe a clone of the loading tools could.

> The phases are applied in alphabetical order, in layer 0 first, then layer 1
> then layer 2 etc.
> 1 #bootstrap - load code loading tools (layer 0 only)
> 2 #fixes - fix actual broken stuff
> 3 #installs - install of packaged code slices
> 4 #moves - refactorings
> 5 #newapi - additions to the base api
> 6 #packages - install of packaged code packages (e.g. MCZ)
> 7 #patches - patches to packages above for this context.
> 8 #parity - fixes providing code parity between forks, that might be needed
> by the package.
> 9 #preferences - code which sets things up the way you like it
> 10 #tests - load tests from a package
> 11 #tidy - (re)organise things to your preference
> 12 #unload - remove things
> 13 #zed - a finalization phase.
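If I read the ordering rule right, it amounts to a lexicographic sort on
(layer, phase, item): layer 0 first, phases alphabetical within a layer,
items numbered within a phase. A sketch in Python, with invented task
names (none of this is Keith's actual tooling):

```python
# Sketch of the "grow" load-order rule: tasks run layer by layer
# (0..9), phases alphabetically within a layer, items in numeric
# order within a phase. Task names are purely illustrative.

def load_order(tasks):
    """Sort (layer, phase, item, name) tuples into load order."""
    return sorted(tasks, key=lambda t: (t[0], t[1], t[2]))

tasks = [
    (0, "fixes",     1, "MutexFix"),
    (0, "bootstrap", 1, "LoaderTools"),
    (1, "newapi",    1, "StreamAdditions"),
    (0, "bootstrap", 2, "EssentialFixes"),
]

for layer, phase, item, name in load_order(tasks):
    print(layer, phase, item, name)
```

So #bootstrap tasks at layer 0 always come first, and everything at
layer 1 waits until layer 0 is complete.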
> So with an architecture in place you use it to define roles and
> responsibilities, and boundaries to make the problem manageable. Sure I
> agree, in layer 0, things will need to be hand crafted and bespoke to that
> image. However even within this layer you can see what is a fix for a bug
> (#fix) what is a refactoring (#move) that should not change functionality,
> and what is an attempt to provide a #newapi or api parity between forks.
> If you are writing a bug fix, you can address that fix to the virtually
> untouched release image by publishing it as, e.g.,
> <project: 'MyFixes' level: 0 series: #fix item: 1>.
> If you are adding core #newapi calls, you know you are doing so to the basic
> image before any add on packages are loaded at this level.
> But by the time you get to layer 3, there are three layers of stuff under
> you that can provide your packages some commonality of api across forks.
> "Grease" would be welcome at layer 2 / 3.
> In practice, we want to deliver a base image for people to use, this image
> is called "base-dev", and it consists of two main parts, the fork specific
> part, and the common part.
> base-dev/Kernel &
> base-common/Kernel-common
> So the bootstrap script fed to the starting image can be made up of code
> that is either bespoke to a fork, or common to all forks, or both.
> There is a cuis/base-dev, a squeak/base-dev and a pharo/base-dev. The
> bootstrap for all three is assembled from code that is common to all three
> images:
> grow/base-common/Kernel-common/#0--bootstrap
> and code that is specific to each of the three images:
> cuis/base-dev/Kernel/#0--bootstrap
> squeak/base-dev/Kernel/#0--bootstrap
> pharo/base-dev/Kernel/#0--bootstrap

What about concurrent changes between contributors?
Say I changed a private method A that Levente removes.
That's where MC shines.
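The common/fork-specific split of the bootstrap script could be sketched
like this (a hypothetical Python illustration; the fragment contents and
the dictionary layout are invented, only the directory naming follows
Keith's description):

```python
# Hypothetical sketch: assemble a per-fork bootstrap script from a
# fragment common to all forks plus a fork-specific fragment, as in
# grow/base-common/Kernel-common/#0--bootstrap +
# <fork>/base-dev/Kernel/#0--bootstrap. Contents are invented.

COMMON = {"#0--bootstrap": "loadCodeLoadingTools."}

FORK_SPECIFIC = {
    "cuis":   {"#0--bootstrap": "cuisSpecificFixes."},
    "squeak": {"#0--bootstrap": "squeakSpecificFixes."},
    "pharo":  {"#0--bootstrap": "pharoSpecificFixes."},
}

def assemble_bootstrap(fork, phase="#0--bootstrap"):
    """Concatenate the common part and the fork-specific part."""
    return COMMON[phase] + "\n" + FORK_SPECIFIC[fork][phase]

print(assemble_bootstrap("squeak"))
```

The point being that the common fragment is written once and the
fork-specific fragment stays small.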

> So right from the initial bootstrap script, we have common code loading
> tools across all forks, and we are free to innovate those tools in any way
> we wish. One of the first fixes included in the bootstrap is the new
> SmalltalkImage/Smalltalk dividing scheme - #globals #commandLine
> #vm #organization #query #changes etc.
> base-dev provides enough commonality for the import and export code to work
> supporting the same development process on all forks.
> Keith
>

Thanks for taking time to explain your model.
The main problem you will face is that you currently control the fixed
point neither of trunk, nor of Pharo, maybe not even of Cuis.
The second is to convince Squeakers to adopt a new model and new tools.
Both tasks are very ambitious...

Nicolas


