The future of SM...

lex at cc.gatech.edu lex at cc.gatech.edu
Fri Jul 16 19:50:03 UTC 2004


goran.krampe at bluefish.se wrote:
> I agree with you Stephane. IMHO the only thing that I can promise is
> that a certain set of *releases* (=versions) work together. I can't
> promise that the newest release of Y works with X 1.2, because that is
> *in the future*.

Individual authors are not in a position to make guarantees about
releases, regardless of the technology.  Arbitrary combinations of
packages can interfere with each other.  Additionally, there are no
guarantees in software engineering to begin with.  Tomorrow the sun will
rise again and someone will find another bug in my code.

If you really think there are guarantees to be made, though, let me give
you two examples to consider:

	1. How do dependencies keep someone from changing the behavior of #new
to automatically call initialize and thus break your code?  Does every
package have a dependency on every part of the system kernel that it
uses?
	
	2. How do you make a guarantee of correctness, when some other package
might modify the "open" menu in a way that crashes the system whenever
the "open" menu is used?
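
For the first case, here is a rough Python sketch -- not Smalltalk, and all
the names are made up for illustration -- of what a kernel change like the
hypothetical #new-calls-initialize one does to code written against the old
behavior:

```python
# Old-kernel assumption: constructing an object does NOT call initialize;
# client code is expected to call it explicitly.
class Widget:
    def initialize(self):
        # Idempotence is NOT assumed, mirroring real initialization code
        # that allocates resources or registers callbacks.
        self.count = getattr(self, "count", 0) + 1

# Now the "kernel" changes: instantiation calls initialize automatically.
class AutoInitMeta(type):
    def __call__(cls, *args, **kwargs):
        obj = super().__call__(*args, **kwargs)
        obj.initialize()  # new kernel behavior
        return obj

class NewWidget(Widget, metaclass=AutoInitMeta):
    pass

w = Widget()
w.initialize()        # old client code, old kernel: initialized once
v = NewWidget()
v.initialize()        # same old client code, new kernel: initialized TWICE
print(w.count, v.count)
```

The package's declared dependencies never mention the constructor machinery
at all, so no dependency graph would have flagged the breakage.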

And if you *still* like "guarantees", how do you fit post-release bug
reports into the world view?  I guaranteed it yesterday, but now a new
bug has been found; do I un-guarantee the package?  What is the point of
a guarantee that can be un-guaranteed later?  Maybe I should just fix
the bug and post an updated package and stop wrangling with
dependencies.  :)

Aside from the issues with the "guarantees" idea, it has not been
established that the jigsaw puzzle of version-specific dependencies is
going to be practical for tools to work with.  This is a significant
technical problem that versioned dependencies add, and everyone seems to
be waving it off as something to handle in the future.  I don't know
that this is solvable, guys, and no one has made a convincing argument
otherwise.

In addition to those two problems, there are definitely times when the
dependency jigsaw cannot be adequately solved.  Let me run you through a
small example.  Packages A and B depend on Collections:
	
	A 1.0   needs   Collections 1.0
	B 1.0   needs   Collections 1.0

Fine so far, I install all three packages.  Now the collections library
gets updated to 1.1.  I cannot install the upgrade without breaking the
dependencies of A and B!!  So, I wait before installing it, even though
I might be the main developer of package C, which also uses Collections.
 Okay, so eventually the author of A, being a great citizen, upgrades
their Collections package and reruns their tests -- shock, nothing broke.
 While they are at it, they add a few class comments, and then post A
1.1, which depends on Collections 1.1.  Drat, I still cannot update my
Collections library, because that would force me to uninstall B.

Finally, suppose Collections gets updated to 1.2, and B at last gets
around to doing an upgrade.  B 1.1 depends on Collections 1.2.  Now
what?  I *still* cannot upgrade B, because it depends on Collections 1.2,
which is inconsistent with all available versions of A.  I end up using
A 1.0, B 1.0, and Collections 1.0.  And I still cannot update the
version of Collections that my C package works with, because I can't
install the Collections update anyway.  If I'm lucky, package A will
release a version that works with 1.2 and I can upgrade everything.  But
notice: in that case, every single package in the example has had to
*synchronize* their releases, in order to reach a state where the jigsaw
has a solution.  I could as well be unlucky, and A might declare itself
compatible with Collections 1.3.....
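
To make the jigsaw concrete, here is a small Python sketch that just
transcribes the constraint table from the example above (exact-version pins,
nothing else assumed) and brute-forces every consistent combination:

```python
from itertools import product

# Each released package version pins an exact Collections version,
# exactly as in the walkthrough above.
NEEDS = {
    ("A", "1.0"): "1.0",
    ("A", "1.1"): "1.1",
    ("B", "1.0"): "1.0",
    ("B", "1.1"): "1.2",
}
A_VERSIONS = ["1.0", "1.1"]
B_VERSIONS = ["1.0", "1.1"]
COLLECTIONS = ["1.0", "1.1", "1.2"]

def consistent():
    """Enumerate every (A, B, Collections) triple whose pins agree."""
    out = []
    for a, b, c in product(A_VERSIONS, B_VERSIONS, COLLECTIONS):
        if NEEDS[("A", a)] == c and NEEDS[("B", b)] == c:
            out.append((a, b, c))
    return out

print(consistent())
```

The only solution the solver finds is A 1.0, B 1.0, Collections 1.0 --
everyone is stuck at the oldest versions, despite three newer releases
sitting right there.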

It appears that all the packages need to be upgraded in lockstep.  I
can't offer you a proof for the more general case :), but I imagine it
will only get *worse* as more packages enter the picture.  Is it
acceptable that every package needs to be tested with each incremental
version of every other package, and thus achieve the lockstep
progression?  That seems very inefficient if most package updates are
compatibility-preserving bug fixes.
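
A back-of-the-envelope sketch of why that testing burden blows up,
assuming exact-version pins so that every combination is a distinct
configuration worth testing:

```python
# With p packages, each shipping v released versions, exact-version pins
# mean v ** p distinct configurations could in principle need testing to
# know which jigsaw states are solvable.
def configurations(p, v=3):
    return v ** p

for p in (2, 5, 10):
    print(p, "packages:", configurations(p), "configurations")
```

Even with only three versions per package, ten packages already give tens
of thousands of combinations -- nobody is going to test those.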


Finally, notice that users almost always want the new versions of all
packages, regardless of what the upstream author has actually tested
with.  Consider two cases:

	1. They are drawing from a stable stream, e.g. 3.4.  All package
updates are clear bugfixes, and further, all the testing that has
happened has happened with the full set of packages loaded.
	
	2. They are drawing from an unstable stream, e.g. 3.3.  Nothing works
anyway, even if it has been "tested" by upstream, so you may as well
grab the newest stuff and hope for the best.  Back off on occasion if
something breaks.
	
	

In short: dependencies on specific versions don't appear to help with
their stated goal of reliability, they complicate the tools to an
unclear degree, they appear to force development to be lockstep across
all packages, and they actually *remove* the desirable functionality of
upgrading all packages to their newest version.

I don't think this is a good bargain.  There are already proven ways to
get a reliable set of packages together, so let's use one of those
instead.


> Why would I want to throw that information away? 

Sure, keeping the information is fine, and there are probably exciting
things you can do with it.  Just don't use it as dependencies.




Lex Spoon



More information about the Squeak-dev mailing list