Hi Keith,
BANG! (*Silence, nothing happens, but Keith is thank God still alive*)
Welcome back! :)
Bye T.
----------------------------------------------------------------
For those of you who don't understand the contents of the above mail: according to [1], Keith suggested: "If I email to squeak-dev again shoot me". Since he posted again, there was no other option left than to try to shoot him ;)
[1] http://lists.squeakfoundation.org/pipermail/squeak-dev/2009-December/142238....
On 1/20/10 8:47 PM, "Torsten Bergmann" astares@gmx.de wrote:
[snip]
"Uno se cree que los mato el tiempo y la ausencia, pero su tren vendio boleto de ida y vuelta." (One believes that time and absence killed them off, but their train sold a round-trip ticket.)
Edgar
With all respect Keith, it sometimes seems that you hate all the rest of the world: angry with the Board, angry with Andreas, angry with the people not using your projects... maybe you should change something too.
Cheers. Germán.
2010/1/20 keith keith_hodges@yahoo.co.uk:
[snip]
Good point, can we move cuis discussions somewhere else. I don't want to have to interact with the board or Andreas.
Keith
With all respect Keith, it sometimes seems that you hate all the rest of the world: angry with the Board, angry with Andreas, angry with the people not using your projects... maybe you should change something too.
Cheers. Germán.
I don't doubt it, and never have for a second. I don't give excuses for my behaviour.
You have all been very helpful to me; many healing experiences have poured from these recent events. However, I am not fully healed yet, and comments like the following one don't help.
From the board's report:
"One of the issues regarding the release of Trunk development work is how to release an image that has had many packages removed and yet make those removals easily and obviously available for reinstallation to users who are new to Squeak. Andreas brought this issue up on the squeak-dev mailing in hopes that someone would become interested and come up with a solution"
Do you have any idea how insulting this is? Do you have any idea how insulting this is? Do you have any idea how insulting this is? Do you have any idea how insulting this is?
We did this 2 years ago.
This need was the starting point of the old 3.11 process: "That moving towards a kernel image as a goal needs tools to manage the removal and subsequent installation of packages, and that an image with the ability to unload optional packages is functionally equivalent to an image in which you can load optional packages". So the tools are there. The fact that Andreas hasn't thought about this before simply points towards the idiocy of the let's-hack-the-image-for-the-sake-of-it approach. The board already recognised that this approach was not getting us anywhere when they approved my proposal, and the pharo team were proving it.
Not only that, ALL the work is finished, and has been in use for more than 18 months. Installer was originally developed for this purpose in 2006, and subsequently Sake/Packages includes dependencies and version control of package definitions. Again, it's not perfect, but the point remains: the "community" clamoured for a solution, and we developed it for the community, not for ourselves.
So... "someone who would become interested", I spent 3 years being interested on YOUR BEHALF, not even for my benefit, Installer scripts worked fine for me, I only build an image once a year or so, hand crafting it is just as productive, especially sine out of order loading feature of MC, means that dependencies are not that important.
So "someone who would become interested" ... you can shove it where the sun don't shine, and I will let you know when I eventually get healed.
regards
Keith
2010/1/21 keith keith_hodges@yahoo.co.uk:
[snip]
Until you stop taking this as an insult or a personal attack, and instead ask Andreas why he thinks that none of the available solutions is viable for us, then discuss it in detail and come to a potential solution, we'll not move ahead. But you seem to prefer walking in circles.
So... "someone who would become interested", I spent 3 years being interested on YOUR BEHALF, not even for my benefit, Installer scripts worked fine for me, I only build an image once a year or so, hand crafting it is just as productive, especially sine out of order loading feature of MC, means that dependencies are not that important.
So "someone who would become interested" ... you can shove it where the sun don't shine, and I will let you know when I eventually get healed.
I'll be waiting for this moment.
regards Keith
Igor,
first let us define some terms.
The Squeak Community
====================
You and the board appear to define community as "anyone who pipes up on squeak-dev".
Now this definition has caused all manner of problems in the past, especially with 3.9, and it resulted in the Pharo split, because the 3.9 team were constantly beaten up by people on squeak-dev. (One of those persons doing the beating happened to be Andreas.) This definition of community also includes the average "lurker", who may download the image every now and then to see if it is interesting enough for them to not have to do java any more.
The "team model" adopted and endorsed by the board for many years, whereby the teams had their discussions on a separate mailing list, specifically afforded a layer of protection from the wide ranging opinions of "lurkers". The flagrant violation of this simple social/ political essential layer (which protected Ralph for more than 6 months), is what caused our problems in the first place, when Andreas went to squeak-dev with "this is the new process, I am a board member I can do what I like and I haven't even spoken to the release-team for two months". Andreas was elected on a mandate of helping along a release, and by implication to help along the release-team doing the release. That mandate did not include, using his position on the board to promote his own interests, and that of his company, over that of the communities interests, which I shall outline below.
My definition of "community" is: the body of individuals who have significant code bases built on squeak or squeak-like base images, and who are tasked with the problem of maintaining packages for users who are using an ever-increasing variety of tools and forks of squeak. My definition of "community" includes all pharo developers, pharo users, cuis developers and cuis users, cobalt, croquet, etoys, gjallar, the lot. For example, I consider Göran, who is tasked with the maintenance of Gjallar, to be the archetypal community member, whose needs bear understanding and supporting.
So the "lurkers" who download an image occasionally, they pop up on squeak-dev and see "nothing appears to have changed in 4 years". These are the ones that panicked the board, people I had never heard of, and as far as I know had no commits to any current projects i.e. they aren't really part of the community.
The "community" who have actual code running, have seen that the range of tools they can use to make their job easier to move their code about, to test it and to deploy their code for different clients has steadily increased and improved over the past 4 years. Now we have package management systems, improved testing tools (TestReporter), improved MC, Logging, Rio, Sake, Bob, AtomicLoading, Installer, and LPF *** running in all base images.
The Vision
==========
The "lurkers" just want to see a new flashy image, that they might try a little project in one day.
The "community" want to see the board provide vision and to promote harmony both philosophically/ideologically, and with technical facilities; harmony among all squeak forks, even those who don't want to be harmonised (i.e. Pharo). The community members want to be able to publish a package (e.g. Magma) and have it be tested to work for everyone, whether they be pharo users or squeak users. The community wants to be able keep its large published, even deployed code bases up to date and bug free.
This means that "the community" don't want yet another fork to support, the communities enthusiasm for "yet more of the same" waned when pharo forked, and no one wanted to volunteer for 3.11 after 3.10. To be honest we were all too busy just keeping our own code bases running, to have to worry about another fork of squeak.
In anticipation of spoon, aka squeak 5.0, those of us with big code bases are already anticipating a significant porting effort in the future; we don't want to have another one foisted on us every 6 months.
Where are we now
================
The board has responsibility to the community, the whole community, and I don't just limit that to squeak users.
This means that the board needs to set and understand its terms of engagement: that it is here to provide an "older and wiser supervisory" service to the different branches of the community, and to foster an inclusive "culture" around which interesting innovations and solutions can happen.
It is not here to promote the interests of Andreas, and his company in the following:
1. Producing yet another fork for its own sake, "fork de andreas", and calling it squeak, is not in the interests of the community.
2. Undermining the existing common tools and the progress made in the past 3 years by the community***
3. Orphaning existing code bases, not providing any fixes to us, or a migration path.
4. Ignoring users of other forks, and users trapped by inertia in past squeak releases.
My Personal Situation
=====================
I was funded in part to work on squeak, because my vision aimed to provide tools as a means to harness the creativity of the whole community. This was considered a benefit to both myself and my clients, and my paying clients considered it a bonus to be able to contribute back to the OSS community, from which we also derive great benefits. The vision provided a suite of tools we could use to build and test our production images, and our code base would be able to track the latest developments from whatever source.
The board scuppered all of the above, and the reputation of squeak as a viable solution that is worth contributing back to (i.e. funding!!!!) has been irreparably damaged. So thanks to the incompetent, fly-by-the-seat-of-their-pants flailings of the board, I lose a small but significant income stream that took several years to build up.
We also believed that modelling an extreme programming approach to testing and releasing, with a goal of bi-monthly releases, would be of value to the community and our own projects. (We would be on squeak 3.14-alpha by now if Andreas had put his considerable skills into contributing to the release team's work, rather than replacing it.)
The fact that the actions of the board have allowed an individual to take over, without due process, for his own benefit, following the agenda of his company, while at the same time causing problems for others, is an issue of serious concern that the board should take seriously, and it is not. It is the board that is going around in circles, refusing to take the "terms of engagement" request seriously.
I have no use for multimedia tools in squeak, or for half of Andreas' ideas; I don't even know or care what they are, and I doubt that Gjallar (Göran's project management tool using seaside and magma) has much use for them either. These are loadable things that should be provided in external packages (again, for all base images). Andreas should go off and become a contributing member of the community first before presuming to run it. How many loadable packages on squeaksource are provided and maintained by Andreas? If the answer is none, then he is not really a contributing member of the community; he is merely an informed user, aka lurker. Why would his opinion on a package management solution for the community to use be relevant if he doesn't have any packages? Andreas develops using a "monolithic image under his control", and so effectively do the Pharo team members, yet we, the community, had a professed goal of moving towards a kernel image with loadable packages.
I myself made a specific decision NOT to run for the board (though I was asked), because I didn't see any purpose in a board member being both on the board and on the release team; after all, the board had endorsed the release team and therefore had a DUTY to uphold it as a decision-making entity in its own right. The release-team has the authority to decide what is in the release, not the board, since that is what the board appointed the release team to do.
Andreas however joined the board specifically as a ploy to muscle through his own agenda, which is to release a new image. No mention of the cultural context in which we are all operating; and those of us who are "the community" of package developers are offered nothing from this new deal.
The core image, as a practical artefact, is the least important part of providing a service to clients; having it change underneath you, however, is a big problem. The important part is the code you develop, maintaining it and testing it.
Although "Fork de Andreas", has not adopted LPF, Sake/Packages, Installer, this is not a problem as long as it doesn't break them. However Andreas has actually undermined and broken the use of MANTIS as THE tool for documenting bugs, and providing fixes for us legacy users. Because trunk is a moving target, people are fixing bugs against the moving target, not against releases. This is a change of policy which is not in the interests of the community, and hasn't even been discussed. Bug fixes are not maintained for existing users any more, nor can you provide them to existing users any more. Sake/ Packages can include a bug fix with a package if it is needed. BUT that bug fix has to be published relative to a fixed point, i.e. a release. Throwing fixes willy-nilly into trunk, doesn't help anyone. We had a golden age in which you could load any bug fix you needed direct form Mantis, well now no one is providing fixes there any more. You are forced to use trunk to avail yourself of important fixes, but trunk is a moving target and is pre-release.
The Board's Job
===============
The board's job is to consider the following points, and to provide a visionary direction as a solution to the community:
1) My own code base is now orphaned, and there is no suitable base image upon which to build beyond 3.10. I was going to use Pharo, but I find OmniBrowser unusable, and the image is hugely bloated. The trunk-based 3.11 will not come tested, it will not come with package management, I will have to port all my own tools and my own code, and I will have to manually regression test seaside, magritte, pier, Rio, Beach etc., and some 60 packages, against the release. Andreas will not do that for me (whereas Bob would have done). The Bob/LPF process enabled me to apply API fixes to legacy images, and so enables a smooth migration path to any new releases.
2) The pain of maintaining my existing 14 or so packages for all forks is getting progressively harder not easier, to the extent that the inability of getting anyone to work together at all is making continued community based development prohibitive.
3) The board's ability to make any decisions whatsoever has been undermined, since it cannot be trusted to support its own decisions, or even to use due process for anything it does. Thus the board has become THE loose cannon that it was created to stabilise through the application of diverse elected wisdom. The process of election, which was intended to enable the board to be a source of wisdom and to be representative of the community's interests, has become a potential source of unpredictable, insensitive and partisan disruption.
4) If the board did not exist, or remained as a low-key background entity looking after servers, then the community would have a chance to work things out in a process of mutual collaboration. The heavy-handed actions of the board now ensure that, for groups sharing common interests, forking, in order to get away from the potential disruption that the board may cause, is the only viable option. This is the opposite of what the community wants.
5) The facilities of the community can be removed or re-assigned without due process, thus scuppering all work done by users of those facilities. I spent a lot of effort building tools for mantis that would enable mantis to be used effectively as part of the process. But now Mantis and the mantis harvesters have been reassigned to a different purpose. For example, I can't deliver my 3.11 even if I wanted to, because it was bound to, and intended to showcase, a process, and that process included the way mantis was used to collate, query, and manage bugs in a professional manner for the foreseeable future. Andreas replaced the process with his "New process for contributing to squeak" in ONE email, with NO discussion.
6) Does the board even have a constitution? This email includes the announcement that my teddy bear Cyril is running for the board, in the upcoming elections.
So in conclusion, until the Board, defines what it means by the community, and what it means to be itself, I don't think the community is really able to function.
I have repeatedly asked the board to produce a simple little thing such as a "Terms of Reference". 6 months of complete inaction on this topic leads me to feel that I would be within my rights to call for resignations.
I am not going in circles Igor, I stated my position clearly at the beginning and I haven't changed. "I will not contribute any further to squeak, until the board has some form of terms of reference, which protects anyone from going through what I went through in the future."
regards
Keith
2010/1/21 keith keith_hodges@yahoo.co.uk:
[snip] This definition of community also includes the average "lurker", who may download the image every now and then to see if it is interesting enough for them to not have to do java any more.
And if there's nothing new to download, in , say over a year, he decides to keep doing java.
The "team model" adopted and endorsed by the board for many years, whereby the teams had their discussions on a separate mailing list, specifically afforded a layer of protection from the wide ranging opinions of "lurkers". The flagrant violation of this simple social/political essential layer (which protected Ralph for more than 6 months), is what caused our problems in the first place, when Andreas went to squeak-dev with "this is the new process, I am a board member I can do what I like and I haven't even spoken to the release-team for two months". Andreas was elected on a mandate of helping along a release, and by implication to help along the release-team doing the release. That mandate did not include, using his position on the board to promote his own interests, and that of his company, over that of the communities interests, which I shall outline below.
I don't even want to comment on your allusions concerning an Andreas 'conspiracy' and his 'company'. Star Wars and Emperor Palpatine are fiction, not reality; keep reminding yourself of this. I just want to say: we all have agendas, which cross each other, and which form the basis of our common interest: to use, develop and promote squeak and the technologies based on it.
The development of the Hydra VM and the inclusion of full closure support into Squeak and Pharo were made possible because of this company's good will in sharing these artifacts with the OSS community. And now you are trying to sell us that these facts are an indication of far-reaching plans to conquer the world? Beware Luke, the dark side of power is strong!
[snip] The "lurkers" just want to see a new flashy image, that they might try a little project in one day.
All 'grand' projects grow out of small ones, which makes the little ones just as valuable as any of your 'grand' projects. I'd prefer to see 10 little projects popping up each month than 1 grand project popping up once every 2 years.
The "community" want to see the board provide vision and to promote harmony both philosophically/ideologically, and with technical facilities; harmony among all squeak forks, even those who don't want to be harmonised (i.e. Pharo). The community members want to be able to publish a package (e.g. Magma) and have it be tested to work for everyone, whether they be pharo users or squeak users. The community wants to be able keep its large published, even deployed code bases up to date and bug free. This means that "the community" don't want yet another fork to support, the communities enthusiasm for "yet more of the same" waned when pharo forked, and no one wanted to volunteer for 3.11 after 3.10. To be honest we were all too busy just keeping our own code bases running, to have to worry about another fork of squeak. In anticipation of spoon aka squeak 5.0, those of us with big code bases are already anticipating a significant porting effort in the future, we don't want to have another one foisted on us every 6 months. Where are we now ============== The board has responsibility to the community, the whole community, and I don't just limit that to squeak users. This means that the board needs to set and understand its terms of engagement, that it is here to provide an "older and wiser supervisory" service different branches of the community, to foster an inclusive "culture" around which interesting innovations and solutions can happen. It is not here to promote the interests of Andreas, and his company in the following:
1. Producing yet another fork for its own sake, "fork de andreas", and calling it squeak, is not in the interests of the community.
2. Undermining the existing common tools and the progress made in the past 3 years by the community***
3. Orphaning existing code bases, not providing any fixes to us, or a migration path.
4. Ignoring users of other forks, and users trapped by inertia in past squeak releases.
Star Wars, Episode V.
[snip]
We had a golden age in which you could load any bug fix you needed direct from Mantis; well, now no one is providing fixes there any more. You are forced to use trunk to avail yourself of important fixes, but trunk is a moving target and is pre-release.
Write a tool to leverage the trunk activity then. Stop whining about losing control (via Mantis). People like a new and easy way to contribute; otherwise nobody would put any commits into trunk. Is this so hard to understand?
[snip] "I will not contribute any further to squeak, until the board has some form of terms of reference, which protects anyone from going through what I went through in the future."
Feel free to raise this topic again before the eyes of the newly elected board. I will not run next year anyway.
regards Keith
All 'grand' projects grow out of small ones, which makes the little ones just as valuable as any of your 'grand' projects. I'd prefer to see 10 little projects popping up each month than 1 grand project popping up once every 2 years.
Igor,
I think you need to think about what you are saying more carefully; you are arguing against yourself here.
The trunk process is a "grand project" which hasn't produced anything of any use for 6 months. It signs up the community for an ongoing wait of 12-18 months per release. It ties you in to the "grand project" ethos which we all said we didn't want.
My process produced a stream of useful projects popping up along the way at regular intervals; that is what you are saying you want, isn't it?
We set the goal: "We don't want an image, we want a kernel that you can build distributions from." So what do you and the board do? Yep, the opposite! You build a monolithic image, using a process that can only build a monolithic image.
Furthermore, the process I produced (and finished) with Bob was designed to produce monthly or bi-monthly releases, with all fixes auto-documented, which is also what we wanted. The goal is to "release early and often"; so what does the board do? Yep, exactly the opposite: a new process that will take a year to produce anything, and nothing will be auto-documented.
So you say you want something every month or so, but defend to the hilt the process which will NOT deliver it.
You had a release of Bob, in an image, loaded for you to play with in February (did you download it?); you had a release of Sake/Packages 18 months ago (you told me you didn't use that either). The problem is that you all seem to say you want small projects popping up regularly, but when they pop up you don't use them. Furthermore, you seem to be working arduously to make those small projects, which apparently you desperately want, irrelevant and obsolete, when they are still useful and in use today.
We want squeak to go forwards not backwards.
There was no grand project to produce 3.11; there was a process. In case you don't know, a process is a way of thinking about the problem and applying yourself to achieve a defined goal. The board approved this goal to provide this process. The process was being used, and was working. It had not quite produced a result yet, but I was making videos showing THE PROCESS, because the deliverable of the 3.11 effort was not an image but the process that would enable the community to build future images, and that is what I was documenting and making videos of.
The actual deliverable of 3.11 could have been produced at any time; the script was written 18 months ago. 60% of it was in LPF already, and we had plenty of users and a downloadable image. All you had to do was run the script manually. Ken Brown had a go; he was convinced. But the deliverable was not the image, it was the process; that is what the board approved, so having a go at me for not producing an image is moving the goalposts.
The purpose of this process was to support extreme programming practices for all users of squeak, showcased in the release of 3.11. Now we are locked into a "wait 18 months for a release" process all over again.
I never advocated a grand release process; that's what I am complaining about: the dog returning to its vomit. We already decided that this wasn't the way forward.
Your choice to mock me, master Yoda, is hardly commensurate with a positive way forward.
My point, which I still don't think you are getting, is that while squeak is running on an 18-month release cycle, producing a monolithic image which only serves the vision of the person who built it, and processes are locked into that way of thinking, it is already irrelevant; hence my search for something else.
Within the Smalltalk community, which invented extreme programming, we are basically a bit of a laughing stock, since we cannot produce a release in 20 minutes.
Keith
On Thu, Jan 21, 2010 at 12:21 PM, keith keith_hodges@yahoo.co.uk wrote:
[snip]
The trunk process is a "grand project" which hasn't produced anything of any use for 6 months. It signs up the community for an ongoing wait of 12-18 months per release. It ties you in to the "grand project" ethos which we all said we didn't want.
Nonsense. In the past 6 months, just to take three that come to mind, we have closures, native fonts and unloadable smaller traits. There are lots of other things also; go look at the recent changes list. Trunk is actually progressing very nicely.
You might try contributing code rather than whining at length. It's more fun for everyone. Better for the community.
My process produced a stream of useful projects popping up at regular intervals along the way; that is what you are saying you want, isn't it?
We set the goal: "We don't want an image, we want a kernel that you can build distributions from." So what do you and the board do? Yep, the opposite! You build a monolithic image using a process that can only build a monolithic image.
Furthermore the process I produced (and finished) with Bob was designed to produce monthly or bi-monthly releases, with all fixes auto-documented, which is also what we wanted. The goal is to "release early and often"; so what does the board do? Yep, exactly the opposite: a new process that will take a year to produce anything, and nothing will be auto-documented.
So you say you want something every month or so, but defend to the hilt the process which will NOT deliver it.
You had a release of Bob, in an image, loaded for you to play with in February (did you download it?); you had a release of Sake/Packages 18 months ago (you told me you didn't use that either). The problem is that you all seem to say you want small projects popping up regularly, but when they pop up you don't use them. Furthermore you seem to be working arduously to make those small projects, which apparently you desperately want, irrelevant and obsolete, when they are still useful and in use today.
We want squeak to go forwards not backwards.
There was no grand project to produce the 3.11; there was a process. In case you don't know, a process is a way of thinking about the problem and applying yourself to achieve a defined goal. The board approved this goal: to provide this process. The process was being used, and was working. It had not quite produced a result yet, but I was making videos showing THE PROCESS. The deliverable of the 3.11 effort was not an image but the process that would enable the community to build future images, and that is what I was documenting and making videos of.
The actual deliverable of 3.11 could have been produced any time; the script was written 18 months ago. 60% of it was in LPF already, and we had plenty of users and a downloadable image. All you had to do was run the script manually. Ken Brown had a go; he was convinced. But the deliverable was not the image, it was the process; that is what the board approved, so having a go at me for not producing an image is moving the goal posts.
The purpose of this process was to support extreme programming practices for all users of squeak, but showcased in the release of 3.11. Now we are locked in to a "wait 18 months for a release process all over again".
I never advocated a grand release process; that's what I am complaining about, the dog returning to its vomit. We already decided that this wasn't the way forward.
Your choice to mock me, master Yoda, is hardly conducive to a positive way forward.
The point I still don't think you are getting is that while squeak is running on an 18-month release cycle producing a monolithic image, which only serves the vision of the person who built it, and processes are locked in to that way of thinking, it is already irrelevant; hence my search for something else.
Within the Smalltalk community, which invented extreme programming, we are basically a bit of a laughing stock, since we cannot produce a release in 20 minutes.
Keith
> Nonsense. In the past 6 months, just to take three that come to mind, we have closures, native fonts and unloadable smaller traits. There are lots of other things also; go look at the recent changes list. Trunk is actually progressing very nicely.
> You might try contributing code rather than whining at length. It's more fun for everyone. Better for the community.
Where are these things, are they in a released image?
If you had used the bob process they would have been in 3.11, 3.12, and 3.13 released and tested against all 700 loadable packages.
There would also be a minimal, seaside, pier, seaside-magma-pier-beach, developer and "fun" one click images amongst others.
Keith
2010/1/21 keith keith_hodges@yahoo.co.uk:
> Nonsense. In the past 6 months, just to take three that come to mind, we have closures, native fonts and unloadable smaller traits. There are lots of other things also; go look at the recent changes list. Trunk is actually progressing very nicely.
> You might try contributing code rather than whining at length. It's more fun for everyone. Better for the community.
> Where are these things, are they in a released image?
> If you had used the bob process they would have been in 3.11, 3.12, and 3.13, released and tested against all 700 loadable packages.
> There would also be a minimal, seaside, pier, seaside-magma-pier-beach, developer and "fun" one click images amongst others.
The key word here is 'would'. Get back when you can say 'is'; otherwise don't expect anyone to buy this.
> Keith
> There would also be a minimal, seaside, pier, seaside-magma-pier-beach, developer and "fun" one click images amongst others.
> The key word here is 'would'. Get back when you can say 'is'; otherwise don't expect anyone to buy this.
You are an incurable cynic.
I say "was" in reference to 3.10, and "would" in reference to 3.11, 3.12, and 3.13, because they don't exist.
The process for producing these artefacts "was" available and running on a dedicated server.
Bob was building one-click images from 3.10. Bob was building developer images, based upon Damien's definitions in Sake/Packages, from 3.10.
The seaside, and seaside-magma-pier image is simply a variant of the above.
The only stretch of the imagination is the "fun" image, because despite a year of requests Edgar refused to put together a Sake/Packages definition that would generate it.
Keith
> Nonsense. In the past 6 months, just to take three that come to mind, we have closures, native fonts and unloadable smaller traits. There are lots of other things also; go look at the recent changes list. Trunk is actually progressing very nicely.
> You might try contributing code rather than whining at length. It's more fun for everyone. Better for the community.
I don't think I can contribute code.
For example, my contributions to Monticello. In case you don't realise it, Monticello cannot load Monticello, so contributing it to trunk will not work; that is why we developed the LPF loading process in the first place. But of course, trunk didn't build on 3.10-build, did it, so the process chosen for trunk prevents me contributing towards MC or PackageInfo, two areas that have massive changes (now in the bin).
My contributions for a package manager are loadable ([ Installer install: 'Packages' ]), so they don't have any place in the trunk scheme either.
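To make the loadable style concrete, this is the kind of workspace expression meant above (a sketch only; 'Packages' is the script name from the text, and the comment describes the intent, not guaranteed behaviour of any particular Installer version):

```smalltalk
"Evaluate in a workspace: Installer fetches and runs the named install
 script, which loads the package and its prerequisites into the running
 image. No update stream or trunk membership is involved."
Installer install: 'Packages'.
```

The point is that a contribution packaged this way can be taken or left per image, rather than arriving only via the trunk update stream.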
The trunk process is allowing people who would normally contribute via a changeset to contribute (well, just give me the changesets, then I can use them), and it is enabling people who would normally write a loadable package to contribute (so just publish the loadable package).
However, fundamental changes to Monticello, and PackageInfo, you can forget it. Goodness knows how you change compiler stuff.
Watch this space; I will submit code for Cuis, because that does have a process I can contribute to, i.e. Juan has not defined a process for contributions that prevents contributions.
I always regarded your points as respectable and sensible, so I hope you understand my opinion that the trunk process has closed me out. The very ethos of the previous model was to harness all possible contributions, not to prevent people from making them. However, I am sure I can rely on you to correct me if I am wrong.
regards
Keith
On Jan 21, 2010, at 12:40 PM, keith wrote:
> Nonsense. In the past 6 months, just to take three that come to mind, we have closures, native fonts and unloadable smaller traits. There are lots of other things also; go look at the recent changes list. Trunk is actually progressing very nicely.
> You might try contributing code rather than whining at length. It's more fun for everyone. Better for the community.
> Where are these things, are they in a released image?
http://ftp.squeak.org/current_development/ http://ftp.squeak.org/current_development/Squeak3.11-8472-alpha.zip
I'm guessing from the way this thread is going that you won't count these as "released images". Please prove me wrong. I admit that, in a way, they aren't (as attested by the -alpha tag). But I hope that you admit that they do constitute releases in some meaningful sense.
Josh
> The trunk process is a "grand project" which hasn't produced anything of any use for 6 months. It signs up the community for an ongoing wait of 12-18 months per release. It ties you in to the "grand project" ethos which we all said we didn't want.
> Nonsense. In the past 6 months, just to take three that come to mind, we have closures, native fonts and unloadable smaller traits. There are lots of other things also; go look at the recent changes list. Trunk is actually progressing very nicely.
Hi Eliot,
First of all, these things are of no use to me, because my code base will take about 2 months to port, longer since I now have to port the tools as well.
You might think you are doing the community a service, but actually you haven't provided stuff or "captured knowledge" that the existing community can use easily. All you have done is encouraged some members of the community to abandon the rest of us, moving over to developing stuff for a new image that I can't use, leaving behind compatibility. My production images don't have closures; the customer asks for an update, and I can't load the latest seaside if it uses closures, for example. This is the same as the pharo "stuff compatibility" attitude, and they are also developing stuff I can't use.
For example, Magma runs in Pharo and 3.7 - 3.10. Apparently you provide closures, but if closures aren't available as a loadable/applicable feature for 3.7 then magma has to choose: either not bother to use closures, maintain two or three codebases, or drop support for older images. The knowledge to implement closures has been redone 3 times now, in trunk, pharo, and cuis, but I don't have the expertise to do it myself.
I think you misunderstand me; my gripe is not about making progress, it is about throwing all the knowledge into one disorganised pot, aka "trunk".
If you had kept the knowledge needed to implement closures as a separate initiative, in a separate repo, with separate change sets, scripts etc, then other people could harvest that knowledge, either in whole or in part, and you would be contributing "knowledge" that I could import into my codebase. Perhaps cobalt would like closures too. Cobalt is not going to be able to move to build on trunk-alpha overnight either, so they are going to have to do it all over again.
The same goes for bug fixes. Previously we had 100 fixes ready to load into 3.10 from mantis, all documented, and supplied in their natural form, "changesets". Throwing fixes into trunk dilutes the knowledge and makes it only useful for trunk users and no one else. I can't harvest a fix out of trunk into my image.
I suggested a compromise back in August, which Andreas ignored completely: that if trunk development was split into clearly defined initiatives with separate projects, then we would be able to work together.
Again feel free to correct me if I am wrong.
Keith
On 2010-01-21, at 5:12 PM, keith wrote:
> You might think you are doing the community a service, but actually you haven't provided stuff or "captured knowledge" that the existing community can use easily. All you have done is encouraged some members of the community to abandon the rest of us, moving over to developing stuff for a new image that I can't use, leaving behind compatibility. My production images don't have closures, the customer asks for an update, I can't load the latest seaside if it uses closures for example.
Keith, I see Eliot's not responded; I think he's busy elsewhere.
But your example here is completely off-base.
The Squeak community resisted doing closures for *years*, and under their breath muttered "no friggen closures in squeak, but that's so simple to do..."
Why:
(a) It would break things and *force* people to migrate their code at *their expense*.
The VM architects were extremely aware that was just the way it was going to be.
Eliot proposed a clean solution and pushed out the VM changes and change sets against an older Squeak/Pharo image to exploit it. I built a VM off that so someone could at least run it.
The Pharo community then took the VM and change sets and reviewed their code base for additional fixes; later the same happened with the Squeak trunk.
If there is stuff not converted then ask yourself is anyone supporting that code? If it's your stuff, then either you or others you convince will have to convert it.
--
John M. McIntosh johnmci@smalltalkconsulting.com Twitter: squeaker68882
Corporate Smalltalk Consulting Ltd. http://www.smalltalkconsulting.com
> Keith I see Eliot's not responded, I think he's busy elsewhere.
> But your example here is completely off-base.
> The Squeak community resisted doing closures for *years*, and under their breath muttered "no friggen closures in squeak, but that's so simple to do..."
> Why:
> (a) It would break things and *force* people to migrate their code at *their expense*.
Correct, so we need a process and tools that will provide the knowledge to ease this as much as possible.
> The VM architects were extremely aware that was just the way it was going to be.
> Eliot proposed a clean solution and pushed out the VM changes and change sets against an older Squeak/Pharo image to exploit it. I built a VM off that so someone could at least run it.
Ok, so perhaps I am being a bit unfair, because Eliot at least made his progress in an "offline" initiative; he didn't develop in trunk.
However, manually throwing the result into trunk is not much use to 3.10 users; what is needed is a script and changesets I can apply to my working images, in a repeatable fashion, within an environment that will do the appropriate regression tests.
> The Pharo community then took the VM, change sets and reviewed their code base for additional fixes; later the same happened with the Squeak trunk.
Actually, I used Bob to build and publish a 3.10 LPF image first, I believe.
It was based upon the 3.10 closures image that Eliot or Andreas produced; however, I didn't have the knowledge to progress it any further, and when I asked for feedback I got none. I told Eliot and Andreas about the image and the bugs it was throwing up and there was not much response (I assumed they were fixing the problems), apart from Andreas asking why I was bothering with this: "3.11 would be too soon for closures".
> If there is stuff not converted then ask yourself: is anyone supporting that code? If it's your stuff, then either you or others you convince will have to convert it.
How? I don't know how to. It's all very well for you VM hackers to do this, but the closures changeset hosed the debugger for me, and at that point it's beyond me.
I need a script which shows me the end-to-end process of applying closures to an existing known released image, so that I can retrace those steps on my image. That knowledge would better be captured by a process which requires such contributions and integrates them, rather than a repository where 10 people are all working at once.
regards, and thanks for your reply on Eliot's behalf
Keith
On Thu, Jan 21, 2010 at 5:12 PM, keith keith_hodges@yahoo.co.uk wrote:
> The trunk process is a "grand project" which hasn't produced anything of any use for 6 months. It signs up the community for an ongoing wait of 12-18 months per release. It ties you in to the "grand project" ethos which we all said we didn't want.
> Nonsense. In the past 6 months, just to take three that come to mind, we have closures, native fonts and unloadable smaller traits. There are lots of other things also; go look at the recent changes list. Trunk is actually progressing very nicely.
> Hi Eliot,
> First of all these things are of no use to me, because my code base will take about 2 months to port, longer since I now have to port the tools as well.
But lots of other people are using the progress just fine. Pharo is harvesting work from Squeak and vice versa. All it takes is will and knowledge of the available tools. Monticello is a huge enabler.
> You might think you are doing the community a service, but actually you haven't provided stuff or "captured knowledge" that the existing community can use easily. All you have done is encouraged some members of the community to abandon the rest of us, moving over to developing stuff for a new image that I can't use, leaving behind compatibility. My production images don't have closures, the customer asks for an update, I can't load the latest seaside if it uses closures for example. This is the same as the pharo "stuff compatibility" attitude and they are also developing stuff I can't use.
That's one viewpoint (an intellectual horizon of radius zero, as Albert said). From another perspective, what I've done is set the stage for a series of VMs which have significantly better performance, the first (in use at Teleplace) showing 5x the performance of the existing VMs; eliminated a major shortcoming of the Squeak/Pharo execution core by eliminating non-reentrant blocks, providing the ability to write much cleaner code; and made Squeak/Pharo execution semantics ANSI-compliant and equivalent to other leading dialects.
If you hadn't spent the last 6 months having a hissy fit you would find you weren't as far behind or as inconvenienced. You might also have participated in porting the closure bootstrap (which does exist as changesets on my blog site and has been adapted to three different Squeak distros so far) to your context. Instead you've chosen to disengage, and to make a signally unconstructive return. I find myself unsympathetic to your concerns.
> For example, Magma runs in Pharo and 3.7 - 3.10, apparently you provide closures, but if closures aren't available as a loadable/applicable feature for 3.7 then magma has to choose either not bother to use closures, maintain two or three codebases, or drop support for older images. The knowledge to implement closures has been redone 3 times now, in trunk, pharo, and cuis, but I don't have the expertise to do it myself.
And what would be the point of maintaining an evolving package for old images? Eventually the old becomes the obsolete; the cost-benefit ratio falls below 1. If you want to be a curator then that's up to you, but I get the impression that this community wants to be productive and self-expressive. The past is past.
(And BTW, the knowledge of how to implement closures is widespread (mine is based on a Lisp implementation strategy), and what you're talking about is the bootstrap, not the implementation.)
> I think you misunderstand me my gripe is not about making progress, it is about throwing all the knowledge into one disorganised pot, aka "trunk".
Whatever. Looks like you failed over two years to make a new release, got upset when people finally lost patience and started work again, and that you lack the objectivity to realise your part in your misfortune.
> If you had kept the knowledge needed to implement closures as a separate initiative, in a separate repo, with separate change sets, scripts etc then other people could harvest that knowledge, either all or in part, and you would be contributing "knowledge" that I could import into my codebase. Perhaps cobalt would like closures too. Cobalt is not going to be able to move to build on trunk-alpha overnight either, so they are going to have to do it all over again.
Again, the change sets are there, but one problem with change sets is the lack of tool support for evolving them. The only way I know is to manually back-merge fixes. Alas, I can't afford the time; I have further progress to make.
> The same goes for bug fixes. Previously we had 100 fixes ready to load into 3.10 from mantis, all documented, and supplied in their natural form "changesets". Throwing fixes into trunk, dilutes the knowledge, and makes it only useful for trunk users and no one else. I can't harvest a fix out of trunk, into my image.
The usefulness of both changesets and packages, and the tensions between them, is a really interesting and large topic that I'm not going to address here. But putting things in well-defined packages that can be unloaded from a trunk image does not dilute important knowledge (the change history in a package is richer than in a changeset, as there are multiple changes, each with a comment), and it is clearly more useful to users other than trunk users, given the degree of interchange between Pharo, Squeak and Cuis that is obvious at the moment (new text editor, faster buffered file reading, native fonts, etc, etc). These exchanges are being done by people who are happy with Monticello but more interested in the message than the medium. Your criticisms seem very much to me like sour grapes from one who is set in his ways and wants to take his marbles away because others want to play a different game. You were the one who wouldn't release Bob open source.
> I suggested a compromise back in August, that Andreas ignored completely. That if trunk development was split into clearly defined initiatives with separate projects, then we would be able to work together.
Is that what you were doing? I thought you were flinging mud. I certainly didn't get the impression you were offering a compromise.
> Again feel free to correct me if I am wrong.
I think I'm pissing in the breeze. Surprise me if I'm wrong.
> Keith
Eliot
> If you hadn't spent the last 6 months having a hissy fit you would find you weren't as far behind or as inconvenienced. You might also have participated in porting the closure bootstrap
Eliot,
I did participate in the bootstrap; I thought I was the first to do so, excepting perhaps Andreas.
Bob built an LPF image, with MC1.5 etc, on your first 3.10-closures image; you can download it from ftp.squeak.org, and I requested feedback or suggestions as to what to do next with it to get the debugger working, and got none.
> (which does exist as changesets on my blog site and has been adapted to three different Squeak distros so far) to your context. Instead you've chosen to disengage,
I did not choose to disengage; as I have stated several times, I had no choice, and I still have no choice.
Perhaps a timeline will explain.
1. Due to the unfortunate cancellation of an unrelated client project just a few weeks prior to this, I had no money and I was a bit down.
In this period there was a 2-month silence from Andreas and the board, not a dicky-bird. I was working hard on 'bob' at times, and Bob began producing deliverables, documentation and screencasts. Bob was auto-building developer images, one-click images, etc.
2. Andreas sent the email "This is THE new process for squeak development" that CANCELLED my work (talk about kicking a man when he is down). It ended it right there; not a single line of code has been written for Bob beyond that day.
Bob needs about an hour's work (plus a bit of debugging) to configure the automatic testing facility, and then the Bob 3.11 "process development" effort would have been complete.
3. Since then, as a direct result of Andreas' email to squeak-dev (when I had specifically asked Andreas to hold release discussions on the release-team mailing list), my client and financial situation have effectively forbidden me to work any further on "the bob process". My paying client's support of "bob" development was based on the concept that the squeak community would be using "bob", and that it would be the future platform for release-team work, providing regular updates to the base image and to our production images, and regular regression-tested derived images upon which to build and test our production images.
Since the "trunk"-based process will only yield an image once every 12-18 months, we might as well just manually rebuild our base production images every 2 years or so; we therefore no longer have a pressing need for a continuous integration server.
4. When Andreas cancelled "bob" he cancelled my income from Bob-related tools development, and pissed off my client to the extent that work which had earlier suffered somewhat at the expense of "bob" was now a priority. If I work on Bob, I effectively lose the income I do have.
> And what would be the point of maintaining an evolving package for old images?
You don't have to evolve the package when you can evolve the image just enough. Using this method LPF loads MC1.5 into Squeak 3.7, but MC1.5 does not limit itself to the lowest-common-denominator API; MC1.5 is written for the Squeak 3.10 API, and LPF evolves the images just enough.
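A sketch of the "evolve the image just enough" idea; the script names below are hypothetical, for illustration only, not LPF's actual code:

```smalltalk
"Hypothetical shape of an LPF-style load: first file in only the newer
 base-image methods the package depends on (a compatibility preamble),
 then load the unchanged package written against the newer API."
Installer install: 'CompatibilityFor3.7'.  "hypothetical preamble script"
Installer install: 'MC1.5'.                "hypothetical package script"
```

The package itself stays a single codebase; only the small preamble differs per target image.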
Cuis is based upon Squeak 3.7; Spoon is based upon 3.2. Doesn't DabbleDB still use 3.7 images as its workhorses?
Gjallar was on 3.8 up until a year or two ago, when Installer allowed it to move to 3.10.
> Eventually the old becomes the obsolete; the cost-benefit ratio falls below 1. If you want to be a curator then that's up to you, but I get the impression that this community wants to be productive and self-expressive. The past is past.
The problem with computers is that you are stuck with what you buy, for 20 years or more in some cases. You are one of the lucky ones who always gets to use the latest stuff.
For example, the Harrier jump jet nozzle models are written in PDP-11 BASIC, limited to 9999 lines of code; they still have PDP-11s.
> (& BTW the knowledge on how to implement closures is widespread (mine is based on a lisp implementation strategy), and what you're talking about is the bootstrap, not the implementation).
> I think you misunderstand me my gripe is not about making progress, it is about throwing all the knowledge into one disorganised pot, aka "trunk".
> Whatever. Looks like you failed over two years to make a new release
I didn't fail to make a release; the release wasn't the objective. Andreas finally realised that after 2 months. A version of the release image 3.11 was produced manually by a script 18 months earlier. Ken Brown had a go and did it himself. Anyone can hack an image; it takes a bit longer to produce a continuous integration server that makes an image.
The task we proposed to the board was a "continuous integration PROCESS", NOT an image.
What you forget, or don't know, is that we only made this proposal after the board had outright announced plans to cancel 3.11 and said there would be no further development of 3.x. I.e., the board at the time said we DON'T want an image; 3.10 is the end of the line for 3.x.
We piped up and said: ok, but if we had a continuous integration server, it could produce a 3.11, 3.12 etc. as stabilising maintenance releases, bringing 3.x to a solid, dependable conclusion in anticipation of the brave new world of Squeak 5.x.
Radical "change the world" work was being carried out in Spoon, Squeak 5.0, so Andreas should have taken over Spoon, which was over a year past its promised delivery date without any sign of progress updates.
Andreas and the board moved the goal posts that they had approved without even bothering to talk to us. All of a sudden we were accused of not producing an image, when that wasn't the goal.
It was pretty disingenuous to scupper all that work without even a discussion, or consideration of the implications.
> got upset when people finally lost patience
Like I said, the board had already cancelled 3.x.
> and started work again, and that you lack the objectivity to realise your part in your misfortune.
No I don't lack objectivity.
We were doing exactly what we had said we would do, and we were at the point of packaging up the final deliverables; we would have told anyone who talked to us of the situation: that we were no more than a week away from completion and potential delivery of the cherished image. Since the image is auto-generated, you simply pick your release date and it generates the image according to the status of mantis at the time. So the process of discussion would have been: ok guys, we have two weeks to tidy up a few of the mantis reports and to check things, then we will hit the button and your image will be produced.
The sudden inflammation of the discussion on squeak-dev, where complete strangers started asking where the new image was, came as a complete surprise, and I didn't even think it was worth replying to at the time, because we had already made it clear in writing, approved by the board, that we were not producing an image but the means to produce one.
There are protocols, namely that the release-team is responsible for the releases, and it was Andreas' duty to join the release team, to work with its leaders without being contrary, and to discuss release ideas on the release-team mailing list, when I had made a specific request for him to do so.
It was extremely disingenuous of him to start the release-team discussions on squeak-dev when I had explicitly asked him not to, because at the time my paying clients were on squeak-dev and could see what was happening. As a result they pulled the financial plug on me and constrained my freedom to make further benevolent contributions.
> You were the one who wouldn't release Bob open source.
I only threatened that in a moment of complete disgust and abject poverty, wondering where I would get my next meal from.
Check the repositories and the licences. I have mentioned several times that Bob is in the repos and all repos are open.
> I think I'm pissing in the breeze. Surprise me if I'm wrong.
Nope, you are not wrong, because I can't do anything; like I say, I have no choice.
Keith
On 1/22/10 4:46 AM, "keith" keith_hodges@yahoo.co.uk wrote:
> Bob built an LPF image, with MC1.5 etc, on your first 3.10-closures image; you can download it from ftp.squeak.org, and I requested feedback or suggestions as to what to do next with it to get the debugger working and got none.
I ignore all the rest of this nonsense mail except this.
Write out exactly how to get closures onto http://ftp.squeak.org/various_images/SqueakLight/MinimalMorphic.7246.zip.
Andreas' process fails and the Cuis updates also fail. I was ignorant about how to get closures into the most modular and smallest 3.10-compatible thing, so enlighten me.
If the process succeeds, I will say you were a super master and all of us a bunch of fools. If not, or if you say mumbo jumbo or attack working people, I will continue my work in silence and ignore you again.
Edgar
Hi Edgar,
As usual I don't have a clue what you are talking about. The whole point is that I don't know how to load closures, although I was one of the first to give the first closures image a try. Asking me to write you a procedure seems a little misplaced. I am pointing out that no one has made that knowledge available in a form that mere mortals like you or I can use, because the "super masters" are establishing and using processes that apply to their own images only.
For example, whatever process was used to apply closures to 3.10-basic could be used to apply closures to 3.9, because they are very similar. Pharo was based upon 3.9, so by analogy, if Stefane made a process for applying closures to 3.9, then he could have used that same process as the basis for adding closures to Pharo and to 3.10, but he chose to do it only for Pharo.
Whatever process was used to apply closures to 3.8 could also be used to apply closures to Cobalt and Etoys.
Therefore if you want closures for MinimalMorphic as far as I know you are on your own, and that was my point.
Keith
On 1/22/10 8:51 AM, "keith" keith_hodges@yahoo.co.uk wrote:
Therefore if you want closures for MinimalMorphic as far as I know you are on your own, and that was my point.
So you don't have the answers....
You ever read Alan Kay ?
Because he started this as a kind of ecosystem. Once there was Smalltalk; now we have many. Once there was only one Squeak; now we have several.
I do not force anyone to follow me.
The current 3.11 is my idea, and Andreas has the skill I didn't have when I cooked 3.10 and started 3.11.
Which I can't do anymore, as I was kicked in the ass before you were...
It was all on the web, so I will not repeat things I have said a thousand times.
These days I share with my SqueakRos friends and with some here.
The Trunk process works, whether you like it or not.
So either join Cuis or Trunk or Minimal or Pharo, or start your own.
And let the mouse roar again.
I have SqueakLight in several flavours, FunSqueak, and a complete set of working things, done the long way: http://www.squeakros.be.tc/ (because some bad guy stole http://www.squeakros.org).
And maybe I am too old for this now, but I know how to "use a sponge to tighten a screw", in the wise words of a friend.
I am running for the Board this year, so, old foe, vote for me.
Edgar
2010/1/22 Edgar J. De Cleene edgardec2001@yahoo.com.ar:
And let the mouse roar again.
Amen! :)
It is much more fun hacking in Squeak than fighting on mailing lists. The first gives you some low-hanging fruit, while the second gives nothing. Some fruit will spoil without being tasted, but this doesn't mean that people will even think about ceasing to grow fruit, or to buy it.
The Trunk process works, whether you like it or not.
Not for me it doesn't. The first commit I would want to make would break trunk.
In this community the following two approaches do not work.
1. We all use one package in common; it is diverging; it is not easy to maintain. Let's put it in a common place, and work on it together.
2. I have this idea, I wonder if someone else is already doing this, perhaps I can help them.
Ironically, I have gone from being the person most eager and willing to work with others, the person most likely to contribute to someone else's project, to being the person that is least desirable to work with.
My crime? - trusting the board, and trusting anyone to respect the time and effort invested.
Stefane had a choice: shall I join in with the public version of MC1.5, or not? He chose NOT. Stefane had a choice: shall I join in with the public version of SUnit, or not? He chose NOT. Andreas had an obvious choice: shall I, or shall I not, discuss the idea of trunk with the existing release team? He chose NOT. (And this when his job on the board was release-team liaison!)
Andreas had an obvious choice: shall I, or shall I not, see how to help the existing release team along? He chose NOT. Andreas had an obvious choice: shall I, or shall I not, base trunk on "3.10-build"? He chose NOT.
and the rest as they say is history.
This is not a technical problem; some fundamental shifts in thinking appear to be needed, and they aren't happening as far as I can see.
Keith
On 1/22/10 11:04 AM, "keith" keith_hodges@yahoo.co.uk wrote:
to being the person that is least desirable to work with.
There must be a reason for that...
Edgar
Dear Keith, dear community,
Am 2010-01-22 um 07:46 schrieb keith:
Bob built an LPF image, with MC1.5 etc, on your first 3.10-closures image, you can download it from ftp.squeak.org and I requested feedback or suggestions as to what to do next with it to get the debugger working and got none.
Having followed some discussions, I'm curious how and where I can get Bob to use it as an auto-image-builder. Though I have watched your screencast, Keith, I am unable to find a message on -dev, or anything in the screencast, pointing me to where to get it and how to install/use it.
So Long, -Tobias
On 22 Jan 2010, at 09:57, Tobias Pape wrote:
Dear Keith, dear community,
Am 2010-01-22 um 07:46 schrieb keith:
Bob built an LPF image, with MC1.5 etc, on your first 3.10-closures image, you can download it from ftp.squeak.org and I requested feedback or suggestions as to what to do next with it to get the debugger working and got none.
Having followed some discussions, I'm curious how and where I can get Bob to use it as an auto-image-builder. Though I have watched your screencast, Keith, I am unable to find a message on -dev, or anything in the screencast, pointing me to where to get it and how to install/use it.
So Long, -Tobias
http://www.squeaksource.com/Bob
http://ftp.squeak.org/3.11/images/0.9Bob.zip
combined should give you the latest
Keith
Hello, Am 2010-01-22 um 11:56 schrieb keith:
http://www.squeaksource.com/Bob
http://ftp.squeak.org/3.11/images/0.9Bob.zip
combined should give you the latest
Thanks for your effort. Alas, the first one gives me 'Global: No Access', and I think the second should be http://ftp.squeak.org/3.11/images/bob0.9.zip
Would you mind allowing access to Bob?
So Long, -Tobias (topa)
Hi Tobias,
I have configured my router so you can ftp to an actual bob installation for a while.
ftp://squeak:viewpoints@bob.safeprayer.com
I am also available on irc squeak, and sometimes on skype keith_hodges
regards
Keith
=== New signature: The friendly smalltalker ;-)
On Thu, Jan 21, 2010 at 10:46 PM, keith keith_hodges@yahoo.co.uk wrote:
If you hadn't spent the last 6 months having a hissy fit you would find you weren't as far behind or as inconvenienced. You might also have participated in porting the closure bootstrap
Eliot,
I did participate in the bootstrap, I thought I was the first to do so, excepting perhaps Andreas.
Bob built an LPF image, with MC1.5 etc, on your first 3.10-closures image, you can download it from ftp.squeak.org and I requested feedback or suggestions as to what to do next with it to get the debugger working and got none.
Show me the message. I don't remember any such message. I typically do help in these situations.
(which does exist as changesets on my blog site and has been adapted to three different Squeak distros so far) to your context. Instead you've chosen to disengage,
I did not choose to disengage; as I have stated several times, I had no choice, and I still have no choice.
Perhaps a timeline will explain.
- Due to the unfortunate cancellation of an unrelated client project just a few weeks prior to this, I had no money and I was a bit down.
In this period there was a two-month silence from Andreas and the board, not a dicky-bird. I was working hard on Bob, at times, and Bob began producing deliverables, documentation and screencasts. Bob was auto-building developer images, one-click images, etc.
- Andreas sent the email "This is THE new process for squeak development" that CANCELLED my work (talk about kicking a man when he is down). It ended it right there; not a single line of code has been written for Bob beyond that day.
Bob needs about an hour's work (plus a bit of debugging) to configure the automatic testing facility, and then the Bob 3.11 "process development" effort would have been complete.
- Since then, as a direct result of Andreas' email to squeak-dev (when I had specifically asked Andreas to hold release discussions on the release-team mailing list), my client and financial situation have effectively forbidden me to work any further on "the Bob process". My paying client's support of Bob development was based on the concept that the Squeak community would be using Bob, and that it would be the future platform for release-team work, providing regular updates to the base image and to our production images, and regularly regression-tested derived images upon which to build and test our production images.
Since the "trunk"-based process will only yield an image once every 12-18 months, we might as well just manually rebuild our base production images every 2 years or so; we therefore no longer have a pressing need for a continuous integration server.
- When Andreas cancelled Bob he cancelled my income from Bob-related tools development, and pissed off my client to the extent that work which had earlier suffered somewhat at the expense of Bob was now a priority. If I work on Bob, I effectively lose the income I do have.
and what would be the point of maintaining an evolving package for old images?
You don't have to evolve the package when you can evolve the image just enough. Using this method LPF loads MC1.5 into Squeak 3.7, but MC1.5 does not limit itself to the lowest-common-denominator API: MC1.5 is written for the Squeak 3.10 API, and LPF evolves the image just enough.
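Keith's "evolve the image just enough" idea can be pictured as a preload step that backfills only the newer-API methods the old image is missing, before the newer package is loaded. The following doit is a sketch of that pattern, not LPF's actual code; the compatibility selector chosen and the Installer incantation are illustrative assumptions.

```smalltalk
"Sketch: before loading a package written against the 3.10 API into a
3.7 image, compile just the missing compatibility methods.
The selector used here is purely illustrative."
(SequenceableCollection includesSelector: #allButFirst) ifFalse:
	[SequenceableCollection
		compile: 'allButFirst
	"Answer a copy of the receiver without its first element."
	^ self copyFrom: 2 to: self size'
		classified: 'compatibility'].

"Only once the image has been evolved just enough is the newer package
loaded (repository and package names assumed):"
Installer ss project: 'MC15'; install: 'Monticello'.
```

The point of the pattern is that the package itself stays written against the newer API; only the hosting image is patched, so one code base can serve 3.7 through 3.10.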
Cuis is based upon Squeak 3.7; Spoon is based upon 3.2. Doesn't DabbleDB still use 3.7 images as its workhorses?
Gjallar was on 3.8 up until a year or 2 ago, when Installer allowed it to move to 3.10
Eventually the old becomes the obsolete; the cost-benefit ratio falls below
- If you want to be a curator then that's up to you, but I get the impression that this community wants to be productive and self-expressive. The past is past.
The problem with computers is, you are stuck with what you buy for 20 years or more in some cases. You are one of the lucky ones who always gets to use the latest stuff.
For example, the Harrier jump jet nozzle models are written in PDP-11 BASIC, limited to 9999 lines of code; they still have PDP-11s.
(& BTW the knowledge on how to implement closures is widespread (mine is based on a lisp implementation strategy), and what you're talking about is the bootstrap, not the implementation).
I think you misunderstand me; my gripe is not about making progress, it is about throwing all the knowledge into one disorganised pot, aka "trunk".
Whatever. Looks like you failed over two years to make a new release
I didn't fail to make a release; the release wasn't the objective. Andreas finally realised that after 2 months. A version of the release image 3.11 was produced manually by a script 18 months earlier; Ken Brown had a go and did it himself. Anyone can hack an image; it takes a bit longer to produce a continuous integration server that makes an image.
The task we proposed to the board was a "continuous integration PROCESS", NOT an image.
What you forget, or don't know, is that we only made this proposal after the board had outright announced plans to cancel 3.11, and said there would be no further development of 3.x; i.e. the board at the time said: we DON'T want an image, 3.10 is the end of the line for 3.x.
We piped up and said: OK, but if we had a continuous integration server, it could produce a 3.11, 3.12, etc., as stabilising maintenance releases, bringing 3.x to a solid, dependable conclusion in anticipation of the brave new world of Squeak 5.x.
Radical "change the world" work was being carried out in Spoon, Squeak 5.0; so Andreas should have taken over Spoon, which was over a year past its promised delivery date without any sign of progress updates.
Andreas and the board moved the goalposts that they had approved, without even bothering to talk to us. All of a sudden we are accused of not producing an image, when that wasn't the goal.
It was pretty disingenuous to scupper all that work without even a discussion, or consideration of the implications.
, got upset when people finally lost patience
Like I said the board had cancelled 3.x already.
and started work again, and that you lack the objectivity to realise your part in your misfortune.
No I don't lack objectivity.
We were doing exactly what we had said we would do, and we were at the point of packaging up the final deliverables. We would have told anyone who talked to us of the situation: that we were no more than a week away from completion and potential delivery of the cherished image. Since the image is auto-generated, you simply pick your release date and it generates the image according to the status of Mantis at the time. So the process of discussion would have been: "OK guys, we have two weeks to tidy up a few of the Mantis reports and to check things; then we will hit the button and your image will be produced."
The sudden inflammation of the discussion on squeak-dev, where complete strangers started asking "where is the new image?", was a complete surprise, and I didn't even think it was worth replying to at the time, because we had already made it clear, in writing approved by the board, that we were not producing an image but the means to produce one.
There are protocols, namely that the release team is responsible for the releases, and it was Andreas' duty to join the release team, to work with its leaders without being contrary, and to discuss release ideas on the release-team mailing list, as I had specifically requested him to do.
It was extremely disingenuous of him to start the release-team discussions on squeak-dev when I had explicitly asked him not to, because at the time my paying clients were on squeak-dev and could see what was happening. As a result they pulled the financial plug on me and constrained my freedom to make further benevolent contributions.
You were the one who wouldn't release Bob open source.
I only threatened that in a moment of complete disgust and abject poverty, wondering where I would get my next meal from.
Check the repositories and the licences. I have mentioned several times that Bob is in the repos and all repos are open.
I'm very glad to hear both that abject poverty is no longer pressing and that Bob is available.
I think I'm pissing in the breeze. Surprise me if I'm wrong.
Nope you are not wrong, because I can't do anything, like I say I have no choice.
I think you have the choice to dump your animus and reengage constructively with the community. I for one have no patience for your careful rationality when it is interleaved with animus, negativity, (from my perspective self-justificatory) rehashing of the past, and belittling of others' contributions. If you haven't already done so, I suggest you do need to see http://www.ted.com/talks/lang/eng/clay_shirky_on_institutions_versus_collabo... and think about its implications.

You're not helping, and few if any are going to work with you while you rant on about how bad and evil some of us are. My gut reaction is "fuck you" and I know I'm not alone. So instead of saying "I did it all, I was betrayed", try to dump that crap and start to contribute.

I'm on the verge of unsubscribing from Squeak-dev and Pharo because the communication costs are too high. There are hundreds of messages a day, many of them "will you commit this?" and "great, thanks for committing that", lots of animus messages in this thread, and precious little of the technical communication I participate here for. Can we please get back to writing code, collaborating, and making progress with Pharo and Squeak instead of accusing and chatting and (as I'm doing) bullshitting? I'm 51 and I'm tired of this crap.
Keith
Hi Eliot,
some thoughts for you...
Criticism may not be agreeable, but it is necessary. It fulfils the same function as pain in the human body. It calls attention to an unhealthy state of things.
-- Winston Churchill
To build may have to be the slow and laborious task of years. To destroy can be the thoughtless act of a single day.
-- again Winston Churchill
Can we please get back to writing code, collaborating and making
progress with Pharo and Squeak
Sure you can, you are everyone's darling.
So 4 years of trying my damnedest to do exactly as you recommend has simply helped my understanding of Jesus' words. We all know the first part of the sentence; however, the second has suddenly gained a new ring of truth.
Matthew 7:6 “Do not give dogs what is sacred; do not throw your pearls to pigs. If you do, they may trample them under their feet, and then turn and tear you to pieces."
When you say "ah, but that is life in software, it's not an ideal world, get over it, live with it, the end justifies the means", you side with the pigs.
To the board:
~ Fascism is not defined by the number of its victims, but by the way it kills them. - Sartre
regards
victim number 2
Keith
On 23.01.2010, at 13:23, keith wrote:
To the board:
~ Fascism is not defined by the number of its victims, but by the way it kills them. - Sartre
regards
victim number 2
Keith
Keith, please refrain from posting to this list, at least until you are willing to do so without insults. This is not acceptable anymore.
In compliance with Godwin's Law, this thread of discussion should stop immediately.
- Bert -
On Fri, Jan 22, 2010 at 01:12:34AM +0000, keith wrote:
The same goes for bug fixes. Previously we had 100 fixes ready to load into 3.10 from mantis, all documented, and supplied in their natural form "changesets".
This is a misconception that really does deserve comment. Those "fixes ready to load" contained errors, pointed to incomplete and obsolete versions of patches, and could not possibly have functioned properly had they been loaded into multiple flavors of the image. The assertion that this strategy was going to work is complete utter nonsense, as is the claim that the project was "almost done".
Regardless of any real or perceived injustices, the plain simple fact is that the emperor had no clothes.
Dave
On 22 Jan 2010, at 12:25, David T. Lewis wrote:
On Fri, Jan 22, 2010 at 01:12:34AM +0000, keith wrote:
The same goes for bug fixes. Previously we had 100 fixes ready to load into 3.10 from mantis, all documented, and supplied in their natural form "changesets".
This is a misconception that really does deserve comment. Those "fixes ready to load" contained errors, pointed to incomplete and obsolete versions of patches, and could not possibly have functioned properly had they been loaded into multiple flavors of the image. The assertion that this strategy was going to work is complete utter nonsense, as is the claim that the project was "almost done".
Regardless of any real or perceived injustices, the plain simple fact is that the emperor had no clothes.
Dave
Incorrect, it worked for me for several years. I produced the first proposed 3.9.1 using this process with bob version 1 in 2006.
LPF used this process successfully, in all versions of squeak for what is now several years.
Squeak 3.10-build included an additional 17 of those fixes.
(When you have a fix that works, you keep the date stamp to ensure someone doesn't change it from under you.)
My working images had been running with many of those fixes (over and above the 17 mentioned above) for many months. The only difference was that I hand-scripted the fixes I wanted, because the automatic interface to Mantis had not been completed.
With a monthly release cycle you only need to handle 100-200 fixes per cycle, and you are making reasonable progress. I agree that if you try to go 12 months managing potential fixes this way, you will probably run into problems.
regards
Keith
On Jan 21, 2010, at 12:21 PM, keith wrote:
We set the goal "We don't want an image, we want a kernel, that you can build distributions from." So what do you and the board do? Yep the opposite! You build a monolithic image using a process that can only build a monolithic image.
Off the top of my head, what about the fact that Traits are now unloadable/reloadable?
Josh
Josh Gargus wrote:
On Jan 21, 2010, at 12:21 PM, keith wrote:
We set the goal "We don't want an image, we want a kernel, that you can build distributions from." So what do you and the board do? Yep the opposite! You build a monolithic image using a process that can only build a monolithic image.
Off the top of my head, what about the fact that Traits are now unloadable/reloadable?
As well as ReleaseBuilder, ScriptLoader, 311Deprecated, 39Deprecated, Universes, SMLoader, SMBase, Installer-Core, VersionNumberTests, VersionNumber, Services-Base, PreferenceBrowser, Nebraska, CollectionsTests, GraphicsTests, KernelTests, MorphicTests, MultilingualTests, NetworkTests, ToolsTests, TraitsTests, XML-Parser, Traits, SystemChangeNotification-Tests, FlexibleVocabularies, EToys, Protocols, Tests, SUnitGUI.
Cheers, - Andreas
Josh Gargus wrote:
On Jan 21, 2010, at 12:21 PM, keith wrote:
We set the goal "We don't want an image, we want a kernel, that you can build distributions from." So what do you and the board do? Yep the opposite! You build a monolithic image using a process that can only build a monolithic image.
Off the top of my head, what about the fact that Traits are now unloadable/reloadable?
Very good. Is it the same simpler traits as pharo?
As well as ReleaseBuilder, ScriptLoader, 311Deprecated, 39Deprecated, Universes, SMLoader, SMBase, Installer-Core, VersionNumberTests, VersionNumber, Services-Base, PreferenceBrowser, Nebraska, CollectionsTests, GraphicsTests, KernelTests, MorphicTests, MultilingualTests, NetworkTests, ToolsTests, TraitsTests, XML-Parser, Traits, SystemChangeNotification-Tests, FlexibleVocabularies, EToys, Protocols, Tests, SUnitGUI.
We reorganised all the test classes to be more logically organised, next to the packages they are testing, while at the same time being unloadable and loadable. However, the reorganisation required a different PackageInfo, which you can't load using your process. Also, MC1.5 has its tests in a separate package.
We made Nebraska unloadable, and put the loadable package in Sake/Packages, because the process for making things unloadable was to load an unloadable version of Nebraska over the top of the original.
You haven't actually done anything on this topic, then.
Keith
On Jan 21, 2010, at 1:34 PM, keith wrote:
Josh Gargus wrote:
On Jan 21, 2010, at 12:21 PM, keith wrote:
We set the goal "We don't want an image, we want a kernel, that you can build distributions from." So what do you and the board do? Yep the opposite! You build a monolithic image using a process that can only build a monolithic image.
Off the top of my head, what about the fact that Traits are now unloadable/reloadable?
Very good. Is it the same simpler traits as pharo?
What's your point? I responded to your false statement that the process "can only build a monolithic image". I'm not talking about cross-fork compatibility here.
Josh
Very good. Is it the same simpler traits as pharo?
What's your point?
I write and maintain packages for users in both Squeak and Pharo; surely my point is obvious: I don't want extra work, and I don't want to have to maintain separate code bases. I was just hoping for a simple answer; "yes" would be good. I wasn't being cynical, I was just interested to know.
I responded to your false statement that the process "can only build a monolithic image". I'm not talking about cross-fork compatibility here.
I can unload packages from any image with Sake/Packages; I don't need the trunk process for that. But let's see your process build an image without Monticello, or with Morphic3, or with Rio instead of FileDirectory.
Basically you are not able to offer anything fundamentally different from what has gone before. I don't want FileDirectory in my image; I hate it with a passion. My entire motivation for hacking the core in the first place is to get rid of FileDirectory.
"Fail to plan, plan to fail", I think the saying goes.
Keith
On Jan 21, 2010, at 3:10 PM, keith wrote:
Very good. Is it the same simpler traits as pharo?
What's your point?
I write and maintain packages for users in both squeak and pharo, surely my point is obvious,
No, it was not obvious. Even upon rereading, it doesn't seem like you're conceding that your statement that the process "can only build a monolithic image" is mistaken... it seems like you're trying to change the subject.
I don't want extra work, and I dont want to have to maintain separate code bases.
Fair enough. I believe that Andreas went to some lengths to maintain compatibility.
I was just hoping for a simple answer, "yes" would be good. I wasn't being cynical, I was just interested to know.
I apologize for my misunderstanding. I believe the answer is no.
I responded to your false statement that the process "can only build a monolithic image". I'm not talking about cross-fork compatibility here.
I can unload packages from any image with Sake/Packages; I don't need the trunk process for that. But let's see your process build an image without Monticello, or with Morphic3, or with Rio instead of FileDirectory.
I never claimed that the trunk process is necessary for package unloading. You're the one who claimed that the trunk process can only build a monolithic image, and I debunked it with a counter-example.
I realize that you're responding to emails from many people, but let's please try to follow the argument, otherwise we're all wasting our time.
Basically you are not able to offer anything fundamentally different from what has gone before,
In terms of unloading packages, maybe not (are you now admitting that real progress has been made in that regard?). But in terms of pace of development, obviously so.
I don't want FileDirectory in my image; I hate it with a passion. My entire motivation for hacking the core in the first place is to get rid of FileDirectory.
Gaaah. No argument there.
Cheers, Josh
"Fail to plan, plan to fail", I think the saying goes.
Keith
No, it was not obvious. Even upon rereading, it doesn't seem like you're conceding that your statement that the process "can only build a monolithic image" is mistaken... it seems like you're trying to change the subject.
Josh,
You are correct: I am not conceding the point that you can only build a monolithic image. Could you build an image with the same contents as Cuis using this trunk process? I don't think so. Could you maintain and build images based on Cuis using this process? Again, no.
The community has been asking for a kernel/minimal image for years; I think it is somewhat short-sighted to propose a development process that cannot actually fulfil that goal.
I never claimed that the trunk process is necessary for package unloading. You're the one who claimed that the trunk process can only build a monolithic image, and I debunked it with a counter-example.
Monolithic image, to me, means an image in which the code-loading tools, the compiler, the UI, the transcript, etc. are all tied inextricably together and will never part.
Basically you are not able to offer anything fundamentally different from what has gone before,
In terms of unloading packages, maybe not (are you now admitting that real progress has been made in that regard?). But in terms of pace of development, obviously so.
The pace of development is no different. If Andreas had said in his original email, "Let's all get cracking on using Keith's stuff that he is documenting for us in the screencasts he released yesterday", then the pace of development would be identical. He didn't; instead he said, "hey everyone, I knocked this trunk thing up over a weekend, let's all pile in".
Alternatively, if Andreas had said, "we now have the test and integration server to integrate new initiatives, so how about proposing some new initiatives and getting cracking on them; now you can be assured that they will be integrated in your allocated slot in the process", then the pace would have been much faster, because everyone's contributions would be testable and usable. Whereas, for example, if I contribute the message-eating Null to trunk, I bet you a tenner Andreas will take it out again.
I didn't pick a slow development process on purpose to thwart people contributing. I didn't ask for direct contributions because we already had about 200 ready to load automatically from Mantis, thanks to Mr Cellier. In Bob 3.11 the image was already tracking Mantis, so all you had to do to make a contribution was to switch the status of the fix to "testing" in Mantis, and it would be integrated in the next build.
The Bob-based process asks people to work on initiatives in their own development areas, and to hook the results of those initiatives into a continuous integration and testing process which produces an alpha image. This would arguably be faster, because you wouldn't have people treading on each other's toes. Secondly, you wouldn't have to integrate anything into the release image that wasn't 100% complete and tested. Thirdly, your image is regression-tested against all packages that use it, AND there is a mechanism for fixing broken packages and feeding the fixes back to their maintainers. The Bob process was not just maintaining the image; it was also able to contribute to maintaining the entire Squeak code base. How else would I be able to plan to eradicate FileDirectory?
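The cycle described above (fixes flagged in Mantis, filed into a fresh image, regression-tested, then published) can be outlined roughly as follows. Every class and selector relating to Mantis here is hypothetical; Bob's real code lives in the repositories mentioned elsewhere in the thread, and only the SUnit and snapshot calls are standard Squeak.

```smalltalk
"Hypothetical outline of one build cycle of a Bob-like process."
| fixes result |
"1. Gather every Mantis report whose status was switched to 'testing'.
    MantisClient and its selectors are invented for illustration."
fixes := MantisClient new reportsWithStatus: #testing.
"2. File each attached changeset into the freshly built base image."
fixes do: [:report |
	report changesets do: [:stream |
		ChangeSet newChangesFromStream: stream named: report title]].
"3. Run the image's test suite and publish only a green build."
result := TestCase buildSuite run.
result hasPassed
	ifTrue: [SmalltalkImage current snapshot: true andQuit: true]
```

Under a scheme like this, a contribution really is just a status change in the tracker; the integration, testing and image generation are unattended.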
regards
Keith
Keith, if you still didn't understand my message, let me elaborate. 1. All your ideas/arguments about technical issues are highly rational. 2. But instead of using the power of arguments and looking for compromises, you have chosen to fight with everyone, defending your position, up to the point that it no longer really matters who is right or wrong, and in what. You just stubbornly keep fueling a conflict which nobody except you wants to have. This is irrational.
Here is the formula, which you should learn: (rational * 1000000^100000 + irrational) / (100000^100000) = irrational.
Keith, if you still didn't understand my message, let me elaborate.
- All your ideas/arguments about technical issues are highly rational.
- But instead of using the power of arguments, and looking for compromise
1)
I proposed a workable compromise immediately after Andreas' original email, it was ignored!!!!
2)
There is no compromise; you aren't providing me with a workable solution. I can't use trunk; the trunk process is not managing anything of interest or use to me or my client, because I was providing tools:
a) tools that trunk can't load, and b) tools that are managed outside of trunk.
Anything outside of trunk is ignored as a possible solution, because you still have your monolithic blinkers on, and the board is still driven by the idea that "oh my goodness, Pharo has a new image and we don't".
You are forgetting that in Bob you have a viable "extreme programming style continuous integration development platform", which in Pharo they don't have; but Bob is ONLY viable IF YOU USE IT.
When you have a project manager whose goal is to produce an image with fancy fonts (lol), rather than a "comprehensive extreme programming capable development platform", you are going to produce conflicting strategies.
You are all so wrapped up in hacking trunk, you aren't the slightest bit interested in moving forward on the tools. Sure, Andreas wants a package manager, though on his terms only. I can't port my production images to trunk easily, and when 3.11 is released I could spend 2-3 months porting my stuff; but since you lot wouldn't even do me the courtesy of starting with 3.10-build, you make it harder for me. You are also probably going to pick a different package management system, so all my current package definitions will have to be redone from scratch; any work I do or have done on tools has been discarded by you lot.
I rejoined squeak-dev and started the fight again because I suddenly realised that you had actually left me in a position where I had no way forward at all. I had thought I could move to Pharo, until I actually tried their image. Ugh, what a disaster (OmniBrowser), and guess what: all my package definitions will have to be redone from scratch.
Anyhow in six months time the client is going to say, "what shall we do next" and I will end up saying, why don't you get someone else to do it in python or whatever you like, because squeak is not a viable platform any more, nor is pharo.
The viability of the platform is bound up in the politics, and in the community's ability to collaborate and share, more than in the technical side of things. Watching one individual decree that the way forward is an unplanned hack-away process, producing an incompatible image without any tools being provided for us to use in our work, is damaging to the viability of squeak as a professional development platform. And it appears that there isn't the will in the community to develop, provide, support and USE the tools we need to do a professional, tested job.
The current focus of the community, as endorsed by the board, is on hacking at trunk, which is an irrelevant task as far as we are concerned, since our starting image is basically arbitrary because we use an automated build; 3.8 works fine for us. Cuis is as attractive a target as anything because it is small and fast (but it has no tools yet!).
However, now that the "release-team" is hacking at trunk rather than providing a working process and tools which we could adopt in-house to do a good professional job for our clients, Squeak ceases to be an interesting platform: it hasn't got any continuous integration tools, it has no vision for such tools, and those it has got have just been discarded by the community without a second thought.
Now, if I continue to develop these tools for my use only, while you are all hell bent on building trunk and doing everything in exactly the opposite way to what is of any actual use, using either no tools or other tools that are not compatible with mine, I will not and cannot compete. You win, and my client will end up back with python where they started.
Juan has it right, his vision is to produce "the best kernel he can", but not on any account to interfere with the users of the kernel and what they might want to do with it. This frees me up to implement "a grand vision", without having it trashed at a moments notice, by someone else coming along with a lesser vision.
My client chose squeak because of the potential and the open collaborative dynamics of the community that they saw on irc that were interested in tools, and extreme programming approaches, particularly release often, and test always. If I use BOB to build and regression test my code it only makes sense if the seaside team also uses Bob to build and regression test their code and the same goes for magma etc etc.
Those open, collaborative dynamics have now gone, as has the interest in tools, and it's a case of what Andreas says goes; and what Andreas says is that we are going to produce a 3.11 image come hell or high water, without tools, without regression testing and without a rapid release cycle integrating carefully planned and proven projects which are separately published. Neither the squeak board nor the trunk developers are doing anything to make squeak a first class development platform that is developed daily using continuous integration and extreme programming tools and techniques.
There is this new 3.11 image promised down the line, but it's not developed with our needs in mind, it isn't developed with the tools we want to use, it isn't tested against the packages we want to use yet, and uncle Tom Cobbly (anyone) can change an API at the drop of a hat. Oh, and it will be a year until the subsequent release.
3)
The board is not providing a compromise basis either, since it refuses to provide "terms of reference".
you have chosen to fight with everyone,
I am only virtually fighting against the impossible situation you have put me in, and the complete lack of thinking going on here. To be clear once more: I can build a production image on any base image, so the trunk image itself is irrelevant. What is relevant is the process by which the 3.11 image is developed on an ongoing basis, and the bug fixes that are published against individual releases. If I choose 3.11 as the basis for a project, I want to know that bug fixes will be provided for THAT image, not for trunk2; the client wants to know that bug fixes are available for an image that has a bug found in it, not an image that we have to wait 18 months for.
I am also fighting against those who don't bother communicating. For example, Craig, where are you? You are as responsible for this mess as anyone else. Neither Andreas nor the board emailed me for 2 months to ask for a progress update. You will see that I mellow somewhat when people enter into a conversation with me. Eliot talks sense, and Josh too.
But no matter how much sense they make, no compromise is being offered, because you still see the future of squeak as an image release, and I see it as a development platform and series of processes that need tools (which don't care about the image). While the board puts the power into Andreas' hands to dictate the future of squeak, there is no future of squeak the development platform, there is only a 3.11 image, big deal.
defending your position, up to the point that it no longer matters who is right or wrong, or in what way. You just stubbornly keep fueling a conflict which nobody except you wants to have.
While you continue to use the argument "the end justifies the means", you are picking the fight. Just because no one stands up to you doesn't mean it is an acceptable attitude to have. And "the end" in this case is to divert valuable resources into a minimally relevant hole called "trunk".
Plot the number of users of images derived from 3.7, 3.8, 3.9, and 3.10 and see how it falls. 3.8 still has the most users; 3.11 will have about 20 users max by the time you have finished.
Like I said "Terms of Reference" are important, but a new image developed without tools is not.
The deliverable of squeak is a published image, AND a toolset for building and deriving distribution images, AND up-to-date package definitions for all packages, AND a Bug Tracker that is used to publish bug fixes against said published image.
You are right I don't expect any compromise to be possible. Andreas has it nailed up so there isn't any possible. But at least this way people might actually think...
cheers
Keith
Oh and if you want to actually remove SUnit there is a dummy stub package in squeaksource.com/Testing
Keith
2010/1/21 keith keith_hodges@yahoo.co.uk:
All 'grand' projects grow out from small ones. Which makes the little ones as valuable as any of your 'grand' projects. I'd prefer seeing 10 little projects popping out each month than 1 grand project popping out once in 2 years.
Igor,
I think you need to think about what you are saying more carefully, you are arguing against yourself here.
The trunk process is a "grand project" which hasn't produced anything of any use for 6 months. It signs up the community for an ongoing wait of 12-18 months per release. It ties you in to the "grand project" ethos which we all said we didn't want.
My process produced a stream of useful projects popping up along the way at regular intervals; that is what you are saying you want, isn't it?
We set the goal "We don't want an image, we want a kernel, that you can build distributions from." So what do you and the board do? Yep the opposite! You build a monolithic image using a process that can only build a monolithic image.
Furthermore the process I produced (and finished) with Bob, was designed to produce monthly or bi-monthly releases, with all fixes auto documented. Which is also what we wanted. The goal is to "release early and often", so what does the board do, yep exactly the opposite. A new process that will take a year to produce anything, and nothing will be auto-documented.
So you say you want something every month or so, but defend to the hilt the process which will NOT deliver it.
You had a release of Bob, in an image, loaded for you to play with in February (did you download it?); you had a release of Sake/Packages 18 months ago (you told me you didn't use that either). The problem is that you all seem to say you want small projects popping up regularly, but when they pop up you don't use them. Furthermore, you seem to be working arduously to make those small projects, which apparently you desperately want, irrelevant and obsolete, when they are still useful, and in use today.
We want squeak to go forwards not backwards.
There was no grand project to produce the 3.11; there was a process. In case you don't know, a process is a way of thinking about the problem and applying yourself to achieve a defined goal. The board approved the goal of providing this process. The process was being used, and was working. It had not quite produced a result yet, but I was making videos showing THE PROCESS. The deliverable of the 3.11 effort was not an image but the process that would enable the community to build future images, and that is what I was documenting and making videos of.
The actual deliverable of 3.11 could have been produced any time, the script was written 18 months ago. 60% of it was in LPF already, and we had plenty of users, and a downloadable image. All you had to do was run the script manually. Ken Brown had a go, he was convinced. But the deliverable was not the image it was the process, that is what the board approved, so having a go at me for not producing an image, is moving the goal posts.
The purpose of this process was to support extreme programming practices for all users of squeak, but showcased in the release of 3.11. Now we are locked in to a "wait 18 months for a release process all over again".
I never advocated a grand release process, that's what I am complaining about, the dog returning to its vomit. We already decided that this wasn't the way forward.
Your choice to mock me, master Yoda, is hardly conducive to a positive way forward.
I am mocking because you seem to be walking in circles. You presented a number of valid points in this topic, but that's not the point. My point is that they don't have any value if you don't stop being destructive, destroying good relations with everyone you know here. Who's gonna follow you? Who's gonna help you with implementing them? Who's gonna use the artifacts you made, knowing that at some moment you can declare everyone an enemy and simply stop being supportive?
The point I still don't think you are getting is that while squeak is running on an 18 month release cycle producing a monolithic image, which only serves the vision of the person who built it, and while processes are locked in to that way of thinking, it is already irrelevant; hence my search for something else.
Within the Smalltalk community which invented extreme programming, we are basically a bit of a laughing stock, since we cannot produce a release in 20 minutes.
Keith
Write a tool to leverage the trunk activity then. Stop whining about losing control (via Mantis). People like a new and easy way to contribute; otherwise nobody would put any commits in trunk. Is that so hard to understand?
Is there somewhere a list of the users that have committed changes to the trunk? Or how many people have been contributing with this new model?
I for sure have seen commits from Levente Nicolas Cellier Andreas Raab
Who else? Is an honest question.
El jue, 21-01-2010 a las 22:53 -0600, Miguel Enrique Cobá Martinez escribió:
Write tool to leverage the trunk activity then. Stop whining about losing control (via Mantis). People like a new and easy way to contribute, otherwise nobody would put any commits in trunk. Is this something that hard to understand?
Is there somewhere a list of the users that have commited changes to the trunk? Or how many people have been contributing with this new model?
I for sure have seen commits from Levente Nicolas Cellier Andreas Raab
And of course you Igor.
Even better would be a list with number of commits per person. Just to know.
Who else? Is an honest question.
Thanks
Hi,
Am 22.01.2010 um 05:58 schrieb Miguel Enrique Cobá Martinez <miguel.coba@gmail.com>:
Even better would be a list with number of commits per person. Just to know.
... to know what?
Who else? Is an honest question.
I've committed a thing or two, but these things are much much less important and relevant than the contributions of the people you already mentioned.
I don't know if there's a list in the form you asked for somewhere, but the long list on http://source.squeak.org/trunk/ could surely be used to extract that information with a simple shell script ... too bad I'm not on my Mac right now. :-)
Best,
Michael
Miguel Enrique Cobá Martinez wrote:
Is there somewhere a list of the users that have commited changes to the trunk?
Try this:
bag := Bag new.
(MCHttpRepository location: 'http://source.squeak.org/trunk' user: '' password: '')
    allFileNames do: [:fname |
        (fname endsWith: '.mcz') ifTrue: [
            bag add: ((fname copyAfterLast: $-) copyUpTo: $.)]].
bag sortedCounts.
Cheers, - Andreas
Hi,
Am 22.01.2010 um 06:18 schrieb Andreas Raab andreas.raab@gmx.de:
Try this: ...
my bad. Why did I write "shell script"? ;-)
Thanks,
Michael
El jue, 21-01-2010 a las 21:18 -0800, Andreas Raab escribió:
Miguel Enrique Cobá Martinez wrote:
Is there somewhere a list of the users that have commited changes to the trunk?
Try this:
bag := Bag new. (MCHttpRepository location: 'http://source.squeak.org/trunk' user: '' password: '') allFileNames do:[:fname| (fname endsWith: '.mcz') ifTrue:[ bag add: ((fname copyAfterLast: $-) copyUpTo: $.)]]. bag sortedCounts.
With this quickly modified code:
bag := Bag new.
(MCHttpRepository location: 'http://source.squeak.org/trunk' user: '' password: '')
    allFileNames do: [:fname |
        (fname endsWith: '.mcz') ifTrue: [
            bag add: ((fname copyAfterLast: $-) copyUpTo: $.)]].
counts := bag sortedCounts.
totalCommits := counts detectSum: [:each | each key].
totalCommiters := counts size.
counts collect: [:each |
    Array with: each value with: each key with: (each key / totalCommits * 100 asFloat)]
I get:
an OrderedCollection( #('ar' 517 34.14795244385733) #('nice' 465 30.71334214002642) #('ul' 128 8.45442536327609) #('dtl' 110 7.26552179656539) #('edc' 41 2.708058124174372) #('jmv' 35 2.311756935270806) #('laza' 26 1.717305151915456) #('jcg' 21 1.387054161162483) #('mha' 18 1.1889035667107) #('bf' 17 1.122853368560106) #('auto' 16 1.056803170409511) #('kb' 15 0.990752972258917) #('tfel' 11 0.726552179656539) #('rkrk' 11 0.726552179656539) #('eem' 9 0.59445178335535) #('cwp' 8 0.528401585204756) #('md' 8 0.528401585204756) #('rss' 7 0.462351387054161) #('bp' 6 0.3963011889035666) #('MAD' 5 0.330250990752972) #('Igor' 5 0.330250990752972) #('cbc' 4 0.264200792602378) #('tbn' 3 0.1981505944517833) #('sd' 3 0.1981505944517833) #('ml' 2 0.132100396301189) #('al' 2 0.132100396301189) #('it' 2 0.132100396301189) #('klc' 2 0.132100396301189) #('rej' 1 0.0660501981505945) #('gsa' 1 0.0660501981505945) #('MarcoSchmidt' 1 0.0660501981505945) #('sm' 1 0.0660501981505945) #('HenrikSperreJohansen' 1 0.0660501981505945) #('mir' 1 0.0660501981505945) #('ls' 1 0.0660501981505945) #('enno' 1 0.0660501981505945) #('dc' 1 0.0660501981505945) #('jdr' 1 0.0660501981505945) #('gk' 1 0.0660501981505945) #('dew' 1 0.0660501981505945) #('stephaneducasse' 1 0.0660501981505945) #('p4s' 1 0.0660501981505945) #('hpt' 1 0.0660501981505945))
There are 44 committers: 15 of them committed just 1 fix, 5 of them committed 2 fixes, 10 of them committed between 3 and 10 fixes, and 10 of them committed between 11 and 41 fixes.
And these four committers are, in practical terms, the real committers and driving forces of trunk.
#('ar' 517 34.14795244385733) #('nice' 465 30.71334214002642) #('ul' 128 8.45442536327609) #('dtl' 110 7.26552179656539)
they together have made 34.14 + 30.71 + 8.45 + 7.26 = 80.56% of all the commits.
To me this is a community of four (or 14, adding the next 10 most frequent committers).
Interesting.
Cheers,
- Andreas
El vie, 22-01-2010 a las 07:23 +0100, Michael Haupt escribió:
Hi Miguel,
Am 22.01.2010 um 07:10 schrieb Miguel Enrique Cobá Martinez <miguel.coba@gmail.com>:
To me this is a community of four (or 14 adding the next 10 most frequent commiters).
isn't that definition of "community" a bit narrow?
Well, the trunk was created to allow the community to easily and readily contribute to the code in squeak, wasn't it?
But whatever, my point is that 4 people account for 80% of the changes made to the trunk image since it was born.
Just a fact.
Best,
Michael
Miguel Enrique Cobá Martinez wrote:
El vie, 22-01-2010 a las 07:23 +0100, Michael Haupt escribió:
Hi Miguel,
Am 22.01.2010 um 07:10 schrieb Miguel Enrique Cobá Martinez <miguel.coba@gmail.com>:
To me this is a community of four (or 14 adding the next 10 most frequent commiters).
isn't that definition of "community" a bit narrow?
Well the trunk was create to allow the community to easily and readily to contribute to the code in squeak isn't?
Yes, but absolute numbers are no measure for that. You'll have to compare the number of commits over some period of time to make a relevant comparison. For example, you could compare it to the number of change sets that have been posted on Mantis for a given period of time.
But whatever, my point is that 4 people account for 80% of the changes made to the trunk image since it was born.
80% of the *commits*, not 80% of the *changes*. Number of commits is no adequate measure for the impact of changes. You're leaving out (for example) Juan's work (text editors, fonts), Eliot's work (closures, debugger), and Igor's work (method trailers etc).
Cheers, - Andreas
Hi Miguel,
2010/1/22 Miguel Enrique Cobá Martinez miguel.coba@gmail.com:
isn't that definition of "community" a bit narrow?
Well the trunk was create to allow the community to easily and readily to contribute to the code in squeak isn't?
right. I wouldn't go as far as to narrow down the idea of community to just the people that commit much in terms of code. There are other kinds of contributions, and other ways to contribute. I also think that people who just use the trunk image for development belong to the community.
There's the core developers, and there's the community; those who shape the ecosystem, and those who inhabit it.
But whatever, my point is that 4 people account for 80% of the changes made to the trunk image since it was born.
That is a correct observation, but again you're only measuring contributions that were made in terms of committing to the source code repository.
BTW I don't know how those numbers look in other, comparable, OSS communities. Could it be the same?
Best,
Michael
Michael Haupt wrote:
BTW I don't know how those numbers look in other, comparable, OSS communities. Could it be the same?
Easy to find out. Using the "80% rule" (i.e., how many contributors make up at least 80% of the commits) we end up with:
- http://source.squeak.org/39a: 2 contributors (sd, md) - http://source.squeak.org/310: 1 contributor (edc) - http://squeaksource.com/Pharo: 2 contributors (sd, md) - http://squeaksource.com/Seaside30: 3 contributors (lr, jf, pmm)
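(The "80% rule" used above can be sketched in a few lines. This is a minimal, hypothetical Python version, not from the original thread: it ranks committers by commit count and takes the smallest leading group reaching the given share. The sample data is made up for illustration.)

```python
# Minimal sketch of the "80% rule": given commit counts per committer,
# find the smallest group of top committers that together account for
# at least the given share (default 80%) of all commits.
# The sample counts below are hypothetical, not taken from any repository.

def committers_for_share(counts, share=0.80):
    total = sum(counts.values())
    # Rank committers by commit count, highest first.
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    group, running = [], 0
    for initials, n in ranked:
        group.append(initials)
        running += n
        if running >= share * total:
            break
    return group

sample = {'aa': 50, 'bb': 30, 'cc': 10, 'dd': 5, 'ee': 5}
print(committers_for_share(sample))  # ['aa', 'bb'] -- 80 of 100 commits
```

Applied to the full trunk counts posted earlier in the thread, the same procedure yields the four committers mentioned there.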
Cheers, - Andreas
Hi Andreas,
On Fri, Jan 22, 2010 at 8:38 AM, Andreas Raab andreas.raab@gmx.de wrote:
Easy to find out. Using the "80% rule" (i.e., how many contributors make up at least 80% of the commits) we end up with:
- http://source.squeak.org/39a: 2 contributors (sd, md)
- http://source.squeak.org/310: 1 contributor (edc)
- http://squeaksource.com/Pharo: 2 contributors (sd, md)
- http://squeaksource.com/Seaside30: 3 contributors (lr, jf, pmm)
interesting, thanks; but these are all Squeak-ish in some ways. Is there a statistics facility for such questions on SourceForge? Going to look ...
Best,
Michael
Michael Haupt wrote:
Hi Andreas,
On Fri, Jan 22, 2010 at 8:38 AM, Andreas Raab andreas.raab@gmx.de wrote:
Easy to find out. Using the "80% rule" (i.e., how many contributors make up at least 80% of the commits) we end up with:
- http://source.squeak.org/39a: 2 contributors (sd, md)
- http://source.squeak.org/310: 1 contributor (edc)
- http://squeaksource.com/Pharo: 2 contributors (sd, md)
- http://squeaksource.com/Seaside30: 3 contributors (lr, jf, pmm)
interesting, thanks; but these are all Squeak-ish in some ways. Is there a statistics facility for such questions on SourceForge? Going to look ...
Dunno. The only other thing I saw was about Linux kernel development here:
https://www.linuxfoundation.org/publications/linuxkerneldevelopment.php
But these numbers are so out there that they're not comparable. I mean 700+ developers from 90+ companies :-)
Cheers, - Andreas
Hi Andreas,
On Fri, Jan 22, 2010 at 8:47 AM, Andreas Raab andreas.raab@gmx.de wrote:
interesting, thanks; but these are all Squeak-ish in some ways. Is there a statistics facility for such questions on SourceForge? Going to look ...
Dunno. The only other thing I saw was about Linux kernel development here:
https://www.linuxfoundation.org/publications/linuxkerneldevelopment.php
But these numbers are so out there that they're not comparable. I mean 700+ developers from 90+ companies :-)
right ... well, I've just sat down with two students and found some numbers for Rails and Ruby. The Rails numbers were determined from the git repository, so they have to be taken with one or more grains of salt; the Ruby numbers are from SVN.
Following the 80 % rule, the Rails community consists of 12 individuals, and so does the Ruby community.
I never thought those communities were *so* small given the relevance of Rails and Ruby. Wow.
Please forgive the irony,
Michael
On Fri, Jan 22, 2010 at 10:25 AM, Michael Haupt wrote:
I never thought those communities were *so* small given the relevance of Rails and Ruby. Wow.
Clay Shirky has been commenting on this since 2003. Contributions to any open source project follow the same power law distribution.
That is the power of open source: The fact that someone makes only one commit to Squeak trunk ever, is of no importance.
Whether that fix was a *good* and *useful* fix is what is important. A lot of value is to be had in the long tail :-)
For more, watch this: http://www.ted.com/talks/clay_shirky_on_institutions_versus_collaboration.ht... Read this: http://www.shirky.com/writings/powerlaw_weblog.html
Danie Roux wrote:
Clay Shirky has been commenting on this since 2003. Contributions to any open source project follows the same power law distribution.
That is the power of open source: The fact that someone makes only one commit to Squeak trunk ever, is of no importance.
Whether that fix was a *good* and *useful* fix is what is important. A lot of value is to be had in the long tail :-)
For more, watch this: http://www.ted.com/talks/clay_shirky_on_institutions_versus_collaboration.ht... Read this: http://www.shirky.com/writings/powerlaw_weblog.html
Thank you, thank you, thank you. Folks, this truly is a must-watch.
Cheers, - Andreas
2010/1/22 Andreas Raab andreas.raab@gmx.de:
Michael Haupt wrote:
BTW I don't know how those numbers look in other, comparable, OSS communities. Could it be the same?
Easy to find out. Using the "80% rule" (i.e., how many contributors make up at least 80% of the commits) we end up with:
- http://source.squeak.org/39a: 2 contributors (sd, md)
- http://source.squeak.org/310: 1 contributor (edc)
- http://squeaksource.com/Pharo: 2 contributors (sd, md)
- http://squeaksource.com/Seaside30: 3 contributors (lr, jf, pmm)
Cheers, - Andreas
Even the number of methods changed is an unfair measurement. Each change does not have the same value. The number of features added or improved counts more than the number of methods changed to achieve that goal. Personally, I would count the number of methods removed or made unloadable/reloadable. Maybe a single change makes 100 methods unloadable.
Nicolas
On Fri, 22 Jan 2010, Miguel Enrique Cobá Martinez wrote:
there are 44 commiters, 15 of them just commited 1 fix. 5 of the commited 2 fixes 10 of them commited between 3 and 10 fixes 10 of them commited between 11 and 41 fixes
And this four commiters are the real, in practical terms, commiters and driving directions of pharo.
#('ar' 517 34.14795244385733) #('nice' 465 30.71334214002642) #('ul' 128 8.45442536327609) #('dtl' 110 7.26552179656539)
they together have made 34.14 + 30.71 + 8.45 + 7.26 = 80.56% of all the commits.
To me this is a community of four (or 14 adding the next 10 most frequent commiters).
Don't forget that most of the time contributions uploaded to the Inbox get into the Trunk with different initials because of merging. If you want to get a better picture (which will still be far from perfect, because of duplicates and rejections) add the packages from Inbox and Treated Inbox too.
Levente
Interesting.
Cheers,
- Andreas
-- Miguel Cobá http://miguel.leugim.com.mx
At 10:53 PM -0600 1/21/10, Miguel Enrique Cobá Martinez apparently wrote:
Write tool to leverage the trunk activity then. Stop whining about losing control (via Mantis). People like a new and easy way to contribute, otherwise nobody would put any commits in trunk. Is this something that hard to understand?
Is there somewhere a list of the users that have commited changes to the trunk? Or how many people have been contributing with this new model?
I for sure have seen commits from Levente Nicolas Cellier Andreas Raab
Who else? Is an honest question.
-- Miguel Cobá http://miguel.leugim.com.mx
From the initials on the packages at <>, I see the following:
1622 packages with the following initials and number of packages.
ul - 130
tfel - 11
tbn - 3
stephaneducasse - 1
sm - 1
sd - 3
rss - 7
rkrk - 11
rej - 1
p4s - 1
nice - 475
ml - 2
mir - 1
mha - 18
md - 8
MarcoSchmidt - 1
MAD - 5
ls - 1
laza - 29
klc - 2
kb - 15
jmv - 35
jdr - 1
jcg - 24
it - 2
Igor.Stasenko - 5
hpt - 1
HenrikSperreJohansen - 1
gsa - 1
gk - 1
enno - 1
eem - 9
edc - 41
dtl - 110
dew - 1
dc - 1
cwp - 8
cbc - 4
bs - 2
bp - 6
bf - 18
auto - 16
ar - 606
al - 2
Ken G. Brown
Hi Keith,
wrapping up:
You have a definition of "community" (package maintainers on SqueakSource) that is as controversial as the one you criticise.
You have some very concrete ideas (visions) about Squeak, the way the board and other entities should work.
In my opinion, you even have a point or two.
And in spite of all the above, you refuse to run for the board; instead you insist on continuing to blame people from a safe distance without showing the decency of being ready to take responsibility.
In one (final, as far as I'm concerned) word: pathetic.
Sorry.
Best,
Michael :-(
Michael,
what purpose would running for the board serve? And secondly, you don't know what you are talking about.
Firstly, there have been no elections since this incident occurred, so your point is moot.
Secondly I would simply be accused of the same conflict of interest that I am accusing Andreas of. Release teams are supposed to work on the release not on the board.
Thirdly, the board is a political entity, it always has been, and it basically shouldn't do anything; it never has done anything in the past. I would only run for the board in order to stop the board from hassling people and causing trouble.
Fourthly, as a result of the actions of the board, and other things, I don't have time.
And Fifthly, you don't ask disabled people to climb stairs. In fact it is illegal not to put processes in place that enable disabled people to fully participate in the workplace etc. I am not planning to run for the board because I am not emotionally up to it.
Of course, if you are the kind of person that forces people with phobias to touch tarantulas, I would call you pathetic for having that kind of expectation.
Keith
Keith,
I know I wrote 'final', so here's me being inconsistent. I've got company.
Am 21.01.2010 um 21:35 schrieb keith keith_hodges@yahoo.co.uk:
And secondly you don't know what you are talking about.
Ach, gerroff. Simpleton argument. You can do better.
Firstly, there have been no elections since this incident occurred, so your point is moot.
There's an upcoming election. Go for it if you really want things to change.
Secondly I would simply be accused of the same conflict of interest that I am accusing Andreas of. ...
Just give it a try.
... I would only run for the board in order to stop the board form hassling people and causing trouble.
Just give it a try.
Fourthly, as a result of the actions of the board, and other things, I don't have time.
Man, you have the time to write all those long laments. So what about doing something sensible instead?
... Of course if you are the kind of person, that forces people with phobias to touch tarantulas, I would call you pathetic to have that kind of expectation.
Rest assured, I am not.
And now this is, for real, my final word. Whining does really not make much of an impression. I have repeatedly told you that I agree with some of your points. I'm not all negative, you see. It is highly unfortunate that you keep accusing and blaming people without showing an ever so slight glimpse of reason or common sense. It's all tunnel vision, and that's a darn pity, if you ask me. Of course, you don't.
I will let you have the last word; you seem to be in dire need of that.
Out,
Michael :-(
On Jan 21, 2010, at 10:00 AM, keith wrote:
The Vision
The "lurkers" just want to see a new flashy image, that they might try a little project in one day.
The "community" want to see the board provide vision and to promote harmony both philosophically/ideologically, and with technical facilities; harmony among all squeak forks, even those who don't want to be harmonised (i.e. Pharo). The community members want to be able to publish a package (e.g. Magma) and have it be tested to work for everyone, whether they be pharo users or squeak users. The community wants to be able keep its large published, even deployed code bases up to date and bug free.
The "community" doesn't want only one thing, and different people in it want different things to different degrees. I don't dispute that what you have described above is desirable, in principle, to the vast majority of community members. However, it is fundamentally at odds with other goals that various community members hold dear. A balance must be struck.
Here's a very specific example. I would like to see more integrated support for concurrent programming in the Squeak kernel. Toward that end, I've added a trivial implementation of "promises" to the trunk (hopefully, I'll take it further relatively soon... one of the things I've done in the interim was to re-read Mark Miller's dissertation). The current changes are intentionally non-invasive. However, it is possible to envision widespread adoption of such programming constructs throughout the image. Any packages that use such constructs would rely on the support in the Kernel package.
How do you propose to support new programming paradigms that push us beyond Smalltalk-80, and yet have every package be loadable into every release of every fork? It's fine if your answer is that this conflicts fundamentally with your vision: compatibility is king. Just be aware that your vision is no more synonymous with the "community's" than mine or Andreas's.
It is impossible to deny that since the inception of the new development process, there has been a steady flow of high-quality contributions integrated into the trunk image. Most of these haven't been from Andreas, although he's probably the most productive single member (that's just how he is). It is unfortunate that this progress has been at the cost of some cross-fork compatibility. However, you have to admit that there is something about the new process that is enabling Nicolas, Levente and many others to contribute in a way that they weren't before. Even if it wasn't a problem with Sake/Bob per se (i.e. there were other external factors at play, like uncertainty about the 3.10/3.11/4.0 roadmaps), what's done is done; things are moving again, and that's good.
In text trimmed below, you refer to a "golden age" of Squeak development under your guidance. However, my impression is that the community has been greatly enthused by the progress that has been made since the switch. Personally, I wouldn't go so far as to say we have a golden age today, but it's certainly better than a year ago. As long as you propose that we stop using a process that is clearly "working" (for some definition of "working" compatible with what I've written above) and switch to one that is at best unproven, then it is natural and proper that your proposal will meet with resistance.
I wholeheartedly wish for the rapid healing of your hurts; your position within the community as the "extreme" advocate of cross-fork compatibility is potentially very valuable if we can achieve some of your goals without sacrificing the benefits of the new process (and, of course, I wish it simply for your own sake). It's a new decade! Let's see if we can find some common ground. I do believe that it exists.
Cheers, Josh
Let's not feed trolls.
Cuis should be discussed on Cuis lists. Pharo should be discussed on Pharo lists. Personal troubles over lost (financial) support should be discussed somewhere else. Personal troubles with individuals should be taken up via private e-mail. A certain level of language shouldn't be used on technical lists.
Since Cuis & Pharo parted ways with squeak-trunk, etiquette requires that anything not involving shared things be discussed on the proper lists.
Also, not being civilized (e.g. not using proper language & adequate levels of civility) won't help employability & fund-raising.
I realize that this thread is dead due to Godwin's law, but I want to revive an old branch because Keith happened not to respond to it. See below...
On Jan 21, 2010, at 11:49 AM, Josh Gargus wrote:
On Jan 21, 2010, at 10:00 AM, keith wrote:
The Vision
The "lurkers" just want to see a flashy new image that they might try a little project in one day.
The "community" wants to see the board provide vision and promote harmony, both philosophically/ideologically and with technical facilities; harmony among all Squeak forks, even those who don't want to be harmonised (i.e. Pharo). Community members want to be able to publish a package (e.g. Magma) and have it be tested to work for everyone, whether they be Pharo users or Squeak users. The community wants to be able to keep its large published, even deployed, code bases up to date and bug free.
The "community" doesn't want only one thing, and different people in it want different things to different degrees. I don't dispute that what you have described above is desirable, in principle, to the vast majority of community members. However, it is fundamentally at odds with other goals that various community members hold dear. A balance must be struck.
Here's a very specific example. I would like to see more integrated support for concurrent programming in the Squeak kernel. Toward that end, I've added a trivial implementation of "promises" to the trunk (hopefully, I'll take it further relatively soon... one of the things I've done in the interim was to re-read Mark Miller's dissertation). The current changes are intentionally non-invasive. However, it is possible to envision widespread adoption of such programming constructs throughout the image. Any packages that use such constructs would rely on the support in the Kernel package.
How do you propose to support new programming paradigms that push us beyond Smalltalk-80, and yet have every package be loadable into every release of every fork? It's fine if your answer is that this conflicts fundamentally with your vision: compatibility is king. Just be aware that your vision is no more synonymous with the "community's" than mine or Andreas's.
Keith, can you please respond to this?
I believe that the two visions are fundamentally at odds. I don't think that it is a technical shortcoming of Sake/Packages, I just think that any attempt to have universal cross-fork compatibility is fundamentally doomed to either:
1) fail, or
2) "succeed", but at the cost of preventing fundamental improvements to the programming model
It seems to me that your approach is more likely to fail in the second way, but I might be missing something. How do you propose to address this issue? I'm trying to look at things from your point of view, but I'm afraid that this really looks like a show-stopper to me.
Cheers, Josh
"Josh" == Josh Gargus josh@schwa.ca writes:
Josh> I believe that the two visions are fundamentally at odds. I don't think Josh> that it is a technical shortcoming of Sake/Packages, I just think that Josh> any attempt to have universal cross-fork compatibility is fundamentally Josh> doomed to either:
Josh> 1) fail, or
Josh> 2) "succeed", but at the cost of preventing fundamental improvements to Josh> the programming model
Indeed. One of the problems of non-trunk development is that the barrier to contribution is far higher, because each individual contributor has to understand how to make his idea *work* with *all* base images.
Whereas with the model we have now, the Squeak base gets better through local commits and by borrowing things that make sense from Pharo and Cuis, even though the Pharo and Cuis committers didn't even know or care that Squeak might want to borrow them.
And Pharo is getting better by borrowing *relevant* commits from Squeak.
And I, as an individual committer to Squeak, don't have to know or care whether my patch will work on Pharo. It's up to the Pharo guys to figure that out.
This is a far better system. More commits; more progress has been made in the past six months than in the previous 18 months.
Indeed. One of the problems of non-trunk development is that the barrier to contribution is far higher, because each individual contributor has to understand how to make his idea *work* with *all* base images.
What are you talking about? This is simply not true. What a ridiculous idea.
The individual contributor merely has to consider that other people might be interested in learning about his contribution.
For example, Edgar wants to load Closures into Minimal; Minimal is based upon 3.10. The knowledge and nuances of how to perform this task are contained in the heads of 4 people, as far as I know:
1. Eliot (he wrote closures)
2. Andreas, who has applied this to 3.10
3. Juan, who has applied this to Cuis
4. Stéphane, who has applied this to Pharo, which is based on 3.9
The common factor is that all of these people performed their task with only the goal of doing it for themselves. None has considered that lesser mortals like Edgar or myself also have a NEED for this; not a want, a NEED.
Edgar and I are not really clever enough to do it for ourselves. Sure, we may manage to load closures, but really we wouldn't have a clue how to cope with the more subtle issues: problems that have already been solved 4 times by experts.
Whereas with the model we have now, the Squeak base gets better
And Edgar's work on Minimal is made obsolete.
This is a far better system. More commits; more progress has been made in the past six months than in the previous 18 months.
Progress which can't be used is not actually progress.
Keith
On 1/24/10 10:18 AM, "keith" keith_hodges@yahoo.co.uk wrote:
And Edgar's work on Minimal is made obsolete.
Not.
Everything I do sooner or later goes to people who know better. A long time ago I cooked up the first SqueakLight. Cuis is somewhat similar, some years later and very, very superior. See all about how Ralph and I did 3.10, and the ReleaseBuilderFor3dot11 class. There you'll find the rough sketch that Andreas polished to unload everything and get a smaller, modular image.
And remember 3.10 was the first image with Monticello packages going out...
I had Etoys unload/reload working a long time ago, but not perfectly. Now I have the "class repository" idea, and some clever guy will polish this too some day, and we will have a more granular system. All ideas take time and talent. I don't need to do perfect things, just things someone sees and figures out how to get working. That's team work, a concept you don't have.
Pavel started Minimal before 3.10 started; I was helping when Ralph chose me.
So, just as I finished 3.10 as best I could, I now follow Minimal with my crazy ideas.
That's the good side of the forks.
But at some point it is best to work in the main line with as many guys as possible.
This is the trunk today.
If I want Closures, it is so I can take others' work. If I want NeXtqueak, it is because at some point clever guys like the Pharo, Etoys and Cuis people realized we were too few to afford many different and divergent versions.
I call for a Reunite conference.
Bury ego!!!
All of it was necessary, and all ideas should be listened to without preconceptions.
To go to Rome I need to leave Rosario, and you Birmingham, and...
"When thou art at Rome, do as they do at Rome"
Edgar
And Edgar's work on Minimal is made obsolete.
Not
It is obsolete if you don't succeed in loading closures into it.
All I do sooner or later go to people who knows better.
Exactly what I said: you have a NEED for "Minimal" not to be made obsolete, so you have a NEED to repeat the work of the gurus or solicit their help.
Wouldn't it be nice if they anticipated your NEED, since it was their need once, and it is also my NEED.
To anticipate your need, all they need to do is publish the work relative to a fixed point that is known, rather than integrating it as they go along into a moving target. The fundamental problem is not the progress; it is the moving target.
And remember 3.10 was the first image with Monticello packages going out...
I had Etoys unload/reload working a long time ago, but not perfectly.
Here we have the same problem in the backwards direction. You had a go at unloading Etoys but managed only an imperfect job. Well done for having the guts to try it.
So, if you were to publish your imperfect effort as a delta against a known fixed point, the 3.10 release, then the gurus, knowing you had a need, could contribute to your effort and publish it as a final delta against the 3.10 release.
However, what happens is that the gurus are off working relative to their moving target, which we will call "trunk" for the sake of argument, so they don't see your hard work as relevant to them; after all, they are gurus, so they know better. They easily redo in a day work that took you weeks, but then they publish it to their moving target.
The net result: your hard work is ignored, and you are left scavenging through their repository in the hope that you can learn what you did wrong and apply it to your moving target.
Now I have the "class repository" idea, and some clever guy will polish this too some day, and we will have a more granular system.
Actually Edgar, I think you may have been ahead of your time on this one.
Given that Cuis doesn't have Monticello I have been having similar ideas myself.
All ideas take time and talent.
Sure... the problem being that those of us who develop applications, like me, have our time and limited talent tied up in hefty loaded images, with the client screaming to just make the damn thing work.
Those who are sprinting ahead with the time and the talent to make a better kernel, are simply not in the same world.
LevelPlayingField (and 3.10-build) was invented as a place to put patches that could smooth out the differences between what the kernel developers give us and what we application developers actually need.
I don't need to do perfect things, just things someone sees and figures out how to get working. That's team work, a concept you don't have.
I have always strongly objected to this repeated accusation. I have continuously been interested in having your input, but you would never give it; you absolutely refused. You just said "he uses scripts in a website" that don't have a password (when I don't) and turned your back on the idea of joining in.
Secondly, I was continuously interested in contributing to 3.10. I wrote TestReporter, a non-GUI test runner for SUnit, as a contribution to 3.10. Ralph ignored my contribution and wrote his own 3 months later. Which one is the team player?
I wrote Installer as a contribution to 3.10, which you included, but you included an old version... Again, I write stuff to be included, I get it included, but someone decides not to make sure that the best version is published in the final release. Again, who is the team player here, and who is not?
I provided a framework within which you could publish the script for building FunSqueak. I asked you to put FunSqueak into Sake/Packages so we could see what it contained, and build and test 3.11 against FunSqueak. Damien did it for the "developer" image, and he isn't even a regular Squeak user.
Pavel started Minimal before 3.10 started; I was helping when Ralph chose me.
So, just as I finished 3.10 as best I could, I now follow Minimal with my crazy ideas.
Until you do manage to load Closures into Minimal, it is obsolete, and all the time and energy you put into it has been wasted.
Until I manage to load closures into my working image, all the time and energy I put into it has been wasted. Or until I can load my working image into 3.11 (for which I have to wait a long while), my effort is wasted.
It appears that a lot of effort gets wasted around here.
That's the good side of the forks.
But at some point it is best to work in the main line with as many guys as possible.
This is the trunk today.
It was Mantis yesterday.
Only Mantis is more useful for me NOW.
Bury ego!!!
Sure, I am all in favour of that. So please...
You don't need to run for the board. Campaign instead for the board not to favour certain "egos" and to have a civilised protocol where discussion and collaboration are actually possible.
All of it was necessary, and all ideas should be listened to without preconceptions.
"trunk" is a pre-concept we are now stuck with; it was created without listening, or understanding that we had already decided that this very way of working was what caused our problems in the past:
1. Long release turnarounds, because of trying to do too much at once.
2. No small-scale branching, thus provoking full-scale forking.
3. Repeatedly, large chunks of the community get left behind (Etoys 3.8, Sophie 3.8, Cobalt 3.8).
Gjallar is the only large project that I know of which hasn't been left behind, and the solution for that was Installer and LPF.
regards
Keith
Progress which can't be used is not actually progress.
From my point of view, every feature of 3.11 that you publish which is directly loadable into 3.10, I will accept as progress, because I can use it now.
Otherwise, I will have to wait over a year to port my code base and find out.
Presently it seems to me that every feature of 3.11 is a potential compatibility problem, making the port more difficult and less likely to happen at all.
I get paid for delivering an application that works. The porting process, when it eventually happens will end up as a month or so of unpaid work, since the client doesn't care about the details.
What I don't get is why Randal, who is an application developer himself, doesn't see this.
Do you have some other solution? Ahhh, I get it. Randal is building his stuff on other people's packages like Seaside, so you get to leave the hassle of fork incompatibilities to them; it's "someone else's problem". Cool solution.
So... the Seaside crew announce that they are going to solve this problem by only developing for Pharo, and the Magma guy announces that he is going to solve this problem by only developing for Squeak. Pity the poor soul who wants to use Seaside with Magma...
that would be me
Keith
p.s. Before someone who is incapable of abstract thought pipes up and says "but Seaside runs on Squeak as well" and "Magma runs on Pharo as well": the Magma-Seaside case is "only an example", demonstrating the principle of the problem.
p.p.s. Now, with the current state of play, I simply despair, since Squeak ceases to be a viable platform to actually work in, because everyone is insisting on developing against moving targets, and I mean everyone! All you have to do to change this is develop patches only relative to specific, non-moving releases. Then all of the forks will naturally converge their APIs, and you won't leave people behind.
2010/1/24 keith keith_hodges@yahoo.co.uk:
There is at least one abstraction you've become a master of: circles ;).
p.p.s now with the current state of play, I simply despair, since squeak ceases to be a viable platform to actually work in, because everyone is insisting on developing to moving targets, and I mean everyone! All you have to do to change this is conceptually develop patches only relative to specific non moving releases. Then all of the forks will naturally converge their API's, and you wont leave people behind.
Oh well, this applies to my 2-cents bug patches (even these happen to be different in 3.10, trunk and Pharo). Bye bye closures, Cog, etc... A 5x speed-up certainly is a moving target I will learn to love.
Josh> that it is a technical shortcoming of Sake/Packages, I just think that Josh> any attempt to have universal cross-fork compatibility is fundamentally Josh> doomed to either:
Josh> 1) fail, or
Josh> 2) "succeed", but at the cost of preventing fundamental improvements to Josh> the programming model
Let's try an example. We have three forks...
Edgar's Minimal, Keith's Beach, and Pharo's.
If I develop a package, Rio, that loads into a Pharo release, then in order to load Rio into Beach I have two choices:
1. Make the Rio code base more general, using the lowest common denominator (i.e. go backwards).
2. Make Beach more like Pharo with a patch or two (the forks converge with the patch).
I prefer the 2nd option. Now, if I patch Keith's Beach, my patch is only useful to me. However, if I patch 3.10, on which Keith's Beach is built, then I am capturing useful knowledge for all users of 3.10, including Edgar.
So along comes Edgar. He finds Rio doesn't load; however, there is a patch which clearly explains the issue. He doesn't need to understand Beach, because the patch is against 3.10, which he already understands. Since he is the developer of Minimal, based on 3.10, he can see how to adapt the patch for Minimal.
So, now we have several benefits.
1. Rio loads in three base images, and
2. the APIs of the three forks are converging.
So what does this mean in practice? If you develop your next base image (aka 3.11) in smaller steps, as patches against 3.10, then other forks can gain API compatibility with 3.11 by choosing to become more like 3.10 (a task which is possible), rather than choosing to become more like 3.11, a task which is impossible because 3.11 isn't finished yet.
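As a concrete sketch of what "publishing against a fixed point" could look like as a load script. The selectors and package names below are illustrative only, not the real Installer or Sake/Packages API:

```smalltalk
"Hypothetical sketch: everything is expressed relative to the frozen 3.10
release, never against a moving trunk. Selectors and names are made up."
Installer squeak310
	addPackage: 'Compat-310-Closures';  "the shared delta: closures as a patch on 3.10"
	addPackage: 'Rio-Core';             "the package, tested against 3.10 plus the delta"
	install
```

The point of the shape, rather than the exact API, is that the delta 'Compat-310-Closures' is published once against the fixed 3.10 release, so any fork that understands 3.10 can adapt it.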
regards
Keith
2010/1/24 keith keith_hodges@yahoo.co.uk:
What's your goal? To develop an application? To develop a cross-fork module/package/library?
If the latter (Rio), then use what every other cross-fork package does: compatibility layers (Grease...). Otherwise, concentrate on the fork you have chosen and let people port it if they are interested.
If you catch a huge market like Seaside did, you'll eventually be in a position to influence fork directions. Otherwise, you can also make things move by technical argumentation. If your pressure is only political, beat it; you're not in a position to win.
Nicolas
What's your goal? To develop an application? To develop a cross-fork module/package/library?
Both: develop an application using cross-fork packages.
However, the way things are going, it's all I can manage to do the application; keeping all the balls in the air is a problem.
If the latter (Rio), then use what every other cross-fork package does: compatibility layers (Grease...).
Never heard of it; I have been out of the loop for a while.
Is Grease a package? Why didn't whoever developed Grease add it to LPF, which is already doing this? Or perhaps we just need to add Grease to LPF.
Otherwise, concentrate on the fork you have chosen and let people port if they are interested.
I haven't chosen a fork. I was going to use Pharo, but found it somewhat reliant on OB, which I don't get along with.
So, discovering that I didn't actually have a fork that I found interesting, I came back here to see if Cuis would do the trick.
This looks like it might be too much work.
So I will probably just stay where I am, earn some money and employ someone else to port things later on (you never know). In the meantime the community loses the maintenance of any of the cross-fork packages I already provide.
Keith
2010/1/24 keith keith_hodges@yahoo.co.uk:
What's your goal? To develop an application? To develop a cross-fork module/package/library?
Both: develop an application using cross-fork packages. However, the way things are going, it's all I can manage to do the application; keeping all the balls in the air is a problem.
If the latter (Rio), then use what every other cross-fork package does: compatibility layers (Grease...).
Never heard of it; I have been out of the loop for a while. Is Grease a package? Why didn't whoever developed Grease add it to LPF, which is already doing this? Or perhaps we just need to add Grease to LPF.
AFAIK, Grease is not universal; it is for Seaside cross-dialect compatibility. Not sure it works for you, maybe... but the idea should work.
2010/1/24 keith keith_hodges@yahoo.co.uk:
What's your goal? To develop an application? To develop a cross-fork module/package/library?
Both: develop an application using cross-fork packages. However, the way things are going, it's all I can manage to do the application; keeping all the balls in the air is a problem.
Developing an application means freezing a set of dependencies. If you're not freezing them, then you're not developing an application; you're just toying.
Because you'll never know what might be broken when you update dependencies to the latest, you are always doing so at your own risk (and don't blame the devs for breaking things; that's absurd). Despite what a package maintainer says, there is always a chance that even a small, minimal update will stop your application working normally. This is what we call a "moving target". Sure, if the package maintainer tries to preserve compatibility, then in 99% of cases everything is ok; but if he doesn't, then it's up to you to manually update your application to conform to the newer version of the package. And it always will be.
The only class of updates which is acceptable in such an ecosystem is bug fixes. But if you need to move forward, to introduce new and better stuff or remove obsolete cruft, you will inevitably meet compatibility problems.
Honestly, what kind of automation can you invent here to migrate your application from package version A to package version B, which potentially has been completely rewritten from scratch? You are doomed either to stay with version A, or to do a lot of manual work migrating to version B, and no magic tool will help you with that.
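In Monticello terms, which the thread's images all carry, "freezing" just means loading exact, named versions rather than the head of each repository. The repository URL and version names below are made up, and the selectors are from memory, so treat this as a sketch:

```smalltalk
"Hypothetical sketch: pin exact Monticello versions for an application,
instead of loading whatever is newest. URL and version names are invented."
| repo |
repo := MCHttpRepository
	location: 'http://source.example.org/myapp'
	user: ''
	password: ''.
#('Rio-Core-kph.42.mcz' 'Magma-client-cmm.117.mcz') do: [:fileName |
	(repo versionFromFileNamed: fileName) load]
```

Re-running this script always yields the same code base; moving to a newer version of a dependency is then a deliberate edit to the list, not a side effect of updating.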
About cross-fork package development.
Each package has environment-agnostic behaviour and environment-dependent behaviour. The goal of the developer is to identify these parts and connect them nicely. But this has gone too far. It is now about defining and enforcing good practices in package development to avoid cross-dialect pitfalls. How many packages have we built with cross-compatibility in mind? How many packages have all globals declared in a single place, not scattered among the code? And what automated tool can solve the problem when your package code uses 'SmalltalkImage current' instead of 'Smalltalk'?
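A Grease-style answer to the 'SmalltalkImage current' example is to funnel every dialect-dependent reference through one class, so that a port touches exactly one place. The class and selector names here are invented for illustration:

```smalltalk
"Hypothetical sketch of a per-package compatibility layer. The rest of the
package sends only Platform messages; the dialect difference lives here."
Object subclass: #Platform
	instanceVariableNames: ''
	classVariableNames: ''
	category: 'MyPackage-Compat'

Platform class >> imageName
	"Some dialects answer this from a SmalltalkImage singleton, others
	 directly from Smalltalk; keep the choice in this one method."
	^ (Smalltalk classNamed: 'SmalltalkImage')
		ifNil: [Smalltalk imageName]
		ifNotNil: [:cls | cls current imageName]
```

With this shape, "declaring all globals in a single place" stops being a discipline scattered across the code and becomes a property of one small class.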
2010/1/24 Igor Stasenko siguctua@gmail.com:
-- Best regards, Igor Stasenko AKA sig.
Sure. Continuous integration is not that easy. However, I would try to make new developments in the bleeding-edge version for these reasons:
- react early to incompatible changes
- propose hooks to reduce the level of kernel patching
You can't control everything, but it's better to keep an eye open than to discover unwanted changes late. And it's even better if you can bend some developments in your own direction.
Nicolas
Sure. Continuous integration is not that easy. However, I would try to make new developments in the bleeding edge version for these reasons:
- react early to incompatible changes
- propose hooks to reduce the level of kernel patching
You can't control everything, but it's better to keep an eye open than to discover unwanted changes too late. And it's even better if you can bend some developments toward your own direction.
Nicolas
I totally agree. It is much better to not get left behind too much.
However, the bending bit you suggest is virtually impossible in this gang. That is why LPF exists: any fix you suggest to the image, you will have to wait two years for integration into a release. That is also why Sake/Packages exists, so you can patch other people's stuff a bit.
Keith
2010/1/24 Nicolas Cellier nicolas.cellier.aka.nice@gmail.com:
Sure. Continuous integration is not that easy. However, I would try to make new developments in the bleeding edge version for these reasons:
- react early to incompatible changes
- propose hooks to reduce the level of kernel patching
You can't control everything, but it's better to keep an eye open than to discover unwanted changes too late. And it's even better if you can bend some developments toward your own direction.
Well, you can. But I think it is then hard to separate what you want from your application from what you want from the package (or the system) in general. This could lead to an inversion of your model, where the design of base packages tends to serve only your application and almost nothing else. We have a lot of this kind of thing in Squeak, where a package that should provide generic interfaces instead concentrates on some obscure bit of functionality used in only a single place by some dependent application or package.
Nicolas
2010/1/24 Igor Stasenko siguctua@gmail.com:
2010/1/24 Nicolas Cellier nicolas.cellier.aka.nice@gmail.com:
Sure. Continuous integration is not that easy. However, I would try to make new developments in the bleeding edge version for these reasons:
- react early to incompatible changes
- propose hooks to reduce the level of kernel patching
You can't control everything, but it's better to keep an eye open than to discover unwanted changes too late. And it's even better if you can bend some developments toward your own direction.
Well, you can. But I think it is then hard to separate what you want from your application from what you want from the package (or the system) in general. This could lead to an inversion of your model, where the design of base packages tends to serve only your application and almost nothing else. We have a lot of this kind of thing in Squeak, where a package that should provide generic interfaces instead concentrates on some obscure bit of functionality used in only a single place by some dependent application or package.
Since you can't fully control things, you also have to convince people.
Nicolas
Nicolas
-- Best regards, Igor Stasenko AKA sig.
Honestly, what kind of automation can you invent here to migrate your application from package version A to package version B, which may have been completely rewritten from scratch? You are doomed either to stay with version A or to do a lot of manual work migrating to version B, and no magic tool will help you with that.
No automation at all is needed.
About cross-fork package development.
Each package has an environment-agnostic part and an environment-dependent part. The developer's goal is to identify these parts and connect them cleanly. But this goes further than that: it is about defining and enforcing good practices in package development to avoid cross-dialect pitfalls. How many packages have we built with cross-compatibility in mind? How many packages have all their globals declared in a single place rather than scattered through the code? And what automated tool can solve the problem when your package code uses 'SmalltalkImage current' instead of 'Smalltalk'?
Again no automated tools.
I think you are getting hung up on the automated-tools thing. Automation has nothing to do with this discussion.
I am not asking you or anyone to automate tools or anything.
I am asking you to use a process of thinking that documents what you do, in a way that enables it to be routinely harvestable. It's a process of knowledge capture. You can write it on paper if you like:
Example:
If you write me an explanation of how to install closures, it will be a lot easier to write if you can say: start with a 3.10-release image, do a, b, c, d, e, and you will have a closures image, which we shall call 3.10-closures.
As a trunk developer you can't write me this explanation. If you write me an explanation of how to install closures following trunk, it will be: hmm, when we did it we had this image that was hacked on by ten people, so this is how I did a, b, c, d, e, but I can't be sure that this will work for you.
that is the difference, and my example from the bazaar documentation clearly shows the difference
Keith
On Sun, Jan 24, 2010 at 10:59 AM, keith keith_hodges@yahoo.co.uk wrote:
Honestly, what kind of automation can you invent here to migrate your application from package version A to package version B, which may have been completely rewritten from scratch? You are doomed either to stay with version A or to do a lot of manual work migrating to version B, and no magic tool will help you with that.
No automation at all is needed.
About cross-fork package development.
Each package has an environment-agnostic part and an environment-dependent part. The developer's goal is to identify these parts and connect them cleanly. But this goes further than that: it is about defining and enforcing good practices in package development to avoid cross-dialect pitfalls. How many packages have we built with cross-compatibility in mind? How many packages have all their globals declared in a single place rather than scattered through the code? And what automated tool can solve the problem when your package code uses 'SmalltalkImage current' instead of 'Smalltalk'?
Again no automated tools.
I think you are getting hung up on the automated-tools thing. Automation has nothing to do with this discussion.
I am not asking you or anyone to automate tools or anything.
I am asking you to use a process of thinking that documents what you do, in a way that enables it to be routinely harvestable. It's a process of knowledge capture. You can write it on paper if you like:
Example:
If you write me an explanation of how to install closures, it will be a lot easier to write if you can say: start with a 3.10-release image, do a, b, c, d, e, and you will have a closures image, which we shall call 3.10-closures.
I did that already. See http://www.mirandabanda.org/files/Cog/Closures0811/Bootstrap/README.txt from http://www.mirandabanda.org/cogblog/downloads/. There are two problems with this.
1. The closure bootstrap is tricky (ask Juan). The closure bootstrap has to replace the compiler in just the right order so that the old compiler keeps working until the new compiler is ready to take over. It turns out that this is very sensitive to what you start from. The compiler in 3.7 is different from that in 3.8 and 3.9. The compiler in a Croquet image is different. Things are different if FFI is loaded, if Vassili's variadic ifNotNil: code is in or not, if pragmas are in or not. I provided two different bootstraps (Croquet 1.0 & Squeak 3.9) and did a third one in-house at Teleplace. I don't have the time to do a generic one, and in fact I don't think it's practicable.
2. The closure code has evolved since the bootstrap. People found bugs and provided tests, and I changed the closure analysis to compile inlined blocks (to:by:do:, whileTrue: et al.) correctly, fixed debugger bugs, and provided support for closurised compressed temp names. Then Igor reimplemented the compressed temp names/source pointer scheme to make it much better and much more general. I discovered Colin Putney had done a really nice compiler error framework that wasn't in the original. Now two things follow:
a) it is simply way too expensive to go back and revamp those two bootstraps so that they end up at the new bugfixed improved code.
b) it is pointless; Pharo, Cuis and Squeak trunk have all moved on from their pre-closure starting point. What they need is incremental bug fixes installing in their current state.
So what are the instructions to install closures now? As I'm planning to do this for Etoys sometime soon, this is not hypothetical. The basic approach is to compare the starting-point with the desired end-point packages (choose Compiler & Kernel-Methods and sundry associated extensions; being familiar with the compiler will make this easier, but it's tediously error-prone). Then produce a set of file-ins that gets from the starting point to as close to the end point as results in a functioning compiler and Monticello. Load the relevant packages and you're done.
And the way I do that is that I file-in until something breaks (as it will, an MNU in the compiler will typically break the debugger because the debugger uses the compiler to create the source map for a method). When something breaks (usually the system locks up with an infinite recursion) I kill it, restart and use recently logged changes to find out what was the last method that I brought in that broke things and then I try and figure out why. SqueakDebug.log probably didn't get the error written to it because the crash was so deep-seated, so some head scratching is required. Then rinse and repeat. Eventually one has closures filed-in.
So: it isn't automatable, it isn't trivial, and it isn't very hard; just tedious. Trying to do this will teach you much about the core of the system. People have worked out how to do this without asking me a single question, working out the above process themselves, because it's obvious. Ask Juan.
So why don't you have a go?
As a trunk developer you can't write me this explanation. If you write me an explanation of how to install closures following trunk, it will be: hmm, when we did it we had this image that was hacked on by ten people, so this is how I did a, b, c, d, e, but I can't be sure that this will work for you.
Bollocks.
that is the difference, and my example from the bazaar documentation clearly shows the difference
Keith
As a trunk developer you can't write me this explanation. If you write me an explanation of how to install closures following trunk, it will be: hmm, when we did it we had this image that was hacked on by ten people, so this is how I did a, b, c, d, e, but I can't be sure that this will work for you.
Bollocks.
That is precisely the explanation you just gave me, you said, "load these things, but results may vary."
You didn't say, "load this and it will work, because I tested it."
Keith
On Jan 24, 2010, at 2:04 PM, keith wrote:
As a trunk developer you can't write me this explanation. If you write me an explanation of how to install closures following trunk, it will be: hmm, when we did it we had this image that was hacked on by ten people, so this is how I did a, b, c, d, e, but I can't be sure that this will work for you.
Bollocks.
That is precisely the explanation you just gave me, you said, "load these things, but results may vary."
You didn't say, "load this and it will work, because I tested it."
He said "results may vary" because he didn't test it, not (as you imply, using clearly fallacious logic) because it would be impossible to write an explanation for trunk.
The reason that your straw man is incorrect is that, if he set out to write a set of instructions, it would be against a known version (probably Squeak3.11-8931-alpha, released yesterday). It would not be against some random image with unknown code in it. Such a set of instructions would be no more difficult to create than for 3.9 or Cobalt.
I'm very suspicious of all of your arguments of the form "I'll have to wait a year for the next release", because trunk images have been coming out at the rate of 1-2 per month since September. Why can't you try to load your codebase against each new trunk release, and holler if compatibility is gratuitously broken? Probably the breakage was accidental, and can easily be rectified.
Cheers, Josh
Keith
Hi Eliot,
If you write me an explanation of how to install closures, it will be a lot easier to write if you can say: start with a 3.10-release image, do a, b, c, d, e, and you will have a closures image, which we shall call 3.10-closures.
I did that already. See http://www.mirandabanda.org/files/Cog/Closures0811/Bootstrap/README.txt from http://www.mirandabanda.org/cogblog/downloads/. There are two problems with this.
thanks for these
- the closure bootstrap is tricky (ask Juan). The closure
bootstrap has to replace the compiler in just the right order so that the old compiler keeps working until the new compiler is ready to take over. It turns out that this is very sensitive to what you start from. The compiler in 3.7 is different from that in 3.8 and 3.9. The compiler in a Croquet image is different. Things are different if FFI is loaded, if Vassili's variadic ifNotNil: code is in or not, if pragmas are in or not. I provided two different bootstraps (Croquet 1.0 & Squeak 3.9) and did a third one in-house at Teleplace. I don't have the time to do a generic one, and in fact I don't think it's practicable.
Agreed; however, I do think that this should be motivation to adopt AtomicLoading, which was one of our top priorities. It does work, you know. It is just traits that are not supported.
Do you think that this would help?
- the closure code has evolved since the bootstrap. People found
bugs and provided tests, and I changed the closure analysis to compile inlined blocks (to:by:do:, whileTrue: et al.) correctly, fixed debugger bugs, and provided support for closurised compressed temp names. Then Igor reimplemented the compressed temp names/source pointer scheme to make it much better and much more general. I discovered Colin Putney had done a really nice compiler error framework that wasn't in the original. Now two things follow:
a) it is simply way too expensive to go back and revamp those two bootstraps so that they end up at the new bugfixed improved code.
b) it is pointless; Pharo, and Cuis
Can allegedly be rebuilt on top of their pre-closure starting point.
and Squeak trunk have all moved on from their pre-closure starting point. What they need is incremental bug fixes installing in their current state.
So what are the instructions to install closures now? As I'm planning to do this for Etoys sometime soon, this is not hypothetical. The basic approach is to compare the starting-point with the desired end-point packages (choose Compiler & Kernel-Methods and sundry associated extensions; being familiar with the compiler will make this easier, but it's tediously error-prone). Then produce a set of file-ins that gets from the starting point to as close to the end point as results in a functioning compiler and Monticello. Load the relevant packages and you're done.
How about, LPF loads MC1.5 into 3.8 and etoys2.
So if I am not mistaken, porting SystemEditor to 3.8 (if Matthew hasn't already done it for Cobalt) will provide AtomicLoading to 3.8, etoys2, Sophie, and Cobalt, amongst others.
or am I an incurable optimist?
regards
Keith
On Sun, Jan 24, 2010 at 4:12 PM, keith keith_hodges@yahoo.co.uk wrote:
Hi Eliot,
If you write me an explanation of how to install closures, it will be a lot easier to write if you can say: start with a 3.10-release image, do a, b, c, d, e, and you will have a closures image, which we shall call 3.10-closures.
I did that already. See http://www.mirandabanda.org/files/Cog/Closures0811/Bootstrap/README.txt from http://www.mirandabanda.org/cogblog/downloads/. There are two problems with this.
thanks for these
- the closure bootstrap is tricky (ask Juan). The closure bootstrap has
to replace the compiler in just the right order so that the old compiler keeps working until the new compiler is ready to take over. It turns out that this is very sensitive to what you start from. The compiler in 3.7 is different from that in 3.8 and 3.9. The compiler in a Croquet image is different. Things are different if FFI is loaded, if Vassili's variadic ifNotNil: code is in or not, if pragmas are in or not. I provided two different bootstraps (Croquet 1.0 & Squeak 3.9) and did a third one in-house at Teleplace. I don't have the time to do a generic one, and in fact I don't think it's practicable.
Agreed; however, I do think that this should be motivation to adopt AtomicLoading, which was one of our top priorities. It does work, you know. It is just traits that are not supported.
Do you think that this would help?
Yes, in those versions of Monticello that support it, and only if one can atomically load more than one package (I forgot to say that the closure compiler changes also touch System and Tools, not just Compiler and Kernel-Methods). So relying on solving a really hard problem (atomic loading) before one can solve a smaller problem (closures) wasn't a viable option this time around. But when it is available then of course it's the right approach, _provided_ every distro wants the same Kernel, System, Tools and Compiler and... they all differ, some (System, Tools) significantly. Oops.
2. the closure code has evolved since the bootstrap. People found bugs and
provided tests, and I changed the closure analysis to compile inlined blocks (to:by:do:, whileTrue: et al.) correctly, fixed debugger bugs, and provided support for closurised compressed temp names. Then Igor reimplemented the compressed temp names/source pointer scheme to make it much better and much more general. I discovered Colin Putney had done a really nice compiler error framework that wasn't in the original. Now two things follow:
a) it is simply way too expensive to go back and revamp those two bootstraps so that they end up at the new bugfixed improved code.
b) it is pointless; Pharo, and Cuis
Can allegedly be rebuilt on top of their pre-closure starting point.
I doubt that very much. Of course it is possible, but not without hard work. A naive install of the changed packages will fall over horribly (for reasons I allude to above). At least you have to use a closure-enabled VM.
and Squeak trunk
have all moved on from their pre-closure starting point. What they need is incremental bug fixes installing in their current state.
So what are the instructions to install closures now? As I'm planning to do this for Etoys sometime soon this is not hypothetical. The basic approach is to compare the starting point with the desired end-point packages (choose Compiler & Kernel-Methods and sundry associated extensions, being familiar with the compiler will make this easier, but its tediously error-prone). Then produce a set of file-ins that gets from the starting-point to as close to the end-point as results in a functioning compiler and Monticello. Load the relevant packages and you're done.
How about, LPF loads MC1.5 into 3.8 and etoys2.
So if I am not mistaken, porting SystemEditor to 3.8 (if Matthew hasn't already done it for Cobalt) will provide AtomicLoading to 3.8, etoys2, Sophie, and Cobalt, amongst others.
I don't know, try it (porting SystemEditor and then trying to atomically load closures) and see. You still have to do a merge of the bits of Compiler, Kernel, System and Tools affected by closures to know what to load atomically.
The approach I've been taking does that merge as I go along.
or am I an incurable optimist?
I could care less ;)
regards
Keith
Randal L. Schwartz wrote:
"Josh" == Josh Gargus josh@schwa.ca writes:
Josh> I believe that the two visions are fundamentally at odds. I don't think Josh> that it is a technical shortcoming of Sake/Packages, I just think that Josh> any attempt to have universal cross-fork compatibility is fundamentally Josh> doomed to either:
Josh> 1) fail, or
Josh> 2) "succeed", but at the cost of preventing fundamental improvements to Josh> the programming model
Indeed. One of the problems of non-trunk development is that the barrier to contribution is far higher, because each individual contributor has to understand how to make his idea *work* with *all* base images.
Whereas the model we have now, the Squeak base gets better by local commits and by borrowing things that make sense from Pharo and Cuis, even though the Pharo and Cuis committers didn't even know or care that Squeak may want to borrow it.
And Pharo is getting better by borrowing *relevant* commits from Squeak.
And I, as an individual committer to Squeak, don't have to know or care whether my patch will work on Pharo. It's up to the Pharo guys to figure that out.
This is a far better system. More commits, more progress has been made in the past six months than the previous 18 months.
What's boggling me about this whole brouhaha is this: surely our situation - several similar-but-not-identical Smalltalks - is pretty much like the BSD world?
It's not up to, say, the FreeBSD developers to make sure that the ports stay working. That's what port maintainers are for.
The port maintainer of, say, curl, then needs to make sure that curl works nicely on FreeBSD. Ditto for the NetBSD maintainer (who might, of course, be the same guy).
People who actually write the packages - the curl developers, in this example - either care about their software running everywhere, in which case they stick to standards and try to minimise platform-specific stuff, or they don't.
Of course the various roles don't just blindly muck about, but I hope we don't need to keep actually saying "and the person tries to communicate with the other people in the ecosystem".
frank
Josh Gargus josh@schwa.ca writes:
The "community" doesn't want only one thing, and different people in it want different things to different degrees. I don't dispute that what you have described above is desirable, in principle, to the vast majority of community members. However, it is fundamentally at odds with other goals that various community members hold dear. A balance must be struck.
You are right, but let's put it that way:
- how many of you actively work on the kernel?
- how many of you use it for application development?
I would be surprised to see a ratio much higher than 1:10,000 or even 1:100,000 (kernel dev / application dev).
As I understand Keith's postings, he's mainly an application developer, so it's clear that he does not like to rewrite his code over and over again (for whatever good or bad technical reason).
I can tell you a story from Eiffel wonderland, where this ratio surely was much more in favour of application developers. One development team in Eiffel broke old code with nearly every "minor" update. That means software, once written and working, just stops. If you have ever encountered that, you will surely understand Keith's points very well.
There's IMHO no better way to drive away people but to break their code over and over again...
Here's a very specific example. I would like to see more integrated support for concurrent programming in the Squeak kernel. Toward that end, I've added a trivial implementation of "promises" to the trunk (hopefully, I'll take it further relatively soon... one of the things I've done in the interim was to re-read Mark Miller's dissertation).
Well, you are interested in another thing, so you probably do not see the points of Keith's mails.
Regards Friedrich
2010/1/24 Friedrich Dominicus frido@q-software-solutions.de:
Josh Gargus josh@schwa.ca writes:
The "community" doesn't want only one thing, and different people in it want different things to different degrees. I don't dispute that what you have described above is desirable, in principle, to the vast majority of community members. However, it is fundamentally at odds with other goals that various community members hold dear. A balance must be struck.
You are right, but let's put it that way:
- how many of you actively work on the kernel?
- how many of you use it for application development?
I would be surprised to see a ratio much higher than 1:10,000 or even 1:100,000 (kernel dev / application dev).
As I understand Keith's postings, he's mainly an application developer, so it's clear that he does not like to rewrite his code over and over again (for whatever good or bad technical reason).
I can tell you a story from Eiffel wonderland, where this ratio surely was much more in favour of application developers. One development team in Eiffel broke old code with nearly every "minor" update. That means software, once written and working, just stops. If you have ever encountered that, you will surely understand Keith's points very well.
There's IMHO no better way to drive away people but to break their code over and over again...
Here's a very specific example. I would like to see more integrated support for concurrent programming in the Squeak kernel. Toward that end, I've added a trivial implementation of "promises" to the trunk (hopefully, I'll take it further relatively soon... one of the things I've done in the interim was to re-read Mark Miller's dissertation).
Well, you are interested in another thing, so you probably do not see the points of Keith's mails.
Regards Friedrich
What I really would like is to hear about REAL compatibility problems, not supposed compatibility problems. That would be helpful. Application developers SHOULD raise their voices on technical issues. Endless political conversation about what would be a perfect Squeak in a perfect world is just irrelevant to me: it won't lead anywhere. Since I don't see many requests on this list, shall I conclude either that Squeak trunk is not used for application development, or that there is no major compatibility problem?
Nicolas
-- Q-Software Solutions GmbH; Sitz: Bruchsal; Registergericht: Mannheim Registriernummer: HRB232138; Geschaeftsfuehrer: Friedrich Dominicus
What I really would like is to hear about REAL compatibility problem and not supposed compatibility problems.
The real compatibility problem is this.
I am working in a 3.10 based image NOW.
I won't have the opportunity to move my code base and find out what the compatibility problems are until 3.11 is released in six months' time.
In those six months I will have generated a considerable amount of code that may be incompatible with 3.11, but I won't have known it. I will also have been saving my packages to SqueakSource, but I will not know that these packages are not useful to 3.11 users. All the 3.11 users will load my packages and will either complain or write my code off as "it never works", and I will never find out.
So, when I finally move my working image to 3.11, all my deployed production images will still be on 3.10. So now I have to maintain two code bases for my packages. However, if I could load a couple of patches into 3.10 that made them API-equivalent, I wouldn't have to. OK, to do this I would have to manually reverse-engineer the whole of the 3.11 effort. In short, it is too difficult to contemplate, because none of the 3.11 effort was done with the expectation that it would need to be re-engineered.
This is what happened in the case of ifNotNil: ifNotNilDo: merger.
So here we have an opportunity: Josh could make his futures work loadable into 3.10. Then I can start using this new whizzy API now, in my existing code base, and I won't have to wait a year to be able to give him feedback.
If Eliot or Andreas or someone could make closures loadable into 3.10 as a separate entity, that would be of GREAT benefit to me, and to Edgar apparently. I would also like the long-changes-file fixes (but someone told me they are available on Mantis).
So, in summary, I will not be ready to tell you what the compatibility problems are until about three months after the release of 3.11 is finished. At which point the only person who can fix them will be me, for myself only.
Keith
On Jan 24, 2010, at 4:44 AM, keith wrote:
What I really would like is to hear about REAL compatibility problem and not supposed compatibility problems.
The real compatibility problem is this.
I am working in a 3.10 based image NOW.
I won't have the opportunity to move my code base and find out what the compatibility problems are until 3.11 is released in six months' time.
I don't understand why this is true. In another post you've described the comprehensive build/test framework that you currently use, and I observed that I can't see any reason why you can't run it against the bi-monthly trunk image releases (or, for that matter, against nightly trunk images that you automatically update).
In those six months I will have generated a considerable amount of code that may be incompatible with 3.11, but I won't have known it. I will also have been saving my packages to SqueakSource, but I will not know that these packages are not useful to 3.11 users. All the 3.11 users will load my packages and will either complain or write my code off as "it never works", and I will never find out.
So, when I finally move my working image to 3.11, all my deployed production images will still be on 3.10. So now I have to maintain two code bases for my packages. However, if I could load a couple of patches into 3.10 that made them API-equivalent, I wouldn't have to. OK, to do this I would have to manually reverse-engineer the whole of the 3.11 effort. In short, it is too difficult to contemplate, because none of the 3.11 effort was done with the expectation that it would need to be re-engineered.
This is what happened in the case of ifNotNil: ifNotNilDo: merger.
I'm not sure what you're referring to. The old method still exists, so no packages that you load can break from it. What am I missing?
You were probably just unaware that #ifNotNilDo: still existed. Let's imagine for a moment that Nicolas had been overzealous and removed it. If you were to run your automated tests against the latest trunk image, a lot of code would presumably break. You'd look at the failure logs, maybe curse (studies show it makes you feel better!), and fire off a civil email to the list. By the next day, #ifNotNilDo: would exist again. Problem solved.
Cheers, Josh
This is what happened in the case of ifNotNil: ifNotNilDo: merger.
I'm not sure what you're referring to. The old method still exists, so no packages that you load can break from it. What am I missing?
A new package that does not know that ifNotNil: [ :value | ] is invalid in 3.8 will not load or compile in 3.8. So you promote compatibility and the ability to migrate, by fixing the OLD image, and migrating the code to the new API there.
The advantage of this is that your code base can move forward in situ, your packages don't have to use the old API, and you can maintain one codebase for all Squeak images, ever.
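For readers without a 3.8 image at hand, the incompatibility being discussed looks like this (a sketch of the two spellings only, not actual patch code; the real back-port has to touch the inlining compiler, since ifNotNil: is compiled inline):

```smalltalk
"Newer images compile the one-argument form:"
foo ifNotNil: [:value | Transcript show: value printString].

"A stock 3.8 compiler rejects that block argument; there you must write:"
foo ifNotNilDo: [:value | Transcript show: value printString].
```

The point is that patching the OLD image so the first form compiles there lets one codebase serve both, instead of freezing packages on the old spelling.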
You were probably just unaware that #ifNotNilDo: still existed.
I know it still exists.
Keith
keith wrote:
This is what happened in the case of ifNotNil: ifNotNilDo: merger.
I'm not sure what you're referring to. The old method still exists, so no packages that you load can break from it. What am I missing?
A new package that does not know that ifNotNil: [ :value | ] is invalid in 3.8 will not load or compile in 3.8. So you promote compatibility and the ability to migrate, by fixing the OLD image, and migrating the code to the new API there.
The advantage of this is that your code base can move forward in situ, your packages don't have to use the old API, and you can maintain one codebase for all Squeak images, ever.
You were probably just unaware that #ifNotNilDo: still existed.
I know it still exists.
Keith
Hi Folks,
(reposted, in the hope of not being ignored)
Package developers want their work to run on various Squeak versions and variants, without needing rewrite. Same for app developers.
Base image builders want to be free of the need to provide backwards compatibility.
This is what I suggest: a package assumes it can use a set of APIs of the Squeak (/Pharo/Cuis/Etoys/Tweak/Cobalt/etc.) environment. Those assumptions should be made explicit, in the form of tests. So, for example, for collections, some package developer might require the "Common Collection API tests" to pass. Then, if his package fails to run, let's say in Cuis, he would run the tests for the APIs he needs. If some test fails, he could say "Cuis developers, you're not supporting API XXX", and expect them to fix the issue. But if no test fails, he needs to either modify his code so it doesn't use non-standardized APIs, or negotiate with (all) base image developers the addition of a new API or use case to the test suite and the base images.
Building these suites is quite some work, mostly to be done by package developers. But it can easily point out responsibilities and duties. It frees package developers from needing deep knowledge of various base images. And it frees base image developers from needing to know details about an unbounded set of external packages. Besides, it puts popular packages that everybody wants to support on an equal footing with less-known packages. It also lets base image developers say "we support Common APIs xxx, yyy, zzz, etc.".
Everything I say about base images could also apply to packages that offer services to other packages: there could also be test suites to specify their services, allowing users to switch versions of the packages they use knowing what to expect.
What do you think?
Cheers, Juan Vuletich
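Juan's "Common Collection API tests" could take the shape of an ordinary SUnit test class. The sketch below is purely illustrative; the class name, category, and particular assertions are hypothetical, not an existing package:

```smalltalk
TestCase subclass: #CommonCollectionAPITest
    instanceVariableNames: ''
    classVariableNames: ''
    poolDictionaries: ''
    category: 'CommonAPI-Tests'

"Each test pins down one use case a package relies on."
CommonCollectionAPITest >> testOrderedCollectionAddAnswersElement
    | c |
    c := OrderedCollection new.
    self assert: (c add: 42) = 42.
    self assert: c size = 1

CommonCollectionAPITest >> testDictionaryAtIfAbsent
    | d |
    d := Dictionary new.
    self assert: (d at: #missing ifAbsent: ['default']) = 'default'
```

A base image claiming support for the "Common Collection API" would then simply be one that keeps this suite green.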
2010/1/25 Juan Vuletich juan@jvuletich.org:
keith wrote:
This is what happened in the case of ifNotNil: ifNotNilDo: merger.
I'm not sure what you're referring to. The old method still exists, so no packages that you load can break from it. What am I missing?
A new package that does not know that ifNotNil: [ :value | ] is invalid in 3.8 will not load or compile in 3.8. So you promote compatibility and the ability to migrate by fixing the OLD image and migrating the code to the new API there.
The advantage of this is that your code base can move forward in situ, your packages don't have to use the old API, and you can maintain one codebase for all Squeak images ever.
You were probably just unaware that #ifNotNilDo: still existed.
I know it still exists.
Keith
Hi Folks,
(reposted, in the hope of not being ignored)
Package developers want their work to run on various Squeak versions and variants, without needing rewrite. Same for app developers.
Base image builders want to be free of the need to provide backwards compatibility.
This is what I suggest: A package assumes it can use a set of APIs of the Squeak (/Pharo/Cuis/Etoys/Tweak/Cobalt/etc.) environment. Those assumptions should be made explicit, in the form of tests. So, for example, for collections, some package developer might require the "Common Collection API tests" to pass. Then, if his package fails to run, let's say in Cuis, he would run the tests for the APIs he needs. If some test fails, he could say "Cuis developers, you're not supporting API XXX", and expect them to fix the issue. But if no test fails, he needs to either modify his code so it doesn't use non-standardized APIs, or negotiate with (all) base image developers the addition of a new API or use case to the test suite and the base images.
Building these suites is quite some work, mostly to be done by package developers. But it can easily point out responsibilities and duties. It frees package developers from needing deep knowledge of various base images. And it frees base image developers from needing to know details about an unbounded set of external packages. Besides, it puts popular packages that everybody wants to support on an equal footing with less-known packages. It also lets base image developers say "we support Common APIs xxx, yyy, zzz, etc.".
Everything I say about base images could also apply to packages that offer services to other packages: there could also be test suites to specify their services, allowing users to switch versions of the packages they use knowing what to expect.
What do you think?
Cheers, Juan Vuletich
A quick-cheap analysis could be performed:
- list of classes extended by your packages
- list of classes subclassed by your packages
- list of methods used but not implemented by your packages
With type inference (Roel or other), it could be possible to get more. This could lead to tests like:
self assertHasClassNamed: #Array.
self assertClassNamed: #Array canUnderstand: #collect:. "If you can infer type"
self assertHasMessage: #at:put:. "If you cannot..."
etc... Doesn't that exist?
Of course, it should operate on a Set of packages...
Nicolas
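Nicolas's quick-cheap checks can be approximated with ordinary reflection, no type inference needed. An illustrative workspace snippet follows; the assertHas... selectors he proposes do not exist yet, so plain assertions over the system dictionary are used instead:

```smalltalk
"Does the class exist, and does it answer the message we need?"
self assert: (Smalltalk includesKey: #Array).
self assert: ((Smalltalk at: #Array) canUnderstand: #collect:).

"Without type inference, fall back to 'some class implements it':"
self assert: (Smalltalk allClasses
    detect: [:c | c includesSelector: #at:put:]
    ifNone: [nil]) notNil
```

Wrapping such checks in TestCase methods would give exactly the kind of per-package environment suite Nicolas sketches.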
On Mon, Jan 25, 2010 at 08:47:20PM +0100, Nicolas Cellier wrote:
2010/1/25 Juan Vuletich juan@jvuletich.org:
Hi Folks,
(reposted, in the hope of not being ignored)
Package developers want their work to run on various Squeak versions and variants, without needing rewrite. Same for app developers.
Base image builders want to be free of the need to provide backwards compatibility.
This is what I suggest: A package assumes it can use a set of APIs of the Squeak (/Pharo/Cuis/Etoys/Tweak/Cobalt/etc.) environment. Those assumptions should be made explicit, in the form of tests. So, for example, for collections, some package developer might require the "Common Collection API tests" to pass. Then, if his package fails to run, let's say in Cuis, he would run the tests for the APIs he needs. If some test fails, he could say "Cuis developers, you're not supporting API XXX", and expect them to fix the issue. But if no test fails, he needs to either modify his code so it doesn't use non-standardized APIs, or negotiate with (all) base image developers the addition of a new API or use case to the test suite and the base images.
Building these suites is quite some work, mostly to be done by package developers. But it can easily point out responsibilities and duties. It frees package developers from needing deep knowledge of various base images. And it frees base image developers from needing to know details about an unbounded set of external packages. Besides, it puts popular packages that everybody wants to support on an equal footing with less-known packages. It also lets base image developers say "we support Common APIs xxx, yyy, zzz, etc.".
Everything I say about base images could also apply to packages that offer services to other packages: there could also be test suites to specify their services, allowing users to switch versions of the packages they use knowing what to expect.
What do you think?
Cheers, Juan Vuletich
A quick-cheap analysis could be performed:
- list of classes extended by your packages
- list of classes subclassed by your packages
- list of methods used but not implemented by your packages
With type inference (Roel or other), it could be possible to get more. This could lead to tests like:
self assertHasClassNamed: #Array.
self assertClassNamed: #Array canUnderstand: #collect:. "If you can infer type"
self assertHasMessage: #at:put:. "If you cannot..."
etc... Doesn't that exist?
Of course, it should operate on a Set of packages...
I like Juan's idea a lot, but I lost some enthusiasm when I got to the part about it being a lot of work ;-)
Maybe by starting with the "quick-cheap" analysis that Nicolas suggests, it might be manageable.
I think that it would be important that the work be done in small chunks that can be contributed easily. We need to consider who is doing the work, and why they would be motivated to spend time on it. For example, the OSProcess package that I maintain (and I don't know if this is a good example) already has a large set of unit tests that fail right away if an expected interface changes. I would be willing to put some work into writing new tests that document just the api expectations alone, but I would not want to sink a large amount of time into it because it's likely to be boring work that does not provide much additional benefit to me.
So I like the idea, but let's keep it as simple and easy as possible.
Dave
On 2010-01-25, at 1:39 PM, David T. Lewis wrote:
I like Juan's idea a lot, but I lost some enthusiasm when I got to the part about it being a lot of work ;-)
Maybe by starting with the "quick-cheap" analysis that Nicolas suggests, it might be manageable.
I think that it would be important that the work be done in small chunks that can be contributed easily. We need to consider who is doing the work, and why they would be motivated to spend time on it. For example, the OSProcess package that I maintain (and I don't know if this is a good example) already has a large set of unit tests that fail right away if an expected interface changes. I would be willing to put some work into writing new tests that document just the api expectations alone, but I would not want to sink a large amount of time into it because it's likely to be boring work that does not provide much additional benefit to me.
I think you've hit the nail on the head here. Tests are indeed useful, but they work best when they test the functionality of interest. The base-level APIs are only interesting insofar as they affect the functionality that OSProcess provides. If you have a solid set of tests for OSProcess, and they all pass, who cares about the APIs?
From a more practical perspective, writing tests for OSProcess directly is simply easier. You can pin down the functionality you're after. (If you can't, why the heck are you writing it?) The environment that OSProcess expects to run in is much harder to specify. Should you, say, test that Dictionary implements #at:put:? Or is that assumed to be so universal in a Smalltalk implementation that it's not worth testing? Trying to specify exactly what OSProcess expects from its environment is an exercise in frustration. The only way to do it is to do a port, and see what breaks. This is what the Grease developers have done, and even limited to things that have proven to be portability issues it's a big task.
In summary, I think a better approach is to write lots of good tests for your package, and rely on them to tell you if the environment isn't what is needed.
Colin
On Jan 25, 2010, at 8:02 PM, Colin Putney wrote:
On 2010-01-25, at 1:39 PM, David T. Lewis wrote:
I like Juan's idea a lot, but I lost some enthusiasm when I got to the part about it being a lot of work ;-)
Maybe by starting with the "quick-cheap" analysis that Nicolas suggests, it might be manageable.
I think that it would be important that the work be done in small chunks that can be contributed easily. We need to consider who is doing the work, and why they would be motivated to spend time on it. For example, the OSProcess package that I maintain (and I don't know if this is a good example) already has a large set of unit tests that fail right away if an expected interface changes. I would be willing to put some work into writing new tests that document just the api expectations alone, but I would not want to sink a large amount of time into it because it's likely to be boring work that does not provide much additional benefit to me.
I think you've hit the nail on the head here. Tests are indeed useful, but they work best when they test the functionality of interest. The base-level APIs are only interesting insofar as they affect the functionality that OSProcess provides. If you have a solid set of tests for OSProcess, and they all pass, who cares about the APIs?
From a more practical perspective, writing tests for OSProcess directly is simply easier. You can pin down the functionality you're after. (If you can't, why the heck are you writing it?) The environment that OSProcess expects to run in is much harder to specify. Should you, say, test that Dictionary implements #at:put:? Or is that assumed to be so universal in a Smalltalk implementation that it's not worth testing? Trying to specify exactly what OSProcess expects from its environment is an exercise in frustration. The only way to do it is to do a port, and see what breaks. This is what the Grease developers have done, and even limited to things that have proven to be portability issues it's a big task.
In summary, I think a better approach is to write lots of good tests for your package, and rely on them to tell you if the environment isn't what is needed.
I agree with this. If many people are writing tests for their respective codebases, then it is very likely that someone's test will notice breakages in the libraries that they rely on. Plus, every test will be in the context of a real use-case; as Colin notes, it's difficult to reliably anticipate which use-cases to write tests for, and to avoid wasting time on trivial and unnecessary tests.
Cheers, Josh
Colin
Colin Putney wrote:
On 2010-01-25, at 1:39 PM, David T. Lewis wrote:
I like Juan's idea a lot, but I lost some enthusiasm when I got to the part about it being a lot of work ;-)
Maybe by starting with the "quick-cheap" analysis that Nicolas suggests, it might be manageable.
I think that it would be important that the work be done in small chunks that can be contributed easily. We need to consider who is doing the work, and why they would be motivated to spend time on it. For example, the OSProcess package that I maintain (and I don't know if this is a good example) already has a large set of unit tests that fail right away if an expected interface changes. I would be willing to put some work into writing new tests that document just the api expectations alone, but I would not want to sink a large amount of time into it because it's likely to be boring work that does not provide much additional benefit to me.
I think you've hit the nail on the head here. Tests are indeed useful, but they work best when they test the functionality of interest. The base-level APIs are only interesting insofar as they affect the functionality that OSProcess provides. If you have a solid set of tests for OSProcess, and they all pass, who cares about the APIs?
From a more practical perspective, writing tests for OSProcess directly is simply easier. You can pin down the functionality you're after. (If you can't, why the heck are you writing it?) The environment that OSProcess expects to run in is much harder to specify. Should you, say, test that Dictionary implements #at:put:? Or is that assumed to be so universal in a Smalltalk implementation that it's not worth testing? Trying to specify exactly what OSProcess expects from its environment is an exercise in frustration. The only way to do it is to do a port, and see what breaks. This is what the Grease developers have done, and even limited to things that have proven to be portability issues it's a big task.
In summary, I think a better approach is to write lots of good tests for your package, and rely on them to tell you if the environment isn't what is needed.
Colin
You're right, but there's Grease. If other packages besides Seaside adopt it, it is a win-win.
Cheers, Juan Vuletich
On Jan 25, 2010, at 12:36 PM, Juan Vuletich wrote:
Hi Folks,
(reposted, in the hope of not being ignored)
Package developers want their work to run on various Squeak versions and variants, without needing rewrite. Same for app developers.
Base image builders want to be free of the need to provide backwards compatibility.
This is what I suggest: A package assumes it can use a set of APIs of the Squeak (/Pharo/Cuis/Etoys/Tweak/Cobalt/etc.) environment. Those assumptions should be made explicit, in the form of tests. So, for example, for collections, some package developer might require the "Common Collection API tests" to pass. Then, if his package fails to run, let's say in Cuis, he would run the tests for the APIs he needs. If some test fails, he could say "Cuis developers, you're not supporting API XXX", and expect them to fix the issue. But if no test fails, he needs to either modify his code so it doesn't use non-standardized APIs, or negotiate with (all) base image developers the addition of a new API or use case to the test suite and the base images.
Building these suites is quite some work, mostly to be done by package developers. But it can easily point out responsibilities and duties. It frees package developers from needing deep knowledge of various base images. And it frees base image developers from needing to know details about an unbounded set of external packages. Besides, it puts popular packages that everybody wants to support on an equal footing with less-known packages. It also lets base image developers say "we support Common APIs xxx, yyy, zzz, etc.".
Everything I say about base images could also apply to packages that offer services to other packages: there could also be test suites to specify their services, allowing users to switch versions of the packages they use knowing what to expect.
What do you think?
The overall concept makes sense to me re: getting to a common set of APIs. It would be nice to have it in the form of formal protocols eventually but tests would provide a simpler starting point. This wouldn't just be helpful to image and package maintainers, but also to developers in general as API documentation is often lacking and an effort like this would (hopefully) encourage developers to better document what they produce.
Cheers, Juan Vuletich
Thanks, Phil
Hi Folks,
(reposted, in the hope of not being ignored)
Package developers want their work to run on various Squeak versions and variants, without needing rewrite. Same for app developers.
Base image builders want to be free of the need to provide backwards compatibility.
This is what I suggest: A package assumes it can use a set of APIs of the Squeak (/Pharo/Cuis/Etoys/Tweak/Cobalt/etc.) environment. Those assumptions should be made explicit, in the form of tests. So, for example, for collections, some package developer might require the "Common Collection API tests" to pass. Then, if his package fails to run, let's say in Cuis, he would run the tests for the APIs he needs. If some test fails, he could say "Cuis developers, you're not supporting API XXX", and expect them to fix the issue. But if no test fails, he needs to either modify his code so it doesn't use non-standardized APIs, or negotiate with (all) base image developers the addition of a new API or use case to the test suite and the base images.
Agreed wholeheartedly.
For this vision to have a chance, one thing is absolutely essential: SUnit must be common between the forks, and there must be some way of flagging known exceptions for different target images. This is something I attempted to add to SUnit in August 2006, in eager anticipation.
The second essential thing is for the package loading tools to also be in common. That means Monticello (in my book, though probably not in yours).
However, most forks imho are keeping all of their libraries too close to their chests.
All efforts to change this, to move obvious loadable libraries like SUnit and MC out to be externally managed, have up to now failed. The weakness of my attempts so far has been in the testing side of things. (Matthew Fulmer is worth his weight in gold on that one.)
However, Monticello is a complicated beast. I may have made 400 more commits, merging 3 forks, but one or two bugs is all it takes to reject the entire refactoring of the repositories code, the improved, more uniform UI implementation, the password manager, the dual change sorter, the orphanage for out-of-order loading, public package-info properties for package managers, scripting of commits, memory analysis per package, the atomic loader, cleanUp code, improved version numbering, integrated Configurations, separated tests, default PackageInfo package types, etc. etc. etc.
I always needed others who are more rigorous to join in and help, but so far the vision hasn't caught on.
I now think it is going to fall to the forks for whom the libraries are already genuinely optional, to pioneer this process. i.e. Cuis.
Building these suites is quite some work, mostly to be done by package developers.
As I said, you could try to treat what is perceived as an integral library as an external package, maintained by a package developer, with the API maintained by "actual conversation" between the fork leaders and the package maintainers. But the fork controllers won't have any of it; they forked for the purpose of retaining control, and wild horses won't shift them.
I tried, I asked, I begged, I cried, I explained, and I ranted, in the belief that it was now or never. Up until Pharo, all "forks" were basically differing applications on the same evolving kernel. With Pharo this is different: they are moving the kernel in a different direction on purpose. Yet for some reason they believe that forking SUnit, an obviously loadable package, is necessary too!
Correct me if I am wrong, but to my thinking, if SUnit is forked, your vision is pretty much doomed.
SUnit is forked.
But it can easily point out responsibilities and duties. It frees package developers of needing to have a deep knowledge of various base images. And it frees base image developers from needing to know details about an unbounded set of external packages. Besides, it puts popular packages that everybody wants to support on equal footing with less-known packages. It also lets base image developers say "we support Common APIs xxx, yyy, zzz, etc.".
All what I say about base images could also apply to packages that offer services to other packages: There could also be test suites to specify their services, and allow users to switch versions of the packages they use knowing what to expect.
What do you think?
Like I say, I agree completely, however this is "planning", this is "thinking", this is showing "leadership", and defining a conceptual "process", with conceptual roles and responsibilities.
However, may I point out that we don't need to do that here: we have a shared repository you can commit your changes to; it's called "trunk".
Keith
keith wrote:
Hi Folks,
(reposted, in the hope of not being ignored)
Package developers want their work to run on various Squeak versions and variants, without needing rewrite. Same for app developers.
Base image builders want to be free of the need to provide backwards compatibility.
This is what I suggest: A package assumes it can use a set of APIs of the Squeak (/Pharo/Cuis/Etoys/Tweak/Cobalt/etc.) environment. Those assumptions should be made explicit, in the form of tests. So, for example, for collections, some package developer might require the "Common Collection API tests" to pass. Then, if his package fails to run, let's say in Cuis, he would run the tests for the APIs he needs. If some test fails, he could say "Cuis developers, you're not supporting API XXX", and expect them to fix the issue. But if no test fails, he needs to either modify his code so it doesn't use non-standardized APIs, or negotiate with (all) base image developers the addition of a new API or use case to the test suite and the base images.
Agreed wholeheartedly.
For this vision to have a chance, one thing is absolutely essential: SUnit must be common between the forks, and there must be some way of flagging known exceptions for different target images. This is something I attempted to add to SUnit in August 2006, in eager anticipation.
Why? All that is needed is to be able to run the same tests on all forks. That is asking a lot less than requiring the SUnit package to be exactly the same... See Julian's recent message about Seaside and Grease. It even works across Smalltalk dialects.
The second essential thing is for the package loading tools to also be in common. That means Monticello (in my book, though probably not in yours).
Why? This has nothing to do with how code is loaded into each environment. Package developers might choose between ChangeSets, Monticello, or possibly other options.
However, most forks imho are keeping all of their libraries too close to their chests.
This initiative (actually Grease) allows each fork to do exactly that, while having guaranteed compatibility. It is the best of both worlds.
All efforts to change this, to move obvious loadable libraries like SUnit and MC out to be externally managed, have up to now failed. The weakness of my attempts so far has been in the testing side of things. (Matthew Fulmer is worth his weight in gold on that one.)
However, Monticello is a complicated beast. I may have made 400 more commits, merging 3 forks, but one or two bugs is all it takes to reject the entire refactoring of the repositories code, the improved, more uniform UI implementation, the password manager, the dual change sorter, the orphanage for out-of-order loading, public package-info properties for package managers, scripting of commits, memory analysis per package, the atomic loader, cleanUp code, improved version numbering, integrated Configurations, separated tests, default PackageInfo package types, etc. etc. etc.
Those are package-specific problems. I suggest getting in touch with the Monticello developers to merge your changes.
...
Correct me if I am wrong, but in my thinking if SUnit is forked, your vision is pretty doomed.
As I said above, I see no reason for this.
...
Cheers, Juan Vuletich
All efforts to change this, to move obvious loadable libraries like SUnit and MC out to be externally managed, have up to now failed. The weakness of my attempts so far has been in the testing side of things. (Matthew Fulmer is worth his weight in gold on that one.)
However, Monticello is a complicated beast. I may have made 400 more commits, merging 3 forks, but one or two bugs is all it takes to reject the entire refactoring of the repositories code, the improved, more uniform UI implementation, the password manager, the dual change sorter, the orphanage for out-of-order loading, public package-info properties for package managers, scripting of commits, memory analysis per package, the atomic loader, cleanUp code, improved version numbering, integrated Configurations, separated tests, default PackageInfo package types, etc. etc. etc.
Those are package-specific problems. I suggest getting in touch with the Monticello developers to merge your changes.
Matthew and I were the Monticello maintainers for three years, after there had been none for at least a year. That was the whole point of setting up a shared repository, squeaksource.com/mc, so that Monticello could be maintained and worked on by anyone who knew how.
Most of us work with the latest of established packages on a day-to-day basis. Yet for some reason, both Pharo and "trunk" adopted the ancient version. There are no more bugs in the new version; the existing bugs are just in slightly different places. The new version passes Lukas's "difficult test case", whereas the old one doesn't.
Keith
keith wrote:
Most of us work with the latest of established packages on a day to day basis. Yet for some reason, both Pharo and "trunk" adopted the ancient version.
Simple answer: The old version works and it had tons of mileage. When I asked for feedback on who's been using MC 1.5 and 1.6 I drew blanks from anyone but you and Matthew. When I then tried to see whether one of these versions could do everything that the current shipping version can do I ran into the issues described here:
http://lists.squeakfoundation.org/pipermail/squeak-dev/2009-October/140345.h...
The point being that if the new version can't deal with all the cases that the old version could, then it's probably not ready for adoption yet. If the issues listed above have been addressed since I'd be happy to repeat the experiment.
Cheers, - Andreas
On 27 Jan 2010, at 02:15, Andreas Raab wrote:
keith wrote:
Most of us work with the latest of established packages on a day to day basis. Yet for some reason, both Pharo and "trunk" adopted the ancient version.
Simple answer: The old version works and it had tons of mileage. When I asked for feedback on who's been using MC 1.5 and 1.6 I drew blanks from anyone but you and Matthew. When I then tried to see whether one of these versions could do everything that the current shipping version can do I ran into the issues described here:
http://lists.squeakfoundation.org/pipermail/squeak-dev/2009-October/140345.h...
The point being that if the new version can't deal with all the cases that the old version could, then it's probably not ready for adoption yet. If the issues listed above have been addressed since I'd be happy to repeat the experiment.
Cheers,
- Andreas
Hi Andreas,
MC1.5 has quite a few users out there, anyone who uses LPF, which included Randal for a start.
I would expect MC1.5 to be stable enough; this is the one with the atomic loading preference turned OFF.
The email you reference above is referring to MC1.6 (MCPackageLoader2). This is the experimental atomic-loading loader, which everyone knows isn't finished; no one ever claimed it was stable. We only ever claimed it would be really worth finishing, and I had been asking for help with it for more than 18 months, because it is not my area of expertise at all, and Matthew had got stuck AFAIK.
So the point being: if you test the wrong thing, you won't get the results you hoped for.
cheers
Keith
2010/1/27 keith keith_hodges@yahoo.co.uk:
On 27 Jan 2010, at 02:15, Andreas Raab wrote:
keith wrote:
Most of us work with the latest of established packages on a day to day basis. Yet for some reason, both Pharo and "trunk" adopted the ancient version.
Simple answer: The old version works and it had tons of mileage. When I asked for feedback on who's been using MC 1.5 and 1.6 I drew blanks from anyone but you and Matthew. When I then tried to see whether one of these versions could do everything that the current shipping version can do I ran into the issues described here:
http://lists.squeakfoundation.org/pipermail/squeak-dev/2009-October/140345.h...
The point being that if the new version can't deal with all the cases that the old version could, then it's probably not ready for adoption yet. If the issues listed above have been addressed since I'd be happy to repeat the experiment.
I tried to look at the SystemEditor code a while ago. It would be cool, first, to make all its tests green. But I found some discrepancies between what the tests say and what they actually do.
I don't think it will be possible to fix Traits support without the author of SystemEditor. Only then could we move on and try using it for atomic loading (in MC, DS, or whatever).
Cheers, - Andreas
Hi Andreas,
MC1.5 has quite a few users out there, anyone who uses LPF, which included Randal for a start.
I would expect MC1.5 to be stable enough; this is the one with the atomic loading preference turned OFF.
The email you reference above is referring to MC1.6 (MCPackageLoader2). This is the experimental atomic-loading loader, which everyone knows isn't finished; no one ever claimed it was stable. We only ever claimed it would be really worth finishing, and I had been asking for help with it for more than 18 months, because it is not my area of expertise at all, and Matthew had got stuck AFAIK.
So the point being: if you test the wrong thing, you won't get the results you hoped for.
cheers
Keith
keith wrote:
MC1.5 has quite a few users out there, anyone who uses LPF, which included Randal for a start.
I would expect MC1.5 to be stable enough, this is the one with the atomic loading preference turned OFF.
The email you reference above is referring to MC1.6 (MCPackageLoader2). This is the experimental atomic-loading loader, which everyone knows isn't finished; no one ever claimed it was stable. We only ever claimed it would be really worth finishing, and I had been asking for help with it for more than 18 months, because it is not my area of expertise at all, and Matthew had got stuck AFAIK.
So the point being: if you test the wrong thing, you won't get the results you hoped for.
I tested what I was told by the best authority at hand. How does one load and test the "right" thing?
Cheers, - Andreas
On 27 Jan 2010, at 07:24, Andreas Raab wrote:
keith wrote:
MC1.5 has quite a few users out there, anyone who uses LPF, which included Randal for a start.
I would expect MC1.5 to be stable enough; this is the one with the atomic loading preference turned OFF.
The email you reference above is referring to MC1.6 (MCPackageLoader2). This is the experimental atomic-loading loader, which everyone knows isn't finished; no one ever claimed it was stable. We only ever claimed it would be really worth finishing, and I had been asking for help with it for more than 18 months, because it is not my area of expertise at all, and Matthew had got stuck AFAIK.
So the point being: if you test the wrong thing, you won't get the results you hoped for.
I tested what I was told by the best authority at hand. How does one load and test the "right" thing?
It sounds like you loaded the right thing. You just need to turn the "useAtomicLoading" preference off; this uses MCPackageLoader1b, the non-atomic-loading code.
Keith
keith wrote:
It sounds like you loaded the right thing. You just need to turn the "useAtomicLoading" preference off, this uses MCPackageLoader1b, the non atomic loading code.
So I tried this too, and it doesn't get very far either (same process described earlier). It fails very early on, when it's trying to load a variant of ProgressInitiationException>>defaultMorphicAction. The failure originates from MCMethodDefinition>>preloadOver:, which actually *removes* the selector that is being changed.
This is a fatal flaw, since removing a method that's about to be modified can't possibly work for any system-critical methods (the compiler comes to mind, but even the progress display is enough to blow up). This version won't be able to deal with many, many changes that went into the trunk.
Cheers, - Andreas
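The failure mode Andreas describes — removing a selector that the running system still needs before its replacement is installed — can be modelled in a few lines. This is a Python toy, not Squeak code; all names here are invented for illustration:

```python
# Simulate a method dictionary that the loader itself depends on mid-load.
methods = {"show_progress": lambda: "old progress bar"}

def loader_tick():
    # The loader refreshes the progress display between load steps,
    # so this selector must stay callable for the whole load.
    return methods["show_progress"]()

def load_remove_then_add(name, new_fn):
    del methods[name]       # window where the selector does not exist
    loader_tick()           # fails here: the system needs the method right now
    methods[name] = new_fn

def load_replace_in_place(name, new_fn):
    methods[name] = new_fn  # old code answers until the new code is installed
    return loader_tick()

try:
    load_remove_then_add("show_progress", lambda: "new progress bar")
    remove_then_add_survived = True
except KeyError:
    remove_then_add_survived = False

replaced = load_replace_in_place("show_progress", lambda: "new progress bar")
```

Replacing in place leaves no window in which a system-critical selector is missing, which is why remove-then-recompile cannot work for methods the load process itself uses.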
At 11:16 PM -0800 1/27/10, Andreas Raab apparently wrote:
Andreas, I am finding it somewhat odd that you seem to be willing to hack your ancient trunk fork of MC any which way, when, with the big potential wins of the MC1.5/1.6 versions, someone like yourself, with the necessary detailed system knowledge, might be able to apply some appropriate fixes to bring MC into a better working state. Even if it might be an interim solution, it seems to me there could be some real gains. How far from stable working versions can MC1.5/1.6 be? Both Matthew and Keith have used MC 1.5 and 1.6 fairly extensively, though perhaps not exactly for the types of things you have been doing. Maybe I don't fully appreciate the effort required, but to my mind, if the ancient version is able to do what you need, the newer versions must be close too. As Keith recently mentioned:
At 10:00 PM +0000 1/26/10, keith apparently wrote:
All efforts to change this, to move obvious loadable libraries like SUnit and MC out to be externally managed, have up to now failed. The weakness of my attempts so far has been on the testing side of things. (Matthew Fulmer is worth his weight in gold on that one.)
However, Monticello is a complicated beast. I may have made 400 more commits, merging 3 forks, but one or two bugs is all it takes to reject the entire lot: the refactoring of the repositories code, the improved, more uniform UI implementation, the password manager, the dual change sorter, the orphanage for out-of-order loading, public package-info properties for package managers, scripting of commits, memory analysis per package, the atomic loader, cleanUp code, improved version numbering, integrated Configurations, separated tests, default PackageInfo package types, etc.
Those are package specific problems. I suggest getting in touch with Monticello developers to merge your changes.
Matthew and I were the Monticello maintainers for 3 years, after there had been none for at least a year. That was the whole point of setting up a shared repository, squeaksource.com/mc, so that Monticello could be maintained and worked on by anyone who knew how.
Most of us work with the latest versions of established packages on a day-to-day basis. Yet for some reason, both Pharo and "trunk" adopted the ancient version. There are no more bugs in the new version than in the old; the existing bugs are just in slightly different places. The new version passes Lukas' "difficult test case", whereas the old one doesn't.
Keith
Ken G. Brown
Hi Ken -
Ken G. Brown wrote:
I am finding it somewhat odd that you seem to be willing to hack your ancient trunk fork of MC any which way, but with the big potential wins of the MC1.5/1.6 versions, I'm thinking someone like yourself with the necessary detailed system knowledge, might be able to apply some appropriate fixes to bring MC into a better working state. Even if it might be an interim solution, it seems to me there could be some real gains.
I think you're misunderstanding my relationship with MC. I don't particularly like MC, I find the code base very hard to understand and very difficult to follow due to the complete lack of any comments. Self-documenting code my a** as one of my colleagues here would say.
The amount of MC that I have mastered is *just* the package loader and only because I had to in order to fix various broken operations. In the process of mastering it I've added comments to explain what I think happens but from some of the discussion it is obvious that I'm still missing various subtleties.
Looking at MC 1.5 I found nothing but more undocumented changes. The practical problem I've encountered makes no sense whatsoever to me (removing a changed method prior to compiling the new version cannot possibly work) but it's actually *different* from the prior version and there is no context to judge *why* it's different.
So if you're asking me why I'm not fixing Monticello one part of the answer is: I don't like MC, and unless I absolutely have to I'll stay away from it.
However, just as importantly, when we change a critical piece of the infrastructure I'm also looking for who will be providing support for any issues that come up. If something goes wrong in one of these areas, we need help and that help needs to come from the people who wrote that code. A great example in this regard were the recent changes to both CompiledMethodTrailer (Igor) and the >32MB changes file (David).
Both of these had issues come up that could have been the result of the changes they did. In both cases, Igor and David were *happy* to help and to dig into the details to make sure people get over these significant changes in the system. They didn't ask the users to fix the problem, they said "thanks for reporting the issue, could you try this or that, and here's the fix".
That's the reaction I'm looking for when someone wants to push changes to the core system. I'm *not* looking for them telling their testers to go fix it themselves (neither am I looking for people dumping their unfinished projects into my lap). That's the wrong attitude.
So if you want to get a newer version of MC into the trunk, the process is this:
a) Fix the issues that have been reported so far.
b) Run the experiment described in http://lists.squeakfoundation.org/pipermail/squeak-dev/2009-October/140345.h... to ensure you can load everything in the trunk.
c) Report your success here.
At the point where you've successfully re-run the updates in the trunk to ensure that your version of MC can deal with everything currently in use we're in a situation where a switch to MC 1.5 / 1.6 is a win-win situation.
Obviously, you don't have to do this all by yourself; if you can find help from people that works just as well. And if you add a few comments explaining what precisely the loader does and why that order and these operations are necessary, I might even give it a second look myself.
Cheers, - Andreas
Andreas, I read about your concerns about MC, and I feel that MC is currently abandonware, since it's not maintained by anyone, yet it is the de-facto standard tool for sharing code in the Squeak universe. This situation is really bad, and we need to change it.
Also, I strongly think that MC should be maintained as a separate entity, because it is crucial for developers to have a tool that can be used to share code among forks, and such a tool should therefore behave the same everywhere.
On 2 February 2010 08:25, Andreas Raab andreas.raab@gmx.de wrote:
Igor Stasenko wrote:
Andreas, i read about your concerns about MC and i feel like MC currently is abandonware, since its not maintained by anyone, yet it is de-facto standard tool for sharing the code in squeak universe. This situation is really bad and we need to change it.
So what are you proposing to change it?
Cheers, - Andreas
On 2 February 2010 10:23, Andreas Raab andreas.raab@gmx.de wrote:
So what are you proposing to change it?
I thought I already said that: we need a separate, maintained MC repository. If you remember, I raised this topic before, but still no one has been found who would take care of it.
On 2010-02-01, at 11:56 PM, Igor Stasenko wrote:
Andreas, i read about your concerns about MC and i feel like MC currently is abandonware, since its not maintained by anyone, yet it is de-facto standard tool for sharing the code in squeak universe. This situation is really bad and we need to change it.
Ok, I volunteer.
Igor's right - it's obvious that MC needs an active maintainer. I've been reluctant to step into this role because I'd rather devote my Squeaking time to MC2. So this will delay the release of MC 2.1, but may make the migration path smoother. I'll post more once I've had a chance to look at the state of the art and figure out a road map.
Colin
On 02.02.2010, at 20:19, Colin Putney wrote:
Ok, I volunteer.
Yay!
I don't really have time to work on it, but having written my share of MC code, if you want to discuss anything you're welcome :) I actually found MC quite understandable, even without many comments ...
- Bert -
Bert Freudenberg wrote:
Yay!
Indeed!
I don't really have time to work on it, but having written my share of MC code, if you want to discuss anything you're welcome :) I actually found MC quite understandable, even without many comments ...
That's because you're smarter than me :-) But I'll use that opportunity to learn more about MC and help writing meaningful comments.
Cheers, - Andreas
Thank you Colin! Very much!
On 3 February 2010 08:12, Andreas Raab andreas.raab@gmx.de wrote:
At 8:19 PM -0800 2/2/10, Colin Putney apparently wrote:
Ok, I volunteer.
Matthew Fulmer's 'place for me to keep the scattered info on monticello versions 1.5 and 1.6 until it has a proper home and is released'
http://installer.pbworks.com/Monticello15
Ken G. Brown
Hi Colin -
In your newly found role as MC maintainer, I have a question for you: I am wondering about the role of errorDefinitions in MCPackageLoader. What errors are they supposed to capture? The only one I am aware of is when a variable gets moved up or down inside a package. Are there others?
(the reason I'm asking is that I fixed this case today with help from Eliot, so we may be able to remove this unnecessary complexity)
Cheers, - Andreas
On 2010-02-12, at 11:17 PM, Andreas Raab wrote:
In your newly found role as MC maintainer, I have a question for you: I am wondering about the role of errorDefinitions in MCPackageLoader. What errors are they supposed to capture? The only one I am aware of is when a variable gets moved up or down inside a package. Are there others?
I don't think this was meant as a workaround for a known problem. It's just a way to gracefully handle unexpected errors while loading a package.
I'm setting up a new repository for MC1 development, where we can collect the history relevant to the current trunk version, and commit fixes and further development. I figure the best way to distribute is just to push to the trunk repository, but we'll at least have well-defined releases. Is the fix you and Eliot made available somewhere?
Colin
At 8:22 PM -0800 2/14/10, Colin Putney apparently wrote:
I'm setting up a new repository for MC1 development, where we can collect the history relevant to the current trunk version, and commit fixes and further development. I figure the best way to distribute is just to push to the trunk repository, but we'll at least have well-defined releases. Is the fix you and Eliot made available somewhere?
There already is an established repo at http://www.squeaksource.com/mc.html, which gathered together all the previous state of the art of MC before trunk and Pharo continued down their divergent MC paths based on the ancient version.
See also: Matthew Fulmer's 'place for me to keep the scattered info on monticello versions 1.5 and 1.6 until it has a proper home and is released'
http://installer.pbworks.com/Monticello15
Ken G. Brown
On 2010-02-14, at 11:12 PM, Ken G. Brown wrote:
There already is an established repo at http://www.squeaksource.com/mc.html which gathered together all the previous state of the art of MC before trunk and Pharo continued their divergent MC paths on the ancient version.
See also: Matthew Fulmer's 'place for me to keep the scattered info on monticello versions 1.5 and 1.6 until it has a proper home and is released'
Yeah, I'll have to dig through that stuff to see what features should be pulled back into the main line of development. I prefer to use my own repository for putting together releases, though.
Colin
Colin Putney wrote:
I'm setting up a new repository for MC1 development, where we can collect the history relevant to the current trunk version, and commit fixes and further development. I figure the best way to distribute is just to push to the trunk repository, but we'll at least have well-defined releases. Is the fix you and Eliot made available somewhere?
I pushed it to trunk yesterday. The relevant MC part is very simple: just a handler for DuplicateVariableError in MCPackageLoader>>basicLoad that resumes the exception.
Cheers, - Andreas
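Andreas's fix relies on Squeak's resumable exceptions: the handler tells the signal to carry on rather than unwind. Python exceptions cannot be resumed, so the following is only a loose sketch of the idea, modelling "resume" with an optional handler callback; all names and the data model are invented, not actual Monticello code:

```python
class DuplicateVariableError(Exception):
    """Raised when a class re-declares a variable it now inherits."""

def define_class(inherited_vars, declared_vars, on_duplicate=None):
    # A duplicate typically appears when a variable is moved up or down
    # the hierarchy within one package load.
    dups = sorted(set(inherited_vars) & set(declared_vars))
    if dups:
        err = DuplicateVariableError(dups)
        if on_duplicate is None or not on_duplicate(err):
            raise err  # default behaviour: the load fails
        # "resumed": carry on, keeping a single copy of each variable
    return inherited_vars + [v for v in declared_vars if v not in inherited_vars]

# Without a handler the duplicate is fatal; a resuming handler lets the load continue.
try:
    define_class(["x"], ["x", "y"])
    unhandled_ok = True
except DuplicateVariableError:
    unhandled_ok = False

resolved = define_class(["x"], ["x", "y"], on_duplicate=lambda err: True)
```

The point of the trunk fix is exactly this shape: the loader installs a handler that treats the duplicate as benign and resumes, instead of aborting the whole load.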
Ok, well, comments like this must be why my hindbrain is looking at Ginsu again.
On 2010-02-01, at 10:25 PM, Andreas Raab wrote:
I think you're misunderstanding my relationship with MC. I don't particularly like MC, I find the code base very hard to understand and very difficult to follow due to the complete lack of any comments. Self-documenting code my a** as one of my colleagues here would say.
--
John M. McIntosh johnmci@smalltalkconsulting.com Twitter: squeaker68882
Corporate Smalltalk Consulting Ltd. http://www.smalltalkconsulting.com
On 28 Jan 2010, at 07:16, Andreas Raab wrote:
Sigh,
Like I said, different bugs in different places.
The 1b loader tries to pre-empt problems, whereas the original loader just waits for the problems to occur, and catches the exceptions. It turns out that the 1b approach is a lot harder.
If I recall (and my memory is failing), methods are removed in order to accommodate Lukas' difficult load scenario, where the change in the instance vars breaks things.
We did think of returning to the old loader approach at one point, but figured it would be better to get AtomicLoading to work. So now we are caught in the middle.
Keith
On Thu, Jan 28, 2010 at 12:25 PM, keith keith_hodges@yahoo.co.uk wrote:
If I recall, and my memory is failing, methods are removed, in order to accomodate lukas' difficult load scenario, where the change in the instance vars breaks things.
What is this scenario? Does it break the following load ordering?
- add all new inst and class vars to existing classes (avoiding deleting any that need to be deleted until later)
- add all new classes
- add all new methods
- change all to-be-changed methods
- delete all to-be-deleted methods
- remove all inst and class vars from existing classes (deferred from step 1)
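Eliot's proposed ordering can be sketched as a small simulation. This is a purely illustrative Python toy (Monticello is Smalltalk); the diff/class data model here is invented for the example:

```python
def apply_diff(classes, diff):
    """Apply a package diff in the safe order: widen first, narrow last.
    `classes` maps class name -> {"vars": set, "methods": dict}."""
    # 1. add new inst/class vars to existing classes (defer all removals)
    for cls, var in diff.get("add_vars", []):
        classes[cls]["vars"].add(var)
    # 2. add new classes
    for cls in diff.get("add_classes", []):
        classes[cls] = {"vars": set(), "methods": {}}
    # 3./4. add new methods and change existing ones (both plain installs)
    for cls, sel, src in diff.get("add_methods", []) + diff.get("change_methods", []):
        classes[cls]["methods"][sel] = src
    # 5. delete methods, only after their replacements are in place
    for cls, sel in diff.get("delete_methods", []):
        del classes[cls]["methods"][sel]
    # 6. remove vars, deferred from step 1
    for cls, var in diff.get("remove_vars", []):
        classes[cls]["vars"].discard(var)
    return classes

# Worked example: migrate Point from var `x` (accessor #x) to var `y` (accessor #y).
classes = {"Point": {"vars": {"x"}, "methods": {"x": "^ x"}}}
diff = {
    "add_vars": [("Point", "y")],
    "add_methods": [("Point", "y", "^ y")],
    "delete_methods": [("Point", "x")],
    "remove_vars": [("Point", "x")],
}
result = apply_diff(classes, diff)
```

Because additions come first and removals last, at no point during the load is the class missing a variable or method that existing code might still need.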
What is this scenario? Does it break the following load ordering?
- add all new inst and class vars to existing classes (avoiding deleting any that need to be deleted until later)
- add all new classes
The original innovation of the 1b loader was to begin by defining classes using a union of the old and new instVar lists, in an attempt to keep as much working as possible. So yes, that implements your suggestion above.
The old loader would just load the new class definition, and if that compilation failed, try again later, in the hope that any failing methods would be removed in the remove-methods step.
- add all new methods
- change all to be changed methods
- delete all to be deleted methods
- remove all inst and class vars from existing classes (deferred from step 1)
So basically, your scheme is what the 1b loader was originally designed to do.
However, it looks like an extra "remove method" step crept in at some later point in time, probably to fix one of the other conflicting bug scenarios, and this is what is causing the problem.
What would have been most useful (and I didn't know how to achieve this) would be a simple hook to enable new methods to be compiled against the fictional, not-yet-installed future class definition, but then installed in the old class, just before committing the new class definition. This would not be full atomic loading, but it would yield a much less quirky standard loader.
Keith
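Keith's union-of-instVars trick can be sketched in a few lines. Again a Python illustration only, with made-up names, not the actual MCPackageLoader1b code:

```python
def load_with_union(old_vars, new_vars, install):
    """Sketch of the '1b' idea: first widen the class to the union of old and
    new instVars, so both old and new methods keep compiling during the load;
    commit the final (narrowed) definition only once all methods are installed."""
    union = list(dict.fromkeys(old_vars + new_vars))  # order-preserving union
    install(union)      # phase 1: widened definition, nothing breaks mid-load
    # ... all changed methods would be compiled here, against the widened class ...
    install(new_vars)   # phase 2: commit the final definition
    return union

# Record each intermediate class definition the loader would install.
installed = []
load_with_union(["origin", "corner"], ["origin", "extent"], installed.append)
```

The widened definition means a method referencing either the old `corner` or the new `extent` compiles at every point in the load; only the final commit drops the obsolete variable.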
On Thu, Jan 28, 2010 at 5:42 PM, keith keith_hodges@yahoo.co.uk wrote:
What is this scenario? Does it break the following load ordering?
- add all new inst and class vars to existing classes (avoiding deleting
any that need to be deleted until later)
- add all new classes
The original innovation of the 1b Loader was to begin by defining classes using a union of the old and new instVars lists, in an attempt to keep as much working as possible. So yes, that implements your suggestion above.
The old loader would just load the new class definition, and if that compilation failed, try again later in the hope that any failing methods were removed in the remove method step.
That's broken. It should compute the correct definition and not catch compilation errors. A loader that hopes is hopeless ;)
- add all new methods
- change all to be changed methods
- delete all to be deleted methods
- remove all inst and class vars from existing classes (deferred from step 1)
So basically your scheme is what the 1b loader was originally designed to do.
However it looks like an extra "remove method" step has crept in at some later point in time, probably to fix one of the other conflicting bug scenarios, and this is what is causing the problem.
I know it's painful but good tests here are hugely valuable (albeit rather tedious to write).
What would have been most useful (and I didn't know how to achieve this) would be a simple hook to enable new methods to be compiled against the fictional, not-yet-installed future class definition, but then installed in the old class, just before committing the new class definition. This would not be full atomic loading, but it would yield a much less quirky standard loader.
I don't think so. What is useful is a well-defined readable implementation of the above load order. Bells and whistles and convoluted code are what's _not_ needed.
Keith
On 29 January 2010 19:05, Eliot Miranda eliot.miranda@gmail.com wrote:
I don't think so. What is useful is a well-defined readable implementation of the above load order. Bells and whistles and convoluted code are what's _not_ needed.
+1.
The loading logic should be clearly defined and straightforward.
Keith
Igor Stasenko wrote:
I don't think so. What is useful is a well-defined readable implementation of the above load order. Bells and whistles and convoluted code are what's _not_ needed.
+1.
The loading logic should be clearly defined and straightforward.
And commented. I must be the only person in the world who thinks that this loading stuff isn't trivial and deserves a bit of an outline as to what exactly happens in which order and why.
Cheers, - Andreas
However it looks like an extra "remove method" step has crept in at some later point in time, probably to fix one of the other conflicting bug scenarios, and this is what is causing the problem.
I know it's painful but good tests here are hugely valuable (albeit rather tedious to write).
Totally agree, unfortunately I didn't understand the tests at the time.
Keith
On Fri, Jan 29, 2010 at 2:42 AM, keith keith_hodges@yahoo.co.uk wrote:
What would have been most useful (and I didn't know how to achieve this) would be a simple hook to enable new methods to be compiled against the fictional, not-yet-installed future class definition, but then installed in the old class, just before committing the new class definition. This would not be full atomic loading, but it would yield a much less quirky standard loader.
Doesn't the FileContentsBrowser do this using PseudoClass etc?
The class comment says: "I use PseudoClass, PseudoClassOrganizers, and PseudoMetaclass to model the class structure of the source file."
Karl
Why? All that is needed is to be able to run the same tests on all forks.
So when Magma, written in Squeak, requires one variant with complex facilities such as remote invocation of images, and Seaside, written in Pharo, requires another, the integrator who wishes to test both in one image may find irreconcilable differences. Not all testing code uses the lowest common denominator.
So what will happen is that multiple variants of SUnit will exist in a creative tension, to the extent that evolving any of them will become virtually impossible.
A trivial example: I prefer that shouldInheritSelectors be specified explicitly; most implementations set it automatically for abstract classes. An "improvement" as simple as this will never happen. Another trivial example: there are no users of LongTestCase in the Squeak image, and a general test categorisation mechanism would provide the same facility. Write one test case that requires LongTestCase, and you force me to remain compatible.
What is so wrong with treating SUnit as a loadable package, with maintainers and conversations to discuss its future, so that it may actually evolve? You seem to think it is a bad thing.
Keith
p.s. I think Cuis will be great for Squeak, because...
1. As long as it loads in Cuis, it will load in most places.
2. The Cuis versions are likely to be simpler than others.
keith wrote:
Why? All that is needed is to be able to run the same tests on all forks.
So when Magma, written in Squeak, requires one variant with complex facilities such as remote invocation of images, and Seaside, written in Pharo, requires another, the integrator who wishes to test both in one image may find irreconcilable differences. Not all testing code uses the lowest common denominator.
I see. So, there are actually several versions of SUnit maintained as external packages by different teams? Didn't know about that... If those external packages already exist and have maintainers, I have nothing against that.
So what will happen is that multiple variants of SUnit will exist in a creative tension, to the extent that evolving any of them will become virtually impossible.
A trivial example: I prefer that shouldInheritSelectors be specified explicitly; most implementations set it automatically for abstract classes. An "improvement" as simple as this will never happen. Another trivial example: there are no users of LongTestCase in the Squeak image, and a general test categorisation mechanism would provide the same facility. Write one test case that requires LongTestCase, and you force me to remain compatible.
Hey, I'll never force you to do anything at all.
What is so wrong with treating SUnit as a loadable package, with maintainers and conversations to discuss its future, so that it may actually evolve? You seem to think it is a bad thing.
Not at all. I just didn't know those packages and their teams actually exist.
Keith
p.s. I think Cuis will be great for Squeak, because...
- as long as it loads in Cuis, it will load in most places.
- The Cuis versions are likely to be simpler than others.
I'm not that sure about either of those, but you might be right. Just please keep in mind that in the "Cuis Manifesto", or whatever it should be called, I say: "This means that there are no guarantees of compatibility between Cuis and anything else, including the various releases and derivatives of Squeak, or even other releases of Cuis itself."
Cheers, Juan Vuletich
Josh wrote:
I don't understand why this is true. In another post you've described the comprehensive build/test framework that you currently use, and I observed that I can't see any reason why you can't run it against the bi-monthly trunk image releases (or, for that matter, against nightly trunk images that you automatically update).
The irony being that I can't use my build system to build my own images, because they are Pier-based, and Pier keeps its data in the image.
Keith
Sure, and then you object file-out the Pier data, like I do for WikiServer on the iPhone, kill the VM, start the VM with the read-only image, and read the Pier data back in.
http://www.mobilewikiserver.com/Wiki_ImportExport.html
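The export/kill/import cycle John describes can be sketched in Python as a rough analogue: serialize the application data out of the old environment, rebuild a fresh one, then read the data back in. The file name and data shape are invented for illustration; the real mechanism in Squeak is an object file-out, not pickle.

```python
import os
import pickle
import tempfile

def export_data(app_data, path):
    """File out the application data before killing the old image."""
    with open(path, "wb") as f:
        pickle.dump(app_data, f)

def import_data(path):
    """Read the data back into the freshly built image."""
    with open(path, "rb") as f:
        return pickle.load(f)

# usage: export before the rebuild, import into the fresh environment
path = os.path.join(tempfile.mkdtemp(), "pier-data.bin")
export_data({"pages": ["Home", "About"]}, path)
fresh_image_data = import_data(path)
```

The design point is that the data outlives the image, so the image itself can be rebuilt from scratch by the build system.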
On 2010-01-25, at 8:25 AM, keith wrote:
Josh wrote:
I don't understand why this is true. In another post you've described the comprehensive build/test framework that you currently use, and I observed that I can't see any reason why you can't run it against the bi-monthly trunk image releases (or, for that matter, against nightly trunk images that you automatically update).
The irony being that I can't use my build system to build my own images, because they are Pier-based, and Pier keeps its data in the image.
Keith
-- =========================================================================== John M. McIntosh johnmci@smalltalkconsulting.com Twitter: squeaker68882 Corporate Smalltalk Consulting Ltd. http://www.smalltalkconsulting.com ===========================================================================
"Nicolas" == Nicolas Cellier nicolas.cellier.aka.nice@gmail.com writes:
Nicolas> Since I don't see many requests in this list, shall I conclude that
Nicolas> either Squeak-trunk is not used for application dev. or that there is
Nicolas> no major compatibility problem?
Why is this viewed so much different from every other open source project?
Do you keep installing the daily release of the Linux Kernel on your server machines? No!
You wait for a major release of Fedora, or Debian, or Ubuntu, which happens every six months or year.
You port your code to the new release, with hopefully only a few problems.
You then deploy *that* release with your code.
Squeak is following a similar model... if you have production code, keep using 3.9 or 3.10 or 3.10.2 until the next *Squeak* release, which will be 4.0 within a few weeks (same as 3.10.2 precisely) and then 4.1 shortly after (which will be a frozen and vetted version of trunk), and then 4.2 maybe a year later.
If you are a *developer* of Squeak itself, follow trunk, propose changes to Inbox, and perhaps become a committer to trunk yourself after enough good patches.
THIS IS THE SAME MODEL AS EVERY OTHER SUCCESSFUL OPEN SOURCE PROJECT.
Keith continues to make it into something it isn't. We're doing *what works*.
If you are a *developer* of Squeak itself, follow trunk, propose changes to Inbox, and perhaps become a committer to trunk yourself after enough good patches.
THIS IS THE SAME MODEL AS EVERY OTHER SUCCESSFUL OPEN SOURCE PROJECT.
Do you recall that in the last 10 years there was discovered such a thing as Extreme Programming? This is a different model from the one you are using.
Keith continues to make it into something it isn't. We're doing *what works*.
May I point out again that it doesn't work for me. My patches to trunk would be fixes for Monticello. These fixes would break trunk, therefore it doesn't work as you suppose.
Keith
On 24 Jan 2010, at 15:28, keith wrote:
If you are a *developer* of Squeak itself, follow trunk, propose changes to Inbox, and perhaps become a committer to trunk yourself after enough good patches.
THIS IS THE SAME MODEL AS EVERY OTHER SUCCESSFUL OPEN SOURCE PROJECT.
Do you recall that in the last 10 years there was discovered such a thing as Extreme Programming? This is a different model from the one you are using.
Keith continues to make it into something it isn't. We're doing *what works*.
May I point out again that it doesn't work for me. My patches to trunk would be fixes for Monticello. These fixes would break trunk, therefore it doesn't work as you suppose.
Keith
Also, Squeak and Linux are different: one is source-based, the other isn't.
Squeak has a long history, and the problem is not, and never has been, bunging out another release; Edgar managed it all on his lonesome!
The new problem on the block is reducing the propensity, the need, and the drive for forking within the community.
How exactly does trunk solve that? If you can answer that convincingly, I will then be convinced that trunk is part of the solution and not part of the problem.
regards
Keith
Randal wrote:
If you are a *developer* of Squeak itself, follow trunk, propose changes to Inbox, and perhaps become a committer to trunk yourself after enough good patches.
THIS IS THE SAME MODEL AS EVERY OTHER SUCCESSFUL OPEN SOURCE PROJECT.
I suggest you read the documentation for Bazaar and Mercurial.
Quoting from: http://doc.bazaar.canonical.com/latest/en/user-guide/organizing_branches.htm...
Each new feature or fix is developed in its own branch. These branches are referred to as feature branches or task branches - the terms are used interchangeably.
To create a task branch, use the branch command against your mirror branch. For example:
bzr branch trunk fix-123 cd fix-123 (hack, hack, hack) There are numerous advantages to this approach:
- You can work on multiple changes in parallel
- There is reduced coupling between changes
- Multiple people can work in a peer-to-peer mode on a branch until it is ready to go. In particular, some changes take longer to cook than others, so you can ask for reviews, apply feedback, ask for another review, etc.
By completing work to sufficient quality in separate branches before merging into a central branch, the quality and stability of the central branch are maintained at a higher level than they otherwise would be.
regards
Keith
2010/1/24 keith keith_hodges@yahoo.co.uk:
If you are a *developer* of Squeak itself, follow trunk, propose changes to Inbox, and perhaps become a committer to trunk yourself after enough good patches.
THIS IS THE SAME MODEL AS EVERY OTHER SUCCESSFUL OPEN SOURCE PROJECT.
Do you recall that in the last 10 years there was discovered such a thing as Extreme Programming? This is a different model from the one you are using.
Keith continues to make it into something it isn't. We're doing *what works*.
May I point out again that it doesn't work for me. My patches to trunk would be fixes for Monticello. These fixes would break trunk, therefore it doesn't work as you suppose.
Strange, then, to call it a fix - fixes usually don't break existing stuff. Call it a replacement, a rewrite, or an upgrade, but not a 'fix'.
Keith
May I point out again that it doesn't work for me. My patches to trunk would be fixes for Monticello. These fixes would break trunk, therefore it doesn't work as you suppose.
Strange, then, to call it a fix - fixes usually don't break existing stuff. Call it a replacement, a rewrite, or an upgrade, but not a 'fix'.
I think AGAIN you are missing the point completely.
Monticello can't load itself for anything other than the most minor of changes.
Keith
2010/1/24 keith keith_hodges@yahoo.co.uk:
May I point out again that it doesn't work for me. My patches to trunk would be fixes for Monticello. These fixes would break trunk, therefore it doesn't work as you suppose.
Strange, then, to call it a fix - fixes usually don't break existing stuff. Call it a replacement, a rewrite, or an upgrade, but not a 'fix'.
I think AGAIN you are missing the point completely.
Monticello can't load itself for anything other than the most minor of changes.
See my other reply. MC can load code, which can do anything with MC.
Keith
On 24 Jan 2010, at 18:55, Igor Stasenko wrote:
2010/1/24 keith keith_hodges@yahoo.co.uk:
May I point out again that it doesn't work for me. My patches to trunk would be fixes for Monticello. These fixes would break trunk, therefore it doesn't work as you suppose.
Strange, then, to call it a fix - fixes usually don't break existing stuff. Call it a replacement, a rewrite, or an upgrade, but not a 'fix'.
I think AGAIN you are missing the point completely.
Monticello can't load itself for anything other than the most minor of changes.
See my other reply. MC can load code, which can do anything with MC.
I look forward to seeing trunk with MC1.5 then, cool
Keith
2010/1/24 keith keith_hodges@yahoo.co.uk:
On 24 Jan 2010, at 18:55, Igor Stasenko wrote:
2010/1/24 keith keith_hodges@yahoo.co.uk:
May I point out again that it doesn't work for me. My patches to trunk would be fixes for Monticello. These fixes would break trunk, therefore it doesn't work as you suppose.
Strange, then, to call it a fix - fixes usually don't break existing stuff. Call it a replacement, a rewrite, or an upgrade, but not a 'fix'.
I think AGAIN you are missing the point completely.
Monticello can't load itself for anything other than the most minor of changes.
See my other reply. MC can load code, which can do anything with MC.
I look forward to seeing trunk with MC1.5 then, cool
If you remember, I already raised this topic here and on the Pharo list. The problem is that (as you point out in another thread about capturing knowledge) there's no one left with enough knowledge of MC 1.5 to do such an integration.
Keith
Igor Stasenko wrote:
2010/1/24 keith keith_hodges@yahoo.co.uk:
I look forward to seeing trunk with MC1.5 then, cool
If you remember, I already raised this topic here and on the Pharo list. The problem is that (as you point out in another thread about capturing knowledge) there's no one left with enough knowledge of MC 1.5 to do such an integration.
We did the experiment earlier:
http://lists.squeakfoundation.org/pipermail/squeak-dev/2009-October/140345.h...
The problem is that MC 1.5 and 1.6 appear to be buggy in various areas and don't appear ready for prime time.
Cheers, - Andreas
On 24 Jan 2010, at 19:19, Andreas Raab wrote:
The problem is that MC 1.5 and 1.6 appear to be buggy in various areas and don't appear ready for prime time.
I have been using them for years now without trouble.
I don't know how you cope with the old MC. I can't use it: it loses overrides, and you have to maintain a separate image for every package you contribute to.
MC 1.6 would be less buggy if its traits support were finished.
Keith
On Sun, 24 Jan 2010, keith wrote:
If you are a *developer* of Squeak itself, follow trunk, propose changes to Inbox, and perhaps become a committer to trunk yourself after enough good patches.
THIS IS THE SAME MODEL AS EVERY OTHER SUCCESSFUL OPEN SOURCE PROJECT.
Do you recall that in the last 10 years there was discovered such a thing as Extreme Programming? This is a different model from the one you are using.
Keith continues to make it into something it isn't. We're doing *what works*.
May I point out again that it doesn't work for me. My patches to trunk would be fixes for Monticello. These fixes would break trunk, therefore it doesn't work as you suppose.
So these are not patches to trunk.
Levente
Keith
2010/1/24 Levente Uzonyi leves@elte.hu:
May I point out again that it doesn't work for me. My patches to trunk would be fixes for Monticello. These fixes would break trunk, therefore it doesn't work as you suppose.
So these are not patches to trunk.
And besides, MC CAN be updated using MC. We already have a number of entry points for this:
- use SomeClass>>initialize
- use image startup/shutdown phases
- use a package script
Keith, you can look at how we managed to replace compiled-method trailer handling without needing a .cs file, just by loading a set of MC packages.
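The entry points Igor lists amount to a post-load hook pattern: after a package's code is installed, the loader runs each new class's class-side initialize and any package postscript, so the loaded code can patch anything, including the loader itself. Here is a toy sketch of that pattern; `Loader`, `load_package`, and the dict-based classes are invented names, not the real Monticello machinery.

```python
class Loader:
    def __init__(self):
        self.classes = {}

    def load_package(self, classes, postscript=None):
        """Install code first, then run the package's hooks."""
        self.classes.update(classes)
        # entry point 1: class-side initialize, run once after install
        for cls in classes.values():
            init = cls.get("initialize")
            if init:
                init(self)
        # entry point 2: the package postscript script
        if postscript:
            postscript(self)

loader = Loader()
log = []
loader.load_package(
    {"MCPatch": {"initialize": lambda env: log.append("patched MC")}},
    postscript=lambda env: log.append("postscript ran"),
)
```

Because the hooks receive the live environment, a package loaded through MC can rewrite MC itself as its final step, which is Igor's point.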
Levente
Keith
May I point out again that it doesn't work for me. My patches to trunk would be fixes for Monticello. These fixes would break trunk, therefore it doesn't work as you suppose.
So these are not patches to trunk.
I have done my damnedest to get people to manage packages like Monticello and SUnit externally, but it doesn't work.
Monticello is, as far as I know, maintained in trunk. It shouldn't be; it should be integrated in from elsewhere.
None of my changes are developed in the kernel. Everything I do is a loadable, optional thing.
Keith
On Jan 24, 2010, at 8:08 AM, Randal L. Schwartz wrote:
Why is this viewed so much different from every other open source project?
Do you keep installing the daily release of the Linux Kernel on your server machines? No!
You wait for a major release of Fedora, or Debian, or Ubuntu, which happens every six months or year.
One exception, of course, is bug and security fixes. I *want* to keep my production CentOS box up to date with bug and security fixes. I understand that those changes are going to come on their own timeframe. I do *not* necessarily want to track every new release of the OS.
That, in my view, is a big drawback to the monolithic trunk update stream: it mixes (for example) important bug fixes with (for example) updates and extensions to browsers.
So it would be nice if we had some code/tool/community support for dispersing code broadly to as many "forks" as possible, where "fork" means not just things like Pharo and Cuis but also older versions of Squeak and my little Squeak/Seaside image running on my CentOS box where I want to load as little code as possible so as to lower the chance of breaking something.
David
On 24 Jan 2010, at 06:44, Friedrich Dominicus wrote:
Josh Gargus josh@schwa.ca writes:
The "community" doesn't want only one thing, and different people in it want different things to different degrees. I don't dispute that what you have described above is desirable, in principle, to the vast majority of community members. However, it is fundamentally at odds with other goals that various community members hold dear. A balance must be struck.
You are right, but let's put it this way:
- how many of you actively work on the kernel?
- how many of you use it for application development?
I would be surprised to see a ratio much higher than 1:10,000 or even 1:100,000 (kernel dev/application dev).
As I understand Keith's posting, he's mainly an application developer, and so it's clear that he does not like to re-write his code over and over again (for whatever good/bad technical reason).
It's worse than that.
When there are too many packages, all moving targets, being written on too many differing kernels, themselves also moving targets, at some point the task of building an application and maintaining it becomes virtually impossible.
Suddenly there comes a point where the only choice you have is to fork everything!
This is a very hard choice to make if you are not good enough, or you don't have the time to maintain everything.
Of course the gurus Lukas, Andreas and Stéphane don't have this problem, so they apparently don't see a need.
There's IMHO no better way to drive away people but to break their code over and over again...
Amen, Amen and Amen.
Here's a very specific example. I would like to see more integrated support for concurrent programming in the Squeak kernel. Toward that end, I've added a trivial implementation of "promises" to the trunk (hopefully, I'll take it further relatively soon... one of the things I've done in the interim was to re-read Mark Miller's dissertation).
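A promise, in the sense Josh mentions, is just a placeholder for a value that will arrive later; readers block (or chain callbacks) until it is resolved. The following is a minimal Python sketch of that idea, purely for illustration; it is not Josh's Squeak implementation, and the class and method names are invented.

```python
import threading

class Promise:
    """A trivial single-assignment promise: resolve once, read many."""

    def __init__(self):
        self._event = threading.Event()
        self._value = None

    def resolve(self, value):
        self._value = value
        self._event.set()  # wake up anyone blocked in value()

    def value(self):
        self._event.wait()  # block until resolved
        return self._value

# usage: one thread resolves, another blocks on the result
p = Promise()
threading.Thread(target=lambda: p.resolve(42)).start()
result = p.value()  # blocks until the worker thread resolves it
```

Real promise systems (e.g. in Mark Miller's E language, which the dissertation Josh mentions describes) add pipelining and error propagation on top of this basic shape.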
Well, so you are interested in another thing, and so you probably do not see the points of Keith's mails.
Regards Friedrich
Josh,
So, your implementation of futures, that sounds useful. My images are all based upon 3.10, so would you be so kind as to package up your implementation in a form that I can actually use in my images? A change set that is loadable into 3.10 would be good enough; if you did this then Edgar would use it too, I am sure. You see, then I can use your API with my current code base. When the time comes to move my code base to 3.11, the transition will be a smooth one.
thanks in advance
Keith
2010/1/24 keith keith_hodges@yahoo.co.uk:
Keith, you know very well your main options:
1) you take a trunk/pharo/cuis image then reload all your stuff (LPF/Installer/Sake/Package SHOULD be helpful). You may report compatibility problems in the relevant list.
2) you start with an unmaintained image from the past, transform MC diffs into change sets and cherry-pick, but you're on your own.
I know your problems very well. I played option 2) for several years when releasing a commercial product in VW. I know the costs very well, and I know you are in trouble with this path.
Not many experienced people are still supporting option 2). Juan still uses it because he wants very fine-grained control. The fact that Cuis is minimal may help, but he knows how much time it costs. Sorry, but I doubt you'll attract many people to that path.
Were I to develop an app today, no doubt it would be with option 1). Installer/Sake could be used for option 1) as well, couldn't it? You have to think about this option too. That's how I see things with option 1):
- you select an image when you think the added value is worth the effort (like faster file I/O)
- you load and run specific Kernel tests expressing the essential expectations of your application; here, you can have a go/no-go decision. It can also be a maybe: you send requests to developers or propose your own changes in trunk.
- you play your scripts to load your application
- you run your application tests and ask for support on regressions and incompatibilities.
Moreover, with Bob you could automate the tests and automatically know whether trunk updates are breaking things or not. So the question of an officially "released" image is less relevant. If you really want this process, adopt Pharo; they are close to 1.0 (without faster streams and other improvements, of course...).
Cheers.
Nicolas
Keith, you know very well your main options:
- you take a trunk/pharo/cuis image then reload all your stuff
(LPF/Installer/Sake/Package SHOULD be helpful). You may report compatibility problems in relevant list.
yep, correct.
- you start with an unmaintained image from the past, transform MC
diffs into change sets and you cherry pick, but you're on your own.
yep correct again.
So points to conclude from this.
a. Everyone is on their own.
b. You can't really start doing either until 3.11 is released. (So a bi-monthly release cycle would be a better thing than a bi-annual one; I am still waiting for the process that is going to get us a bi-monthly release, tested and documented.)
c. If you do start doing it now, you have to continuously track trunk. If you use packages from Pharo, you probably have to continually track Pharo as well!
d. For the packages that you publish which have existing users, you either have to maintain a dual codebase, one for the old and one for the new base images, or you have to restrict yourself to the lowest common denominator. (If you are forced to do this all the time, then there is no point in moving to a better API.)
I know very well your problems. I played option 2) for several years when releasing a commercial product in VW. I know very well the costs, and I know you are in trouble with this path.
You are making the mistake of assuming that options 1 and 2 are mutually exclusive; they are not.
You only have to back-port essential API changes. (For example, if Pharo published Author as a loadable package that would load into Squeak, then a load of compatibility problems would be sorted.)
Once you have back-ported the API, you can move your own code base forward, and continue to support other users still stuck in the past by offering them the backport patch.
Not that many experienced people are still supporting option 2). Juan still uses it because he wants very fine-grained control. The fact that Cuis is minimal may help, but he knows how much time it costs. Sorry, but I doubt you'll attract many people on that path.
And the evidence we have is that purely relying on option 1 leaves a trail of forks behind it (Etoys, Sophie, Cobalt, and now soon to be an orphan "beach"), for which the cost of porting up to the front is 3 months of unproductive work. Secondly, there are now two fronts to port up to, so which do you choose?
All that is needed is to facilitate a migration path from 3.x to 3.x+1. Note I didn't say you need to provide that migration path, just facilitate it.
You facilitate it by packaging separate innovations separately up front (like we did on Mantis, but on a less granular scale). This is the basis of the packaging model, just applied to features of the kernel. Then you build your releases by integrating innovations on a regular basis.
Then the knowledge you need to backport an innovation is there if you need it. No cherry-picking is needed. Secondly, you are not on your own, because if you extend the innovation to provide a backport to, say, 3.10, Edgar can come along and contribute his knowledge (by conceptually subclassing your work) and extend it to work for "Minimal" too.
So the model proposed is: you innovate in the latest release, in your context, but you don't publish your innovation in such a way as to restrict its use to only your context (that would normally be called a changeset (cs), or a deltastreams delta (ds)). You allow others to retarget your innovation to their context, and you provide a place where all users of that innovation pool their knowledge as to how to apply your innovation to all the different contexts. This is what Sake/Tasks is for: you provide your innovation as one Task, with dependencies and pre-requisites, and others can subclass that Task to tweak it for application to other contexts, which have other dependencies and pre-requisites.
Sake/Packages does this for packages. It collects the data as to what is needed to load a package into Squeak, Pharo, and any other contexts you care to add. However, it does this not just for the current releases, but also for past unmaintained releases. The squeaksource.com/Packages repository becomes a shared resource defining exactly what loads where. If, for example, I want to load a package into a released version of Minimal, then I can add the fix to the dependencies in the Minimal package definitions. At a later date I hope I can persuade Edgar to incorporate the patch in Minimal, but if there are any users of the old Minimal release then they will still need the patch.
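To make the subclassing idea above concrete, here is a rough Smalltalk sketch of how one innovation task and a retargeted subclass might look. All class names and selectors below are invented for illustration; they are not the actual Sake/Tasks API:

```smalltalk
"Illustrative sketch only: class names and selectors are invented
 for explanation and are not the real Sake/Tasks API."
Object subclass: #InnovationTask
    instanceVariableNames: ''
    classVariableNames: ''
    category: 'Sketch-Tasks'.

InnovationTask >> prerequisites
    "Answer the tasks that must be completed before this one."
    ^ #()

InnovationTask >> run
    "Run prerequisites for this context, then perform the load."
    self prerequisites do: [ :each | each run ].
    self load

InnovationTask >> load
    self subclassResponsibility.

"A concrete innovation, published against the latest release..."
InnovationTask subclass: #FasterStreamsTask.
FasterStreamsTask >> load
    "...load the faster-streams code here, e.g. from a repository."

"...and someone else retargets it to another context by subclassing,
 adding whatever extra prerequisites that context needs (here a
 hypothetical CompilerBackportTask for an older image)."
FasterStreamsTask subclass: #FasterStreamsFor310Task.
FasterStreamsFor310Task >> prerequisites
    ^ super prerequisites , { CompilerBackportTask new }
```

The point of the sketch is only the shape: the dependency knowledge lives in ordinary classes, so retargeting an innovation to another image is a subclass, not a rewrite.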
Would I develop an app today, no doubt it would be with option 1. Installer/Sake could be used for option 1 as well, couldn't it? You
Yes this is what I use, and have used since 2006.
1. However, Pier keeps data in the image, so the move is not routine.
2. Trunk is still moving and is alpha, so it will not be ready for at least 6 months, and I won't have the time for perhaps a year to do it (so trunk remains irrelevant to me for all practical purposes until about a year or more hence).
3. I am not just porting an application to latest; I am supporting packages that have users in many images. So while I may port my stuff to latest, I still have to support my users in 3.x-1. So I end up having to follow option 2 anyway, even to achieve option 1.
The bob process would only integrate release-quality, finished innovations into the final release. Conceptually there would never be an alpha release; there is always a stable release X, the previous stable release X-1, and the 100 or so deltas that move X-1 to X.
have to think about this option too. That's how I see things with option 1:
- you select an image when you think added value is worth the effort
(like faster file I/O)
- you load and run specific Kernel tests expressing the essential expectations of your application; here you can have a go/no-go decision. It can also be a maybe: you send requests to developers or propose your own changes in trunk.
It's a bit late, since I won't be starting this until 6 months after trunk is released.
- you play your scripts to load your application
- you run your application tests and ask for support on regressions and incompatibilities.
But meanwhile I still have to support and maintain the X-1 release. So I still have to back-port patches to X-1.
We have a mechanism for patching the current release (it's called the release team), but we don't have a mechanism for patching the X-1 release for my existing package users. (That is what LPF was invented for, but if no one uses LPF....)
Moreover, with bob you could automate the tests and automatically know if trunk updates are breaking things or not.
yep you could.
So the question of an officially "released" image is less relevant. If you really want this process, adopt Pharo, they are close to 1.0
That's what I was actually thinking of doing, until I tried Pharo and discovered they had removed the PackagePaneBrowser. I find OB unusable.
(without faster streams and other improvements, of course...).
Cheers.
Nicolas
Keith
On Jan 23, 2010, at 10:44 PM, Friedrich Dominicus wrote:
Josh Gargus josh@schwa.ca writes:
The "community" doesn't want only one thing, and different people in it want different things to different degrees. I don't dispute that what you have described above is desirable, in principle, to the vast majority of community members. However, it is fundamentally at odds with other goals that various community members hold dear. A balance must be struck.
You are right, but let's ask it this way:
- how many of you actively work on the "Kernel"?
- how many of you use it for application development?
I would be surprised to see a ratio much higher than 1:10,000 or even 1:100,000 (kernel dev / application dev).
As I understand Keith's posting, he's mainly an application developer, and so it's clear that he does not like to rewrite his code over and over again (for whatever good/bad technical reason).
I can just tell you a story from Eiffel wonderland, where this ratio surely was much more in favour of "application" developers. One development team in Eiffel broke old code with nearly every "minor" update. This means software once written and "working" just stops. If you have ever encountered that, you surely will understand Keith's points very well.
There's IMHO no better way to drive people away than to break their code over and over again...
Fair enough. These are eminently valid concerns. I'm not yet convinced that the trunk approach is doomed to break everything, or that Keith's approach is necessarily any better. However, I do want to end this smoldering flame war once and for all. So, I guess we're going to have to get to the bottom of this...
Here's a very specific example. I would like to see more integrated support for concurrent programming in the Squeak kernel. Toward that end, I've added a trivial implementation of "promises" to the trunk (hopefully I'll take it further relatively soon... one of the things I've done in the interim was to re-read Mark Miller's dissertation).
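For readers who haven't met the idea, the essence of a promise can be sketched in a few lines of Smalltalk. This is only an illustration of the concept, with invented class names; it is not the code that went into trunk:

```smalltalk
"Minimal promise sketch (illustrative only): the block is evaluated
 in a background process, and #value blocks the caller until the
 result is available."
Object subclass: #SimplePromise
    instanceVariableNames: 'result semaphore'
    classVariableNames: ''
    category: 'Sketch-Concurrency'.

SimplePromise class >> on: aBlock
    ^ self new fulfillWith: aBlock

SimplePromise >> fulfillWith: aBlock
    semaphore := Semaphore new.
    [ result := aBlock value.
      semaphore signal ] fork

SimplePromise >> value
    "Wait for fulfilment, then answer the result."
    semaphore wait.
    semaphore signal.  "re-signal so later waiters also get through"
    ^ result
```

For example, `(SimplePromise on: [ 30 factorial ]) value` forks the computation immediately but only blocks the sender when the answer is actually demanded.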
Well so you are interested in another thing.
Nothing wrong with that, right?
The goals of my post were as follows: - to establish clearly that compatibility is not the only thing that the community cares about (it also cares about "progress") - to determine whether Keith acknowledges this fact - if so, to determine whether his approach may address the issue in some way that I missed
Well, so you probably do not see the points of Keith's mails.
But I do, at least regarding the outcomes that he hopes to avoid. I'm just not convinced that his proposal is the right way to go, but I'm making an honest effort to understand the issues.
Cheers, Josh
Regards Friedrich
-- Q-Software Solutions GmbH; Sitz: Bruchsal; Registergericht: Mannheim Registriernummer: HRB232138; Geschaeftsfuehrer: Friedrich Dominicus
Hello Again Josh,
The goals of my post were as follows:
- to establish clearly that compatibility is not the only thing that
the community cares about (it also cares about "progress")
- to determine whether Keith acknowledges this fact
We were clearly told by the board years ago that stellar progress was going to come in Squeak 5.0. In fact, Squeak 5.0 would have more progress than you could shake a stick at. They were so confident of this fact that at one point they cancelled 3.x development altogether.
It is common for open source projects to maintain two branches, the red/blue pills, the blue/pink planes etc.
Squeak 5.0 is the place for progress, 3.x is the place for stability. Simple as.
So as an application developer, I don't want progress that does anything at all to rock the boat; I want stability increases and speed improvements month on month, that is all. Anything else is not progress, it's a pain in the rear.
Fonts and traits I can do without. I have nothing against progress that has been thought about, tested fully, and is optional for me to load. (It's called a package; every innovation can be delivered as a package, even a changeset can be delivered as a loadable package.)
Every innovation in the "3.x stable plane" should be developed, tested, COMPLETELY FINISHED and made loadable into 3.10 (and 3.9) since they are practically the same, so that all legacy code in 3.10 and 3.9 continues to work and there are no surprises.
"trunk" is the pursuit of random "progress": on-the-fly hacking, without thinking in advance, without making the knowledge available in a usable form for anyone who is not in the "trunk" fork, and without a continuous testing framework. Trunk is purposefully a fork away from 3.9 and 3.10, and I can't tell my clients what is coming in one month's time, let alone a year's time.
If you want progress without compatibility, go and nag Craig, who said he would deliver Squeak 5.0 18 months ago. Andreas should have supported, worked with, and annoyed Craig, not me. All of the "trunk" effort should be producing 5.x on top of Spoon, not 3.x.
- to determine whether Keith acknowledges this fact
- if so, to determine whether his approach may address the issue in
some way that I missed
So yes, I think you missed the point of my belief that we are supposed to be supporting Squeak as a professional development product, with a professional attitude.
Currently the attitude is: release the image, forget about it, and move on to the next release, which will probably not be compatible with the previous one and definitely will not have a migration path for you; sure, we might fix some stuff, but if you want to use it, you have to take all the pain of keeping up.
3.10 as a release should be a stable, supported release, with fixes and improvements that do not break compatibility or continuity in 3.11, 3.12, etc. The 3.x team is responsible for providing 3.x-1 users a migration path, and the easiest way to achieve this is to make all 3.x innovations optional loads into 3.x-1. It's not hard; it's just a matter of making the choice not to group-hack.
So when a professional developer starts using 3.10, he is continuously supported, with bug fixes, managed in a bug fix database, and new versions, all of which maintain compatibility.
So the board's first responsibility is to support the existing users of Squeak, by making sure that the maintained version is maintained, and that "progress" occurs within the capabilities of the existing users. I do not have the ability to load closures into 3.10 on my own; this is a serious issue. By not insisting that closures are loadable into a raw 3.10, the board is letting me down.
Secondly they want to make a brand shiny new product, to attract new users with new flashy capabilities. However, it is absolutely stupid to use one as a club to kill the other.
Given that the "trunk" is not providing the migration path, is a year away from being ready for me to use, and offers no ongoing support for me as a 3.10 user, I am very concerned that Squeak was a bad choice to make as a development tool, and that I had the cheek to sit in meetings with clients and say: it's OK, we can develop stuff and it will keep going for years to come.
Keith
On Jan 24, 2010, at 1:50 PM, keith wrote:
Hello Again Josh,
The goals of my post were as follows:
- to establish clearly that compatibility is not the only thing that the community cares about (it also cares about "progress")
- to determine whether Keith acknowledges this fact
We were clearly told by the board years ago that stellar progress was going to come in Squeak 5.0. In fact, Squeak 5.0 would have more progress than you could shake a stick at. They were so confident of this fact that at one point they cancelled 3.x development altogether.
True. I'm not sure if there's a point being made here, or if this is just a lead-in?
It is common for open source projects to maintain two branches, the red/blue pills, the blue/pink planes etc.
Squeak 5.0 is the place for progress, 3.x is the place for stability. Simple as.
So if trunk was renamed 4.0 or 5.0, you'd be happy?
So as an application developer, I don't want progress that does anything at all to rock the boat; I want stability increases and speed improvements month on month, that is all. Anything else is not progress, it's a pain in the rear.
Fonts and traits I can do without. I have nothing against progress that has been thought about, tested fully, and is optional for me to load. (It's called a package; every innovation can be delivered as a package, even a changeset can be delivered as a loadable package.)
Every innovation in the "3.x stable plane" should be developed, tested, COMPLETELY FINISHED and made loadable into 3.10 (and 3.9) since they are practically the same, so that all legacy code in 3.10 and 3.9 continues to work and there are no surprises.
"trunk" is the pursuit of random "progress": on-the-fly hacking, without thinking in advance, without making the knowledge available in a usable form for anyone who is not in the "trunk" fork, and without a continuous testing framework. Trunk is purposefully a fork away from 3.9 and 3.10, and I can't tell my clients what is coming in one month's time, let alone a year's time.
You will never be able to tell your client what's coming in a year, because the work is being done by volunteers.
If you want progress without compatibility, go and nag Craig, who said he would deliver Squeak 5.0 18 months ago. Andreas should have supported, worked with, and annoyed Craig, not me. All of the "trunk" effort should be producing 5.x on top of Spoon, not 3.x.
This doesn't make any sense. Spoon is apparently not coming. Why should that prevent me from having my "progress without compatibility" ;-)
- to determine whether Keith acknowledges this fact
- if so, to determine whether his approach may address the issue in some way that I missed
So yes, I think you missed the point of my belief that we are supposed to be supporting Squeak as a professional development product, with a professional attitude.
Um, wow...
What just happened? I stated that not everyone holds cross-fork compatibility as their highest goal, and asked whether you acknowledge this fact. You responded by saying that I don't seem to understand that your highest goal is to support Squeak as a professional development product. How is it possible for you to read what I've written and say that I don't see that you are primarily concerned with compatibility? I mean, it's all you talk about. I'm looking back through the emails I've written, and my understanding of your general stance of "never break code, ever" shines through everywhere.
Unless...
Maybe you actually mean what you just literally said above: that *we* (including me, Josh) are supposed to be supporting Squeak as a "professional development product", and if that's not what we're doing, we're shirking our duties. Maybe you really are suggesting that I am obliged to support your vision? If so, think again... I'm under no obligation to do what you want. If not, then what on earth are you talking about?
Currently the attitude is: release the image, forget about it, and move on to the next release, which will probably not be compatible with the previous one and definitely will not have a migration path for you; sure, we might fix some stuff, but if you want to use it, you have to take all the pain of keeping up.
If this was my attitude, I wouldn't be spending so much time looking for common ground.
3.10 as a release should be a stable, supported release, with fixes and improvements that do not break compatibility or continuity in 3.11, 3.12, etc. The 3.x team is responsible for providing 3.x-1 users a migration path, and the easiest way to achieve this is to make all 3.x innovations optional loads into 3.x-1. It's not hard; it's just a matter of making the choice not to group-hack.
So when a professional developer starts using 3.10, he is continuously supported, with bug fixes, managed in a bug fix database, and new versions, all of which maintain compatibility.
That's desirable, no doubt. That's why I keep spending time here.
So the board's first responsibility is to support the existing users of squeak, by making sure that the maintained version is maintained, and "progress" occurs within the capabilities of the existing users.
Whoa. Where did that idea come from? Each board member is responsible for doing what they promised in their election platform. For most, supporting existing users is definitely a large part of it. It's only in your head that this is, without question, the number one responsibility.
I do not have the ability to load closures into 3.10 on my own, this is a serious issue. By not insisting that closures are loadable into a raw 3.10 the board is letting me down.
The board cannot insist anything, they can only request. What would they say? Eliot, you better create a closure bootstrap for 3.10, or... what?
Secondly they want to make a brand shiny new product, to attract new users with new flashy capabilities. However, it is absolutely stupid to use one as a club to kill the other.
I'm guessing that you're talking about using the shiny trunk as a way to kill the trusty 3.10. I don't quite understand your analogy though, so I won't say anything further.
Cheers, Josh
Given that the "trunk" is not providing the migration path, is a year away from being ready for me to use, and offers no ongoing support for me as a 3.10 user, I am very concerned that Squeak was a bad choice to make as a development tool, and that I had the cheek to sit in meetings with clients and say: it's OK, we can develop stuff and it will keep going for years to come.
Keith