Not bug tracking (Re: Bug Tracking (Re: Squeak 2.8 "finalization"))

Paul McDonough wnchips at yahoo.com
Tue Nov 14 01:42:33 UTC 2000


The automated tests from Camp Smalltalk may be worth a
look.  IIRC, the ANSI group more or less finished the
first swipe, i.e. there's at least one assertion
tested per selector called out in the ANSI spec.  That
covers a fair bit of ground:  collections, magnitudes,
streams and the like.

I'm not saying that _passing_ all those tests should
necessarily be a 'blessing condition' for Squeak, just
that it'd certainly be an interesting benchmark - and
it's already there for free.
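
To give a flavor, a test for one spec'd selector might look
roughly like the sketch below.  This is only from memory,
not the actual Camp Smalltalk code -- the class name and
category are made up, and it assumes SUnit is loaded:

	"Hypothetical sketch: one SUnit-style test method per
	 ANSI selector, here #add: on OrderedCollection."
	TestCase subclass: #ANSIOrderedCollectionTest
		instanceVariableNames: ''
		classVariableNames: ''
		poolDictionaries: ''
		category: 'ANSI-Tests-Sketch'

	ANSIOrderedCollectionTest>>testAdd
		| c |
		c := OrderedCollection new.
		"add: should answer its argument, and the receiver
		 should include it afterwards."
		self assert: (c add: 42) = 42.
		self assert: (c includes: 42).
		self assert: c size = 1

Multiply that by every selector called out in the spec and
you get a decent smoke test of the base library.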

--- Bijan Parsia <bparsia at email.unc.edu> wrote:
> Hmm. I'm going to condense my answers here.
> 
> On Tue, 14 Nov 2000, Andreas Kuckartz wrote:
> 
> > > to nail down a functional test/QA strategy, both
> technical and social.
> > 
> > Let me suggest using a single public facility to
> > track bugs, such as
> 
> We actually have one...the Squeak list. Its bugs
> and fixes are collected
> on the sqfixes site. Granted this isn't perfect...
> 
> Be that as it may, this, and the Squeak World tour
> are really orthogonal
> to what I wrote about (although there may be overlap
> with both).
> 
> I guess it's mostly procedural: How do we determine
> that a Squeak release is ready to roll as a final
> release? Until now, Dan checked it out and signed
> off on it. It seems to me that he's busy enough,
> and Squeak is complex enough, that sharing the
> burden seems reasonable. Only fixes should go into
> a gamma release, and only those for bugs that
> either break expected functionality for that
> release or are so horrifically painful that we'd be
> horribly embarrassed to leave them in. What are
> some of the things I envision:
> 
> 	1) Browsing around in Scamper.
> 	2) Changing preferences.
> 	3) Various window manipulations.
> 	4) MajorShrink works.
> 	5) InterpreterSimulator works.
> 	6) Various examples work.
> 	7) The PlayWithMes aren't SufferWithMes.
> 	8) Normal work for a week generally seems to go as
> 	   expected on the major platforms.
> 	etc...
> 
> Of course having automated tests for lots of this
> would be nice (and
> certainly part of the Squeak World Tour mandate),
> but that's not going to
> happen for a while (in full glory). A lot of this
> stuff is just someone,
> preferably someone familiar with Squeak or the
> particular bit of Squeak,
> exercising that bit.
> 
> Note that the *sole* goal would be to make sure
> final releases are reasonably ok. This is nothing
> different from what Dan's currently asking, except
> slightly more formalized to ensure timeliness,
> coverage, and sharing of load. Oh, also, instead of
> silence (i.e., if no one says anything it must be
> ok) there would be positive feedback ("I've browsed
> a billion sites with Scamper and it's fine -- well,
> except for the indentation bug I've reported :)").
> 
> Ideally, the final Q/A group would come to include
> the authors of major separate packages as well, who
> could use that time (say a week) simply to check
> that their package runs fine. With their positive
> sign-off, we could even include a list of "tested
> for this release" packages.
> 
> The work wouldn't be that heavy: when a gamma
> ("final candidate") release is announced, you'd
> have to download a fresh copy and do your testing.
> If there's a problem, you'd need to report and/or
> fix it. You'd also have to be ready to retest if
> any fixes deemed necessary came out.
> 
> (For example, I didn't even notice that I'd been
> using an older 2.8 alpha from before the 2.8 gamma
> came out. So here I am, weeks and weeks of not
> testing anything useful :))
> 
> Anyway, if there's interest or this seems
> reasonable, I'll happily
> participate or even (eek!) take some sort of
> charge[1] :)
> 
> Cheers,
> Bijan Parsia.
> 
> [1]The low stress sort!
> 

