Process, harvesting, getting your favorite things in the image

Daniel Vainsencher danielv at netvision.net.il
Tue Mar 11 05:39:54 UTC 2003


Hi Richard. Glad you're looking into this.

"Richard A. O'Keefe" <ok at cs.otago.ac.nz> wrote:
> I was assured that these annotations were reviewer annotations.
Normatively, yes. I don't have an expectation that authors assert them;
I have an expectation that *someone* asserts them before the Harvesters
take a look.

> Not so, NONE of them are such that they could never be meaningfully
> asserted by the original submitter, and most of them are such that
> they should normally be assertable by the original submitter.
Descriptively, quite true. But the last thing I want to do is cause
people to delay or worry about tags before releasing their work. Authors
should worry only about content and quality, and at most about teasing
someone into doing a review.

Of course, if wanting to get quality right makes them keep those tags in
mind, and maybe even use SUnit or SLint themselves, I'll only be happy.
And if they add at least some of the tags, it might speed up the process
for their specific contributions. But I wish to *impose* as little as
possible on authors.

Fair enough?

> I am not saying that these are bad quality assurance attributes,
> far from it.

Glad you like them. Please also join us as a reviewer when a package
seems promising.

> Hannes Hirzel <hannes.hirzel.squeaklist at bluewin.ch> wrote:
> 	[sm]  Small. (Changesets should be under 10k.)
> 
> A	In fact, what is the point of this at all?  This is precisely the
> 	kind of "QA" measurement the computer can do best.
I actually don't care about how the metric is computed. Please
understand that the critical issue is that the metric be available to
the Harvester before he selects the next piece to review. Therefore, it
should be available immediately, without going through any links.
Anything else will pervert the process by forcing the Harvester to start
investing in a changeset before deciding it is worthy of focus. I agree
that, ideally, it should be computed by the sqfixes archive, or added by
Squeak's mail-out-changeset feature, in order to take the burden off the
humans' shoulders.
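To make the point concrete, here is a minimal Smalltalk sketch of such an
automatic check, assuming Squeak's FileStream class; the changeset file
name is hypothetical, and the 10k limit is read literally as 10240 bytes:

	| file size |
	file := FileStream readOnlyFileNamed: 'MyFix.1.cs'.	"hypothetical changeset file"
	size := file size.
	file close.
	Transcript
		show: (size < 10240
			ifTrue: ['small enough for [sm]']
			ifFalse: ['too large for [sm]']);
		cr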

> a	[cd] Changes documented (Reasoning is given that explains every
> 	change made)
> 	This would be an A except that the author might be wrong.
Note that I do expect that often the author will be the one documenting
the changes, but I'd prefer to know that someone else considers the
documentation complete.
 
> A	[sl]  SLint approved (You don't have to do what SLint says (sometimes
> 	it's wrong), but have a good reason why you didn't)
> 	Presumably the author and reviewer might disagree about what's a good
> 	(enough) reason.
Definitely. SLint has many different rules, with different levels of
reliability. After we find out which rules are the troublemakers, we
might somehow deprecate them. I could add silly UI features to SLint
that encode this knowledge.

> 	Sigh.  I guess I'll have to learn about SLint.
Muwahaahahaa!!! Another day, another convert ;-)
I'm curious to hear what you think about it.

> r	[er]  Externally reviewed (Design + code, by someone other than the
> 	author, quite knowledgeable about the package)
> 
> 	Although note that the author *could* very well say at the time of
> 	submission that someone else has reviewed it.
Yes. Having the reviewer's comments directly available and permanently
archived (in the contents of his follow-up mail) would be slightly more
comforting, but it isn't a critical difference.

> A	[et]  Externally tested (Import into a fresh image; generally making
> 	sure it doesn't break anything that uses it; run relevant existing SUnit
> 	tests. (Implies [er] and [cd])).
> 
> 	It's not clear what "external" means here.  As described, there is no
> 	reason why it can't all be done by the author.  
Programmers are notoriously bad testers, even more so of their own code.
They don't enter invalid input, they don't wonder what to do with the
garbled GUI (they know), and they don't exercise the code paths they're
not aware of, and therefore broke, when they made their changes. If the
tester is external, random chance will be on our side. If he's
sufficiently malicious, mean, and wacky, he might even do a good job of
it.

> It is far from clear
> 	why it is said to imply cd.  Does "tested" imply "documented"?
If the tester doesn't know what has changed, how can he possibly test
it? If he doesn't know the author's assumptions, how will he find the
holes in them?

> 	If "external" means "by some other person", once again, there is no
> 	reason why the author could not assert that something had been tested
> 	by a third party _before_ submission.
True; see the previous tag for the caveat.

> A	[su]  Covered by and passes SUnit tests, either included or external. 
> 	Included tests should be described, and external tests should be pointed
> 	to.
A second opinion on what adequate coverage entails is not a bad thing,
though claiming the tag is a good enough way to invite such a review.
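For anyone who hasn't written SUnit tests yet, an included test can be as
small as the following sketch (the class and selector names are
hypothetical, just to show the shape):

	TestCase subclass: #HarvestExampleTest
		instanceVariableNames: ''
		classVariableNames: ''
		poolDictionaries: ''
		category: 'HarvestExample-Tests'

	HarvestExampleTest >> testAnswer
		"Exercise one documented change; HarvestExample stands in for the package under review."
		self assert: HarvestExample new answer = 42

Running it through the SUnit TestRunner (or something like
HarvestExampleTest suite run) then gives the reviewer something concrete
to check against the [su] claim.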

Daniel


