A process proposal for 3.10

Diego Fernandez diegof79 at gmail.com
Fri Oct 20 13:52:01 UTC 2006


I want to add my 2 cents to the discussion.
Keith Hodges' idea sounds good, but building a full-featured test
server may take a big effort. I want to propose a more pragmatic
solution, based on my experience at Mercap.

At Mercap, before integrating changes into the main trunk we run some
tests, which are categorized as follows:

Tests that cover functionality:
     - Unit tests: there are a lot of them (currently 11000+), they
are truly unitary (the Aconcagua/Chalten packages published by
Hernan/Maxi T are examples), and we run them very often during
development
     - "User story" tests: our name for tests focused on "user
stories"; they touch a lot of "modules" in the system just to
cover one "user story".
     - Functional tests: our name for tests that cover the UI. They
are really slow, so we don't run them very often.

Tests for "coding standards" (I'm not sure what to call them in English):
     - "Coding standard" tests: they test how applications are named
(more or less the equivalent of Squeak "packages" or "system
categories"), how classes are named, whether test classes are in the
wrong application, etc.
     - "Code quality" tests: they check a defined set of SmallLint rules
and test coverage (we only check for 100% coverage of some
applications, because keeping this test green can be overkill)
     - "Architecture" tests: they test things like application
dependencies (in Squeak this could be whether an MC package declares
all its dependencies), whether there are invalid references in the
system dictionary, whether all methods are well compiled, etc.
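In Squeak such a check could be a plain SUnit test. A rough sketch, assuming a made-up 'MyApp' package prefix (the MethodReference accessors vary between Squeak versions):

```smalltalk
"A coding-standards test: fail if any method in our own
 packages still sends #halt. 'MyApp' is a hypothetical
 system-category prefix."
CodingStandardsTest >> testNoSendersOfHalt
	| offenders |
	offenders := (SystemNavigation default allCallsOn: #halt)
		select: [:each | each actualClass category beginsWith: 'MyApp'].
	self assert: offenders isEmpty
```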

The names in the taxonomy are somewhat arbitrary, but the important
points to me are:
1. In Squeak the unit tests take too long, so programmers are not
encouraged to run them very often
2. The taxonomy of tests (they are separate test suites) allows
us to run selected parts when necessary
3. The "coding standards" tests are really useful, because they allow
us to catch potential bugs early (a sender of #halt, senders of a
message that is bogus or deprecated, an unimplemented
subclassResponsibility, etc.)
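Point 2 comes almost for free in SUnit: each category can simply be a separate TestSuite built by convention. A sketch, assuming test classes live in system categories named after the suite (the 'MyApp-Tests-Unit' prefix is invented):

```smalltalk
"Build and run only the fast unit suite, picking up every
 TestCase subclass whose system category matches a prefix.
 'MyApp-Tests-Unit' is a hypothetical naming convention."
| suite |
suite := TestSuite named: 'Fast unit tests'.
TestCase allSubclasses do: [:each |
	(each category beginsWith: 'MyApp-Tests-Unit')
		ifTrue: [suite addTests: each buildSuite tests]].
suite run	"answers a TestResult"
```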

Translated to Squeak:
- I think it is necessary to split the tests into different
categories. Unit tests should be truly unitary, and fast. Other tests
that take a lot of time should go in a separate test suite.
- It would be cool to build a tool that does automated package testing on release.
A use case could look more or less like this:

1) A programmer wants to release an MC package, so he goes to the
World Menu and chooses something like a "Package Release Wizard" ;)
2) The "Package Release Wizard" takes the package, runs all the Squeak
unit tests, and then runs some "coding standards" tests on the
package.
3) If the tests pass, it releases the package.
4) The programmer could still release the package by hand even when
the tests fail, but he is aware of this and should comment on it in
the change log.

I think this approach is simpler than the test server.
The "Package Release Wizard" could initially be an object that asks
for the package name and instantiates an abstract test case for that
package.
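As a sketch, the wizard's core could be little more than this (every name below is hypothetical):

```smalltalk
"Hypothetical release flow: run the package's unit and
 coding-standards suites, then save the MC version only
 when the result is green."
PackageReleaseWizard >> releasePackageNamed: aString
	| result |
	result := (self suiteForPackageNamed: aString) run.
	result hasPassed
		ifTrue: [self saveVersionOfPackageNamed: aString]
		ifFalse: [self askManualReleaseFor: aString result: result]
```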

The main problem, to me, is that unit tests in Squeak are not truly
unitary, so the Squeak "programming community" doesn't have the
habit of running them often.

Regards.
Diego.-


On 10/19/06, Ralph Johnson <johnson at cs.uiuc.edu> wrote:
> The release process needs to be more automated, and Keith Hodges' idea
> seems pretty reasonable.
>
> -Ralph Johnson
>
>



More information about the Squeak-dev mailing list