[ANN] BrowseUnit v4 preview: feedback needed

Markus Gaelli gaelli at emergent.de
Fri Mar 5 21:01:09 UTC 2004


Hi Colin,

>> If you have composed method examples/tests, you could enhance 
>> BrowseUnit/XUnit to only run the necessary high level tests, so that
>> all included tests are checked along the way and are not executed 
>> stand alone.
>
> I don't see how this would be an improvement. In the example I posted, 
> each test gets run exactly once. Even if you enhanced SUnit to keep 
> track of which tests had been run as part of a higher-level test, 
> you'd still execute the lower level tests multiple times–once for 
> every high-level test that called them.
So assuming I used the cached version of method examples from the end of 
my last mail, would we then agree?
Assuming also that all caches are flushed at the beginning of a new test 
run?
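To make the cached variant concrete, here is a minimal Python sketch of the idea (all names here are hypothetical illustrations, not the actual BrowseUnit/Squeak code): each tagged method example builds its object at most once per run, so a composed high-level test reuses the lower-level examples instead of recomputing them, and flushing at the start of a run resets everything.

```python
class ExampleCache:
    """Caches the objects returned by tagged method examples."""

    def __init__(self):
        self._cache = {}
        self.computations = 0  # per-run count of examples actually built

    def flush(self):
        """Called at the beginning of every test run."""
        self._cache.clear()
        self.computations = 0

    def example(self, name, build):
        """Return the cached example object, computing it on first use."""
        if name not in self._cache:
            self.computations += 1
            self._cache[name] = build()
        return self._cache[name]


cache = ExampleCache()

# Two hypothetical method examples; the second composes the first.
def account_example():
    return cache.example("account", lambda: {"balance": 100})

def transfer_example():
    src = account_example()  # reused from the cache, not recomputed
    return cache.example("transfer", lambda: {"src": src, "amount": 30})

cache.flush()                    # start of a new test run
transfer_example()
account_example()                # cache hit: nothing rebuilt
assert cache.computations == 2   # each example was built exactly once
```

With this scheme the low-level example is exercised exactly once per run, even though the high-level test goes through it as well.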
>> Thus you
>> 	- reduce testing time
>
> No, you increase it.
Not if the cached variant is used. And I doubt it for the uncached one 
too, though I can only offer indications, not hard facts:
In the case studies I have done, an average of 50% of the tests have the 
following property: their set of called method signatures is included in
the set of called method signatures of at least one other test. And to 
compute the dependency graph you don't even need method wrappers:
if you have tagged method examples, the messages sent suffice. But to 
make a fair performance comparison with your solution, I think you 
would also have to remove the caching parts.
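Assuming each tagged test's set of called method signatures is known, that subsumption property translates into a simple selection rule: run stand-alone only those tests whose signature set is not a proper subset of another test's. A small Python sketch (the test names and signatures are made up for illustration):

```python
def maximal_tests(signatures):
    """signatures: dict mapping test name -> set of called method
    signatures. Return the tests to run stand-alone: those whose
    signature set is not a proper subset of another test's set."""
    return [
        t for t in signatures
        if not any(signatures[t] < signatures[u]  # '<' is proper subset
                   for u in signatures if u != t)
    ]

calls = {
    "testAccount":  {"Account>>balance"},
    "testDeposit":  {"Account>>balance", "Account>>deposit:"},
    "testTransfer": {"Account>>balance", "Account>>deposit:",
                     "Account>>withdraw:"},
}
print(maximal_tests(calls))  # → ['testTransfer']
```

Here running testTransfer alone exercises everything the other two tests call, so two of the three tests need not be executed stand-alone.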
>> 	- get your assertions failing -> find your bugs in the most specific 
>> context
>
> I think mocks are a better way to achieve this.
Why? I am fully aware of the usefulness of mocks when it comes to 
mocking up external things like databases or class creation ;-) 
but I really doubt they are a good idea for building scenarios over 
which you already have complete control.
>
>> 	- improve our design: we might think about providing methods in your 
>> real program which we can exemplify to get some nice objects:
>
> I think this is a general benefit of careful testing, whether or not 
> your tests are composed.
Composition was not my point here. My point was that the metaphor 
(method examples <-> methods) leads to less complex code 
in the tests and more complex code in the program.
>
>> 	- might thus get rid of some test artifacts / helper methods (not 
>> sure about this one)
>
> Not sure about that one either. I don't notice many test artifacts in 
> my production code - it just uses the same interfaces as the test 
> code. I definitely notice that extensive testing has a design impact 
> on my code, but I suspect that's a positive thing. It's hard to say, 
> though, what the code would look like if it wasn't tested.
>> How could this be done? Say we somehow tag the unit tests which are 
>> method examples. Then we should be able to detect their call graphs.
>> And then only call the necessary ones to cover the whole graph.
>
> I did something like this using MethodWrappers a while back. The idea 
> was to build a coverage analyzer out of it, but I never got around to 
> finishing it. It might be worth dusting off.
Want to join my efforts in using Andreas Raab's MethodsAsObjects as a 
damn fast MethodWrapper?
See http://kilana.unibe.ch:8888/@hBnEVpKIcgNQZHaK/fMAXfuRy

I actually tried to cover the nice and complex (though a bit mocky ;-) 
Monticello tests, but sends via perform: could not yet be covered 
correctly.
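For the archives, the wrapper idea itself is easy to sketch in Python using the interpreter's trace hook (this is only an illustration of what a per-test method wrapper records; the real MethodsAsObjects approach works on compiled methods in the Squeak image, and the names below are invented). Note that a runtime trace even catches the dynamic, perform:-like send, which a static analysis of messages sent would miss:

```python
import sys

def called_functions(test):
    """Run `test` while recording every Python function entered,
    roughly what a method wrapper records per test."""
    seen = set()

    def tracer(frame, event, arg):
        if event == "call":
            seen.add(frame.f_code.co_name)
        return None  # no need for line-level tracing

    old = sys.gettrace()
    sys.settrace(tracer)
    try:
        test()
    finally:
        sys.settrace(old)
    return seen

def deposit():
    return 30

def test_transfer():
    # a dynamic lookup, analogous to a perform: send in Smalltalk
    return globals()["deposit"]()

print(sorted(called_functions(test_transfer)))  # → ['deposit', 'test_transfer']
```

A coverage analyzer in this style only needs the per-test sets collected here; the open problem mentioned above is making the equivalent instrumentation in the image see perform: sends as reliably as ordinary ones.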

Cheers,

Markus



More information about the Squeak-dev mailing list