SUnit: Skipping tests?

Bert Freudenberg bert at impara.de
Mon Mar 27 11:23:19 UTC 2006


Am 27.03.2006 um 12:54 schrieb Markus Gaelli:

>
> On Mar 27, 2006, at 12:28 PM, Bert Freudenberg wrote:
>
>>>> And you are suggesting that we indicate clearly which tests
>>>> depend on some external resource?
>>>
>>> Well, really, what I'm looking for is something that instead of
>>> saying "all tests are green, everything is fine" says "all the
>>> tests we ran were green, but there were several that were *not*
>>> run, so YMMV". In other words, something that instead of saying
>>> "x tests, y passed" either says "x tests, y passed, z skipped" or
>>> simply doesn't include the skipped ones in the number of tests
>>> being run. Either way, seeing "19 tests, 0 passed, 19 skipped" or
>>> simply "0 tests, 0 passed" is vastly more explicit than "19
>>> tests, 19 passed" when in reality 0 were run.
>>>
>>> Like, what if a test that doesn't execute any assertion is simply
>>> not counted? Such a test doesn't make sense to begin with, and
>>> then all the preconditions would need to do is bail out early and
>>> the test wouldn't count...
>>>
>>> In any case, my complaint here is more about the *perception* of
>>> "these tests are all green, everything must be fine" when in fact
>>> none of them have tested anything.
>>
>> Other unit test frameworks support skipping tests. One pattern is
>> to raise a SkipTest exception, in which case the test is added to
>> the "skipped" list.
>>
>> The good thing about implementing this with exceptions is that it  
>> would work nicely even if the particular test runner does not yet  
>> know about skipping.
>>
>> Another nice XPish thing is to mark tests as ToDo - it's an  
>> expected failure, but you communicate that you intend to fix it soon.
>>
>> See, e.g., http://twistedmatrix.com/projects/core/documentation/howto/policy/test-standard.html#auto6
>
> So the circle is closing... exceptions and preconditions again! ;-)
> So Andreas, do you want to introduce some ResourceNotAvailable and
> ToDo exceptions ;-), or do we get away without them and just throw
> the PreconditionError that I suggested in an earlier thread?

It's all about communicating the test writer's intent to the test  
runner. And I think I'd prefer "x tests, y passed, z skipped" as  
Andreas suggested.
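
Roughly, the SkipTest pattern could look like this in our SUnit. All
the names below (TestSkipped, #skip:, #hasNetwork, #fetchPage:) are
made up for illustration; none of this is existing SUnit code:

  Error subclass: #TestSkipped
      instanceVariableNames: ''
      classVariableNames: ''
      poolDictionaries: ''
      category: 'SUnit-Extensions'

  TestCase >> skip: reasonString
      "Abort the running test. A runner that knows about TestSkipped
      would put the test on a 'skipped' list instead of counting it
      as passed."
      TestSkipped signal: reasonString

  MyNetworkTest >> testFetchPage
      "Bail out instead of pretending to pass when the external
      resource is missing."
      self hasNetwork ifFalse: [self skip: 'no network available'].
      self assert: (self fetchPage: 'http://example.org') notNil

A runner that does not yet know about TestSkipped would simply report
such a test as an error, which at least errs on the noisy side
instead of painting everything green.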

> As said in the previous mail, ToDos could easily be detected just
> by sticking to the convention of not even starting the method under
> test, which is a good idea in that case anyhow.
> As a nice side effect, one would not even have to touch the tests
> later when the method under test gets implemented.

However, you wouldn't get the "unexpected success" mentioned in the  
link above.
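
To get it, the runner would need an explicit list of ToDos, e.g. an
#expectedFailures method (again only a sketch; this selector does not
exist in our SUnit):

  MyTestCase >> expectedFailures
      "Selectors of tests that are known to fail for now (ToDos)."
      ^#(testUnicodeFilenames)

A failure of testUnicodeFilenames would then be counted as an
expected failure, while a pass would be flagged as an "unexpected
success", the hint to remove the selector from the list.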

- Bert -



