[squeak-dev] Porting JUnit's Theories to SUnit?

Levente Uzonyi leves at elte.hu
Mon Jul 11 20:27:29 UTC 2011


On Mon, 11 Jul 2011, Frank Shearar wrote:

> Has anyone looked into porting JUnit 4's Theories into SUnit? (NUnit
> also uses theories, in 2.5)
>
> In brief, a Theory is a test that takes a parameter. So what before might say
>
> testMyFooPrintsIntegersHomoiconically
>    -1 to: 1 do: [:i | self assert: i myFoo = i printString
> description: 'Failure for integer ', i printString]
>
> becomes
>
> testMyFooPrintsIntegersHomoiconically: anInteger
>    self assert: anInteger myFoo = anInteger printString description:
> 'Failure for integer ', anInteger printString
>
> You define a bunch of DataPoints, and then the runner runs that test
> for every data point. In JUnit data points are defined through
> constants with @DataPoint/@DataPoints annotations, but of course we
> can do them however we want. Further, theories can make assumptions,
> which are essentially pretest filters. For instance, in a TestCase
> dealing with real algebra, a test for square roots might say
>
> testSquareRootReturnsRoot: anInteger
>    self assumeThat: [anInteger > 0].
>    "Rest of test"
>
> and then the test would only run on positive data points.
>
> The essential idea is simply decoupling the test itself - the theory -
> from the data, so you don't have to roll your own looping construct
> when testing multiple data points.
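An assumeThat: filter like the one described above could be sketched in SUnit roughly like this (TestAssumptionFailure is a hypothetical exception class, not existing SUnit API; the idea is that the runner would count it as a skip rather than a failure):

    TestCase >> assumeThat: aBlock
        "Hypothetical sketch: abort the rest of the test, without
        failing it, when the assumption does not hold for the
        current data point."
        aBlock value
            ifFalse: [TestAssumptionFailure signal: 'assumption not met']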

I usually roll my own loops and use a single test method for a gazillion 
different cases. This style has the drawback that if you're not the one 
running the tests, you won't know which "subcase" is failing. So I see 
some value in Theories, provided the test runner can tell which "subcase" 
(datapoint) failed.
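
A rough sketch of how a theory runner could surface the failing datapoint, 
assuming a one-argument test method and a hypothetical dataPointsFor: 
selector (TestFailure and on:do: are real Smalltalk/SUnit; the rest is 
made up for illustration):

    TheoryTestCase >> runTheory: aSelector
        "Run the one-argument test method once per data point,
        tagging any failure with the data point that caused it."
        (self dataPointsFor: aSelector) do: [:each |
            [self perform: aSelector with: each]
                on: TestFailure
                do: [:failure |
                    TestFailure signal: failure messageText,
                        ' (data point: ', each printString, ')']]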

AFAIK our version of SUnit is a modified version of SUnit 3 (which is not 
the latest and greatest), and I miss some basic features of the test runner 
(and of the framework itself), so enhancing it is welcome. The features I 
miss the most are:
- differentiate between timeouts and failures
- save the process for each failure/error (as a partial continuation?) 
and resume that, instead of re-running the test (which may pass on the 
second run), when checking the failing test
- measure the runtime of each individual test
- easily create a report of the results
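
The runtime measurement at least looks easy to bolt on: Time 
millisecondsToRun: is existing Squeak API, though the selector below and 
reporting via the Transcript are just a sketch:

    TestCase >> runCaseTimed
        "Run this test case and answer how long it took, in
        milliseconds. Only Time millisecondsToRun: is standard;
        the rest is a hypothetical sketch."
        | ms |
        ms := Time millisecondsToRun: [self runCase].
        Transcript
            show: self selector asString, ' took ', ms printString, ' ms';
            cr.
        ^ms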


Levente

>
> frank
>
>


