[ANN] BrowseUnit v4 preview: feedback needed
Colin Putney
cputney at wiresong.ca
Tue Mar 9 18:11:57 UTC 2004
On Mar 5, 2004, at 4:01 PM, Markus Gaelli wrote:
> Hi Colin,
>
>>> If you have composed method examples/tests, you could enhance
>>> BrowseUnit/XUnit to only run the necessary high level tests, so that
>>> all included tests are checked along the way and are not executed
>>> stand alone.
>>
>> I don't see how this would be an improvement. In the example I
>> posted, each test gets run exactly once. Even if you enhanced SUnit
>> to keep track of which tests had been run as part of a higher-level
>> test, you'd still execute the lower level tests multiple times–once
>> for every high-level test that called them.
> So assuming I used the cached version of method examples from the end
> of my last mail, would we then agree? Assuming also that all caches are
> flushed at the beginning of a new test run?
With caching, yes, I agree that you wouldn't increase the testing time,
and might actually reduce it. The cost of this is complexity: your
tests must return meaningful values, they must cache those values in
case they get run more than once, and you have to make the SUnit
framework preserve test identity so that the caches work correctly.
But SUnit already provides caching: TestResources. I still don't see
what test composition provides that well-factored tests and test
resources don't.
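For reference, a test resource boils down to something like this. This
is only a sketch, and ParsedSourceResource, MyParserTest, and
buildExpensiveFixture are made-up names, but the shape is standard
SUnit: a TestResource is set up once per run and shared by every test
case that declares it class-side.

```smalltalk
"ParsedSourceResource is a subclass of TestResource (hypothetical name)."
ParsedSourceResource >> setUp
	"Runs once, before the first test that needs the resource;
	 torn down when the whole run finishes."
	tree := self buildExpensiveFixture

ParsedSourceResource >> tree
	^ tree

"A test case opts in by listing its resources class-side:"
MyParserTest class >> resources
	^ Array with: ParsedSourceResource

MyParserTest >> testSomething
	"Every test in the run sees the same cached instance via #current."
	self assert: ParsedSourceResource current tree notNil
```

So the caching and the identity bookkeeping live in the framework,
not in the tests themselves.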
>>> - get your assertions failing -> find your bugs in the most
>>> specific context
>>
>> I think mocks are a better way to achieve this.
> Why? I am fully aware of the usefulness of mocks when it comes to
> mocking up external things like databases or creation of classes ;-)
> but I really doubt that they are a good idea when you want to create
> scenarios where you have complete control over them.
I think you're confusing mocks with fakes. (I did as well until fairly
recently. The "mocks" in the Monticello tests should really be called
fakes.) The point of a mock is to fail the test as early as possible,
so that you have the correct context when debugging.
Imagine a MockWriteStream, for example. You give it a reference
collection, and when it receives #nextPut: messages, it compares the
parameter to its collection instead of storing it. If the expected
sequence is violated, it throws a TestFailed exception, so your
walkback shows exactly which code is writing incorrect data. I wrote a
Mocket ("mock socket") class for my (unfinished) VNC package.
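The core of such a mock is just a few lines. This is a sketch, not the
actual class: the instance variable names are illustrative, and I'm
assuming SUnit's failure exception is raised via TestResult failure.

```smalltalk
"MockWriteStream holds 'reference' (the expected collection) and
 'position' (how many elements have been written so far)."
MockWriteStream >> on: aCollection
	reference := aCollection.
	position := 0

MockWriteStream >> nextPut: anObject
	"Compare instead of storing; fail at the first deviation so the
	 walkback points straight at the code writing the bad data."
	position := position + 1.
	anObject = (reference at: position) ifFalse:
		[TestResult failure signal:
			'unexpected element at position ', position printString].
	^ anObject
```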
As another example, OmniBrowser includes a ProtocolMock class that I
use to verify that nodes use the context protocol correctly. It uses
#doesNotUnderstand to verify that it receives the correct sequence of
messages, and fails immediately when it receives an unexpected message.
It makes debugging very easy because I can see where that message was
sent.
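Roughly, it works like this (again a sketch with illustrative names):
the mock holds an ordered list of expected selectors and checks each
incoming message in #doesNotUnderstand:, failing on the first mismatch
so the debugger opens right where the wrong message was sent.

```smalltalk
"ProtocolMock keeps 'expected', an OrderedCollection of the selectors
 it should receive, in order."
ProtocolMock >> doesNotUnderstand: aMessage
	| expectedSelector |
	expected isEmpty ifTrue:
		[^ TestResult failure signal:
			'unexpected message ', aMessage selector asString].
	expectedSelector := expected removeFirst.
	aMessage selector = expectedSelector ifFalse:
		[TestResult failure signal:
			'expected ', expectedSelector asString,
			' but got ', aMessage selector asString].
	^ nil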
> Want to join my efforts to use Andreas Raab's MethodsAsObjects as a
> damn fast MethodWrapper?
> See http://kilana.unibe.ch:8888/@hBnEVpKIcgNQZHaK/fMAXfuRy
>
> I actually tried to cover the nice and complex (though a bit mocky ;-)
> Monticello tests, but perform: could not yet be correctly covered.
Love to. I'd really like to see a nice coverage tool for Squeak.
Colin
More information about the Squeak-dev mailing list