
launchpad-dev team mailing list archive

Re: Should a test case leave behind OOPS reports?

 

On Mon, May 10, 2010 at 7:51 PM, Maris Fogels
<maris.fogels@xxxxxxxxxxxxx> wrote:
> On 05/10/2010 12:44 PM, Andrew Bennetts wrote:
>>
>> Maris Fogels wrote:
>>>
>>> Hi all,
>>>
>>> Question for the room: should a test case leave behind OOPS reports
>>> in /var/tmp/lperr.test?  Would detecting that a test left behind
>>> OOPS reports when it was not supposed to increase the quality of our
>>> code?
>>
>> My guess:
>>
>> No and yes, respectively.
>>
>> -Andrew.
>
> Thanks Andrew.  Sounds good to me :)
>
> So, a follow-up question then: would it be too much work to add such
> checking to the Launchpad test suite?
>
> Here is the complete list of OOPS Exception-types I saw on the hung server:
> http://people.canonical.com/~mars/oopslist.txt
>
> Looking through the list of OOPSes, there are a few head-scratchers, and a
> few that are entirely to be expected.  I suspect the 404 "Object Not Found"
> errors are fine to leave.  Some of these are blank Runtime errors, probably
> raised on purpose by tests (the tracebacks are missing: I have no way to
> tell).  I have no idea what to think about MemoryError OOPSes.
>
> Checking for new OOPSes after a test is simple.  Figuring out which ones are
> expected is not.  How could we do this in a simple, unobtrusive way?
>
> We could change TestCase.assertRaises() to do OOPS tracking.  Or the Zope
> testbrowser itself can track the errors it generates (404 errors mostly).
>  The programmer would get OOPS tracking for free.
>
> In unit tests we could use a @decorator or TestCase property to signal that
> certain types of Exception generated by a unit test should be ignored.  We
> can do this on a test-by-test basis by writing:
>
>  @expect_oops(RuntimeError)
>
> That is about the most effortless way I can think of to flag something.
>
> Everything not covered by the two cases above could raise a warning.
>
> Sound sane?
>

Almost.

We have a similar situation in the Twisted test suite: any logged
error fails the test, but we also need to be able to test that things
log errors appropriately.

We use a more imperative approach than the one you outline. Instead of
decorating a test to say that it expects a logged error of a particular
kind, we do this:
  def test_foo(self):
      # ... exercise code that is expected to log a RuntimeError ...
      loggedErrors = self.flushLoggedErrors(RuntimeError)
      self.assertStuff(loggedErrors)
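(flushLoggedErrors removes the matching errors from the log and returns
them, so anything left unflushed when the test finishes still fails it.)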

There are a couple of reasons for this.

First, it turns out that you often want to check more things than just
the type of error.

Second, in integration tests, it's often useful to be able to clear
the log more than once.
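For example, in a longer integration test you might want something like
this (just a sketch; the service object and its methods are made up for
illustration):

  def test_reconnect_logs_each_failure(self):
      # 'self.service' and its methods are invented for the example.
      self.service.connect()    # first attempt fails and logs an error
      self.assertEqual(1, len(self.flushLoggedErrors(RuntimeError)))
      self.service.reconnect()  # second attempt fails and logs again
      self.assertEqual(1, len(self.flushLoggedErrors(RuntimeError)))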

This isn't an argument against having a decorator, mind. It's an
argument for building the decorator on top of a lower-level primitive
that is available to test authors.
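For instance, the @expect_oops decorator you suggest could be a thin
wrapper over such a primitive. A rough sketch (assuming a
flushLoggedErrors-style method; the exact Launchpad hook is an open
question):

  import functools

  def expect_oops(*error_types):
      # Hypothetical decorator, built on the flush primitive rather
      # than replacing it.
      def decorator(test_method):
          @functools.wraps(test_method)
          def wrapper(self, *args, **kwargs):
              result = test_method(self, *args, **kwargs)
              # Flush the expected errors; anything left over still
              # counts as an unexpected OOPS.
              self.flushLoggedErrors(*error_types)
              return result
          return wrapper
      return decorator

That gives you the effortless @expect_oops(RuntimeError) spelling while
keeping the imperative API underneath.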

Also, there's a nice way of attaching uncaught OOPSes to the test
results. Bazaar does a similar thing to include the relevant .bzr.log
bits for failing tests. Talk to Rob or Jelmer or me if you want to know
more.
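With testtools-style details, that can look roughly like this (a
sketch; _find_new_oops_reports is a made-up helper for reading whatever
the test left in the OOPS directory):

  from testtools import TestCase
  from testtools.content import text_content

  class OopsDetailTestCase(TestCase):

      def tearDown(self):
          # Attach any OOPS reports the test left behind, so they show
          # up alongside the failure output in the test result.
          for name, text in self._find_new_oops_reports():
              self.addDetail('oops-%s' % name, text_content(text))
          super(OopsDetailTestCase, self).tearDown()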

jml


