Re: Legacy, performance, testing: 6 months of new critical bugs analysed

Hi Martin,

Thanks for reviewing the document.

On 11-10-24 02:01 AM, Martin Pool wrote:
> Thanks, that's really interesting to see.
> 
> Incidentally to get the editable form (again if you are at
> canonical.com) you must actually use
> <https://docs.google.com/a/canonical.com/document/d/1GNgTwk62WzG9oIN91bTZI4fNwfylYiSdXC56y9i_riQ/edit?hl=en_US>
> otherwise you are redirected to a new empty document.
> 
>> 28% of bugs would have been prevented by proper testing
> 
> I guess this is almost guaranteed to be true for bugs that aren't
> deployment, load, etc related, that you could possibly have written a
> test to catch them.  I am interested in what would count as 'proper',
> ie the spectrum between
> 
> - it wasn't tested at all, through to
> - some cases were tested but others weren't and they failed, to
> - in theory you could have written a test but it would have required
> unusual foresight
> 

I invite you to look at the data for the definitive answer and to see if
you agree with my categorization :-)

But from my analysis, I'd say it's a mix of your first two cases. In
fact, the 28% related to testing is an aggregation of three categories:

* missing_unit_test (5)
* missing_interface_test (2)
* missing_integration_test (6)

Under missing unit test fall things that would have been caught had
strict TDD been used. In two of the bugs, the problem was a typo in the
code (#754089, #837568). In the other three, one side of a logic
condition was missing coverage (#796705, #812583, #823473).
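
To make the logic-condition case concrete, here's the shape of the
test pair that strict TDD produces. Everything in this sketch is
invented for illustration (the predicate, its semantics, the fake
bug); only the shape matters:

    import unittest
    from collections import namedtuple

    # Hypothetical stand-ins; nothing here is real Launchpad code.
    FakeBug = namedtuple('FakeBug', 'private subscribers')

    def is_visible(bug, user):
        # A two-branch predicate: private bugs are visible only to
        # subscribers, public bugs to everyone.
        if bug.private:
            return user in bug.subscribers
        return True

    class IsVisibleTest(unittest.TestCase):
        # Strict TDD covers *both* sides of the condition; the bugs
        # above each had one side with no coverage at all.
        def test_private_bug_hidden_from_non_subscriber(self):
            bug = FakeBug(private=True, subscribers=set())
            self.assertFalse(is_visible(bug, 'alice'))

        def test_private_bug_visible_to_subscriber(self):
            bug = FakeBug(private=True, subscribers={'alice'})
            self.assertTrue(is_visible(bug, 'alice'))

        def test_public_bug_visible_to_anyone(self):
            bug = FakeBug(private=False, subscribers=set())
            self.assertTrue(is_visible(bug, 'anyone'))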

The missing interface test category simply means that the bug lay in an
object that extended or implemented an existing interface, and there
were no tests asserting that the interface was implemented properly
(#751982, #834082).
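
Since Launchpad's interfaces are Zope interfaces, one verifyObject
assertion per implementation would typically catch this class of bug.
A minimal sketch, with a made-up interface and class:

    from zope.interface import Interface, implementer
    from zope.interface.verify import verifyObject

    class IBugTarget(Interface):
        # Hypothetical stand-in for a real Launchpad interface.
        def searchTasks(query):
            """Search this target's bug tasks."""

    @implementer(IBugTarget)
    class SourcePackage:
        # If this method were missing or had the wrong signature,
        # verifyObject would raise at test time instead of the page
        # breaking in production.
        def searchTasks(self, query):
            return []

    # In a test case, a single assertion per implementation:
    verifyObject(IBugTarget, SourcePackage())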

The integration test category is a little less clear-cut: in
integration there are multiple ways in which things can fail to fit
together, and some of them only become obvious in hindsight.

But in that category, I mostly put places where we had business rules
that needed supporting, but those rules were neither documented nor
enforced (through tests).

I'd say that half were untested business rules:

   * Notifications should work for teams (#784948)
   * An IntegrityError in a transaction should be retried (that was
unit tested, but failed in practice) (#812176)
   * We support whitespace in attachment URLs (#825458) (maybe that one
only became obvious in hindsight)
   * Bug supervisors can approve series-targeted bug tasks (#834082)

And the other half are really integration issues:

   * Changes in lp.client broke untested JS code (#830676)
   * A source package grew an interface, but a page registered on that
interface failed for the implementation (#798947)
   * A JS library was excluded from the build, and code requiring it
failed (#808561)

But even in these cases, I don't think it would be far-fetched to expect
test coverage of these aspects.
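
To make that concrete, here is roughly what documenting and enforcing
the first business rule above (notifications work for teams) could
look like as a test. The miniature model below is invented for
illustration and is not Launchpad's actual code:

    import unittest

    class Team:
        # Hypothetical stand-in for a real team object.
        def __init__(self, members):
            self.members = set(members)

    class Bug:
        def __init__(self):
            self.subscribers = set()

        def subscribe(self, subscriber):
            self.subscribers.add(subscriber)

        def notification_recipients(self):
            # The business rule under test: subscribing a team means
            # every member gets notified, not just the team itself.
            recipients = set()
            for subscriber in self.subscribers:
                if isinstance(subscriber, Team):
                    recipients |= subscriber.members
                else:
                    recipients.add(subscriber)
            return recipients

    class TeamNotificationRuleTest(unittest.TestCase):
        def test_subscribing_a_team_notifies_its_members(self):
            bug = Bug()
            bug.subscribe(Team(members={'alice', 'bob'}))
            self.assertEqual({'alice', 'bob'},
                             bug.notification_recipients())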

So while I agree that 'better testing' can become an "apple-pie and
motherhood" statement, I think I've been fairly conservative in my
approach. And what the analysis shows is simply that there is still a
lot of room for improvement in our testing practices.

-- 
Francis J. Lacoste
francis.lacoste@xxxxxxxxxxxxx
