
ubuntu-bugcontrol team mailing list archive

Re: Defect analysis / Test escape analysis question


On 04/10/11 11:05, Gema Gomez wrote:
> Dear all,
> 
> I am preparing a blueprint for UDS and would like to draw on your
> experience with defect control to help me get it right. I am a member
> of the QA Team.
> 
> We are trying to improve our testing and are going to need help
> prioritizing the areas that are in the worst shape in terms of test
> coverage. In the past, I have always started such a task by assessing
> the quality of a system from its defect database, doing analysis that
> helped me understand where we are and where most of the problems live.
> We'll also need to measure code coverage, but that is a conversation to
> be had outside of this mailing list.
> 
> In a nutshell, I need to come up with a classification strategy that
> lets us look at clusters of defects, figure out why they are there, and
> decide whether QA can do anything to improve those areas.
> 
> The usual questions to be answered for each defect would be: was it
> found by a test case? Is that test case automated? If not, can it be
> automated? If there wasn't a test case, could we add one, and could
> that one be automated? In which phase of the release cycle was the
> defect found? In which phase could it have been found? Is it worth
> automating (i.e. not a marginal case or a strange HW setup)? And so on.
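> 
> To make that concrete, each defect would end up with a small
> classification record answering those questions; a sketch in Python
> (the field names and values are placeholders, not an agreed schema):
> 
>     # Per-defect classification record; each field maps to one of the
>     # questions above.
>     classification = {
>         'found_by_testcase': False,   # was it caught by an existing test?
>         'testcase_automated': False,  # if caught, was that test automated?
>         'automatable': True,          # could a (new) test be automated?
>         'phase_found': 'beta',        # release phase where it was found
>         'phase_findable': 'alpha',    # earliest phase it could be found in
>         'worth_automating': True,     # not a marginal case or odd HW setup
>     }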
> 
> This is the blueprint as it stands:
> https://blueprints.launchpad.net/ubuntu/+spec/qa-metrics
> 
> On the test escape analysis and defect analysis front, I have this so far:
> 
>    * Test escape analysis and defect analysis
>        # of bugs that escaped our testing (and, of these, how many
>          should have been found by testing, automated or manual/ad hoc?)
>        # of bugs found per month during our testing activities
>        # of regressions found by our test suites: this is meant to
>          measure test case effectiveness; we could keep stats of defects
>          found per test case, to know which test cases are the most
>          effective at finding defects and write more of those (see the
>          sketch below).
> 
> 
> This kind of analysis/classification tends to be mostly manual at the
> beginning; going forward we can add tags as we deem appropriate if we
> want to keep tracking any of the classes. We would like to come up with
> a way of automating it as much as we can, so that we can concentrate on
> the test-facing bits: extracting the information from the raw data. We
> probably already have tags on the bugs that could help with the
> classification.
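> 
> For the extraction step, I was imagining something along these lines
> with launchpadlib (an untested sketch; the 'testcase-needed' tag is a
> made-up example of a classification tag we might agree on):
> 
>     from launchpadlib.launchpad import Launchpad
> 
>     # Anonymous, read-only login against production Launchpad.
>     lp = Launchpad.login_anonymously('qa-metrics-sketch', 'production')
>     ubuntu = lp.distributions['ubuntu']
> 
>     # Pull every Ubuntu bug task carrying the (hypothetical) tag.
>     for task in ubuntu.searchTasks(tags=['testcase-needed']):
>         bug = task.bug
>         print(bug.id, bug.title, bug.tags)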
> 
> So, based on your experience, how would you go about this task? Do we
> have enough information in Launchpad already, or do we need to tag
> defects in a different way to allow us to do that? I don't mind reading
> through a long list of defects and classifying them, but I am so lost
> that I wouldn't even know where to start extracting defects, or which
> list would be the optimal set of defects to get started with.
> 
> This information is instrumental in getting our testing on track, so
> any help, tips, or new ideas about questions to add would be much
> appreciated. Please copy me on your replies, as I am not a member of
> bug control and therefore am not subscribed to the list.
> 
> Best Regards,
> Gema

Thanks for all the feedback; I will see what I can do to accommodate all
the requests!

Best Regards,
Gema

