

Re: [OpenStack][Nova] Minimum required code coverage per file

One concern I have is this: suppose we find that a code block is
unnecessary, or can be refactored more compactly, but it has test
coverage.  Then removing it would make the % coverage fall.

We want to remove the code, but we'd have to add unrelated tests to the
same merge because otherwise the test coverage % would fall?
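
(A quick worked example: a file with 100 lines and 85 covered sits at
85%.  Delete 10 fully-covered dead lines and it drops to 75/90 ≈ 83% -
the number falls even though the change is a pure cleanup.)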

I think we can certainly enhance the metrics, but I do have concerns
over strict gating (particularly per file, where the problem is more
likely to occur than per-project).

Maybe the gate could be that the count of uncovered lines must not
increase, unless the new coverage is > 80%.
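
Roughly, just to sketch the rule (all the names here are made up):

    # Pass if the change does not grow the absolute number of uncovered
    # lines, or if the resulting coverage clears the threshold anyway.
    def gate_passes(uncovered_before, uncovered_after,
                    covered_after, total_after):
        coverage = 100.0 * covered_after / total_after
        return uncovered_after <= uncovered_before or coverage > 80.0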

Or we could simply have a gate bypass.
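
For the bypass, the gate job could look for an explicit marker in the
commit message - hypothetical sketch, not an existing mechanism:

    import subprocess

    # Skip the coverage check when the author explicitly asks for it;
    # reviewers still see the marker in the commit message.
    msg = subprocess.check_output(
        ['git', 'log', '-1', '--format=%B']).decode()
    bypass_requested = 'Coverage-Bypass' in msg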

Justin

On Wed, Apr 25, 2012 at 2:45 PM, Monty Taylor <mordred@xxxxxxxxxxxx> wrote:

> Hey - funny story - in responding to Justin I re-read the original email
> and realized it was asking for a static low number, which we _can_ do -
> at least project-wide. We can't do per-file yet, nor can we fail on a
> downward inflection... and I've emailed Justin about that.
>
> If we have consensus on gating on a project-wide threshold, I can
> certainly add that gate to the todo list. (If we decide to do that, I'd
> really like to make it openstack-wide rather than just nova... although
> I imagine it might take a few weeks to come to consensus on what the
> project-wide low number should be.)
>
> Current project-wide line coverage numbers:
>
> nova: 79%
> glance: 75%
> keystone: 81%
> swift: 80%
> horizon: 91%
>
> Perhaps we get nova and glance up to 80% and then set the threshold at 80%?
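>
> Something like the following would probably do for a first cut
> (hypothetical script; it assumes the overall percentage is the last
> column of the TOTAL row in coverage.py's "coverage report" output):
>
>     import subprocess
>     import sys
>
>     THRESHOLD = 80  # the project-wide minimum under discussion
>
>     # The last line of "coverage report" is the TOTAL row, e.g.
>     # "TOTAL    1234    245    80%"; take the trailing percentage.
>     report = subprocess.check_output(['coverage', 'report']).decode()
>     percent = int(report.splitlines()[-1].split()[-1].rstrip('%'))
>     if percent < THRESHOLD:
>         sys.exit('%d%% coverage is below the %d%% gate'
>                  % (percent, THRESHOLD))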
>
> Also, turns out we're not running this on the client libs...
>
> Monty
>
> On 04/25/2012 03:53 PM, Justin Santa Barbara wrote:
> > If you let me know in a bit more detail what you're looking for, I can
> > probably whip something up.  Email me direct?
> >
> > Justin
> >
> >
> > On Wed, Apr 25, 2012 at 6:59 AM, Monty Taylor <mordred@xxxxxxxxxxxx> wrote:
> >
> >     On 04/24/2012 10:08 PM, Lorin Hochstein wrote:
> >     >
> >     > On Apr 24, 2012, at 4:11 PM, Joe Gordon wrote:
> >     >
> >     >> Hi All,
> >     >>
> >     >> I would like to propose a minimum required code coverage level
> >     >> per file in Nova.  Say 80%.  This would mean that any new
> >     >> feature/file should only be accepted if it has over 80% code
> >     >> coverage.  Exceptions to this rule would be allowed for code
> >     >> that is covered by skipped tests (as long as 80% is reached
> >     >> when the tests are not skipped).
> >     >>
> >     >
> >     > I like the idea of looking at code coverage numbers. For any
> >     > particular merge proposal, I'd also like to know whether it
> >     > increases or decreases the overall code coverage of the project.
> >     > I don't think we should gate on this, but it would be helpful
> >     > for a reviewer to see that, especially for larger proposals.
> >
> >     Yup... Nati requested this a couple of summits ago - main issue
> >     is that while we run code coverage and use the jenkins code
> >     coverage plugin to track the coverage numbers, the plugin doesn't
> >     fully support this particular kind of report.
> >
> >     HOWEVER - if any of our fine java friends out there want to chat
> >     with me about adding support to the jenkins code coverage plugin
> >     to track and report this, I will be thrilled to put it in as a
> >     piece of reported information.
> >
> >     >> With 193 python files in nova/tests, Nova unit tests produce
> >     >> 85% overall code coverage (calculated with ./run_test.sh -c
> >     >> [1]).  But 23% of files (125 files) have lower than 80% code
> >     >> coverage (30 tests skipped on my machine).  Getting all files
> >     >> to hit the 80% code coverage mark should be one of the goals
> >     >> for Folsom.
> >     >>
> >     >
> >     > I would really like to see a visualization of the code coverage
> >     > distribution, in order to help spot the outliers.
> >     >
> >     >
> >     > Along these lines, there's been a lot of work in the software
> >     > engineering research community about predicting which parts of
> >     > the code are most likely to contain bugs ("fault prone" is a
> >     > good keyword to find this stuff, e.g.:
> >     > http://scholar.google.com/scholar?q=fault+prone, big names
> >     > include Nachi Nagappan at MS Research and Elaine Weyuker,
> >     > formerly of AT&T Research). I would *love* to see some academic
> >     > researchers try to apply those techniques to OpenStack to help
> >     > guide QA activities by identifying which parts of the code
> >     > should get more rigorous testing and review.
> >
> >     ++
> >
