Re: [OpenStack][Nova] Minimum required code coverage per file
On 04/24/2012 10:08 PM, Lorin Hochstein wrote:
>
> On Apr 24, 2012, at 4:11 PM, Joe Gordon wrote:
>
>> Hi All,
>>
>> I would like to propose a minimum required code coverage level per
>> file in Nova. Say 80%. This would mean that any new feature/file
>> should only be accepted if it has over 80% code coverage. Exceptions
>> to this rule would be allowed for code that is covered by skipped
>> tests (as long as 80% is reached when the tests are not skipped).
>>
>
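For what it's worth, a per-file gate like that would be easy to prototype
by reading the Cobertura-style XML report that coverage.py can emit. A
rough sketch - the 80% floor, the report path and the fail-the-run exit
code are all placeholders, not anything we run today:

    import sys
    import xml.etree.ElementTree as ET

    MIN_PERCENT = 80.0

    # each measured source file shows up as a <class> element with a
    # line-rate attribute in the Cobertura-style coverage.xml
    root = ET.parse('coverage.xml').getroot()
    failures = []
    for cls in root.iter('class'):
        percent = float(cls.get('line-rate')) * 100
        if percent < MIN_PERCENT:
            failures.append((percent, cls.get('filename')))

    for percent, filename in sorted(failures):
        print('%5.1f%%  %s' % (percent, filename))
    sys.exit(1 if failures else 0)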
> I like the idea of looking at code coverage numbers. For any particular
> merge proposal, I'd also like to know whether it increases or decreases
> the overall code coverage of the project. I don't think we should gate
> on this, but it would be helpful for a reviewer to see that, especially
> for larger proposals.
Yup... Nati requested this a couple of summits ago. The main issue is that
while we run code coverage and use the Jenkins code coverage plugin to
track the coverage numbers, the plugin doesn't fully support this
particular kind of report.
HOWEVER - if any of our fine Java friends out there want to chat with me
about adding support to the Jenkins code coverage plugin to track and
report this, I would be thrilled to put it in as a piece of reported
information.
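In the meantime, the before/after comparison could be scripted outside
Jenkins easily enough. A sketch that diffs the overall line-rate of two
coverage XML reports - the file names are hypothetical, say one run
against trunk and one against the proposed branch:

    import sys
    import xml.etree.ElementTree as ET

    def line_rate(path):
        # the root <coverage> element carries the overall line-rate
        return float(ET.parse(path).getroot().get('line-rate'))

    before = line_rate('coverage-trunk.xml')
    after = line_rate('coverage-branch.xml')
    delta = (after - before) * 100
    print('coverage: %.2f%% -> %.2f%% (%+.2f points)'
          % (before * 100, after * 100, delta))
    sys.exit(1 if delta < 0 else 0)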
>> With 193 Python files in nova/tests, Nova unit tests produce 85%
>> overall code coverage (calculated with ./run_tests.sh -c [1]). But 23%
>> of files (125 files) have less than 80% code coverage (30 tests
>> skipped on my machine). Getting all files to hit the 80% code
>> coverage mark should be one of the goals for Folsom.
>>
>
> I would really like to see a visualization of the code coverage
> distribution, in order to help spot the outliers.
>
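That part should be easy once the per-file numbers are dumped out (e.g.
with the loop sketched above) - a quick matplotlib histogram, with dummy
data standing in for the real per-file percentages:

    import matplotlib.pyplot as plt

    # per-file coverage percentages; dummy values standing in for
    # whatever the coverage run actually produces
    percentages = [95.0, 42.5, 88.0, 100.0, 76.3, 61.0, 83.9, 97.2]

    plt.hist(percentages, bins=20, range=(0, 100))
    plt.axvline(80, color='red', linestyle='--', label='proposed 80% floor')
    plt.xlabel('per-file line coverage (%)')
    plt.ylabel('number of files')
    plt.title('Nova per-file coverage distribution')
    plt.legend()
    plt.savefig('coverage-hist.png')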
>
> Along these lines, there's been a lot of work in the software
> engineering research community on predicting which parts of the code
> are most likely to contain bugs ("fault prone" is a good keyword to find
> this stuff, e.g.: http://scholar.google.com/scholar?q=fault+prone, big
> names include Nachi Nagappan at MS Research and Elaine Weyuker, formerly
> of AT&T Research). I would *love* to see some academic researchers try
> to apply those techniques to OpenStack to help guide QA activities by
> identifying which parts of the code should get more rigorous testing
> and review.
++