
Re: [QA] openstack-integration-tests

 

That's great. We are in the process of creating a Jenkins slave to run on a collection of servers we have. I think this would be good for some black box tests, but I have been having some problems with nova failures. At the design summit I thought I heard that there were such tests, but that they were not currently working. Is there an existing black box test for a nova cluster that passes, using diablo and/or trunk, that I could try running?
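To be concrete, the kind of thing I would want to run is roughly the sketch below. The credentials, endpoint, and image/flavor ids are placeholders I made up, and the exact novaclient import path may well differ between diablo and trunk:

    import time
    import unittest

    from novaclient.v1_1 import client


    class NovaSmokeTest(unittest.TestCase):

        def setUp(self):
            # Placeholder credentials/endpoint -- a real suite would read
            # these from its configuration.
            self.nova = client.Client("admin", "secret", "demo",
                                      "http://127.0.0.1:5000/v2.0/")

        def test_boot_and_delete_server(self):
            # Image and flavor ids are placeholders for whatever the
            # cloud under test actually has loaded.
            server = self.nova.servers.create(name="smoke", image=1,
                                              flavor=1)
            try:
                # Poll until the server goes ACTIVE or we give up.
                for _ in range(60):
                    server = self.nova.servers.get(server.id)
                    if server.status == "ACTIVE":
                        break
                    time.sleep(5)
                self.assertEqual("ACTIVE", server.status)
            finally:
                self.nova.servers.delete(server)


    if __name__ == "__main__":
        unittest.main()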

David

On 10/10/2011 11:52 AM, Gabe Westmaas wrote:
I'd like to try to summarize and propose at least one next step for the content of the openstack-integration-tests git repository. Note that this is only about the actual tests themselves, and says absolutely nothing about any gating decisions made in other sessions.

First, there was widespread agreement that in order for an integration suite to be run in the openstack jenkins, it should be included in the community github repository.

Second, it was agreed that there is value in having tests in multiple languages, especially where those tests add value beyond the base language. Valid examples include testing with another set of bindings (and therefore exercising the API differently), or using a testing framework that simply takes a different approach to testing. Invalid examples include implementing the exact same test in another language simply because you don't like python.

Third, it was agreed that there is value in testing using novaclient as well as httplib2. Similarly, there is value in testing both XML and JSON.
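To illustrate what I mean, the same "list servers" call can be exercised both through the bindings and through raw httplib2, with the Accept header switching between JSON and XML. The values below are placeholders, not anything lifted from the current suites:

    import json

    import httplib2
    from novaclient.v1_1 import client


    def list_servers_via_bindings(user, api_key, project, auth_url):
        # novaclient handles auth and serialization for us, so this path
        # exercises the bindings as well as the API.
        nova = client.Client(user, api_key, project, auth_url)
        return [s.name for s in nova.servers.list()]


    def list_servers_via_http(nova_url, token, want_xml=False):
        # Raw httplib2 request: the same call, but we control the wire
        # format explicitly via the Accept header.
        accept = "application/xml" if want_xml else "application/json"
        headers = {"X-Auth-Token": token, "Accept": accept}
        resp, body = httplib2.Http().request(nova_url + "/servers", "GET",
                                             headers=headers)
        if want_xml:
            return body  # raw XML; parse with ElementTree if needed
        return [s["name"] for s in json.loads(body)["servers"]]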

Fourth, for black box tests, any fixture setup that a suite of tests requires should be done via a script that is close to, but not within, that suite -- we want tests to be as agnostic as possible to the implementation of openstack behind the API, and anything you cannot do from the API should not be inside the tests.
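As a rough sketch of what such a setup script might look like -- the names, credentials, and resources below are invented, and the point is just that everything goes through the public API so the tests themselves stay deployment-agnostic:

    #!/usr/bin/env python
    # Hypothetical fixture script living next to (not inside) a suite.
    from novaclient.v1_1 import client

    # Placeholder credentials -- a real script would read the same
    # configuration the suite uses.
    nova = client.Client("admin", "secret", "demo",
                         "http://127.0.0.1:5000/v2.0/")

    # Create a keypair for tests that need to ssh into an instance.
    if "smoke-key" not in [k.name for k in nova.keypairs.list()]:
        nova.keypairs.create("smoke-key")

    # Open up ssh and ping for the suite's security group.
    group = nova.security_groups.create("smoke", "fixtures for smoke suite")
    nova.security_group_rules.create(group.id, "tcp", 22, 22, "0.0.0.0/0")
    nova.security_group_rules.create(group.id, "icmp", -1, -1, "0.0.0.0/0")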

Fifth, there are suites of white box tests -- we understand there can be value here, but we aren't sure how to approach that in this project; more discussion is definitely needed. Maybe we have separate directories for holding white box and black box tests?
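To show the distinction I have in mind, here is a contrived pair of tests -- the directory names are hypothetical and the nova-internal calls are only roughly from memory:

    # tests/blackbox/test_services.py -- talks only to the public API;
    # "nova" here would be a configured novaclient instance handed to
    # the test by the suite.
    def test_can_list_flavors(nova):
        assert len(nova.flavors.list()) > 0


    # tests/whitebox/test_services.py -- reaches into nova itself, so it
    # can only run on a host where nova is installed and configured.
    def test_services_are_registered():
        from nova import context, db
        services = db.service_get_all(context.get_admin_context())
        assert len(services) > 0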

Sixth, no matter what else changes, we must maintain the ability to run a subset of tests through a common runner. This can be done via command line or configuration, whichever makes the most sense. I'd personally lean towards configuration with the ability to override on the command line.
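For what it's worth, the shape I am picturing is roughly the sketch below -- the option names and config file are made up, and it only shows the command line overriding the configuration before handing off to the normal unittest discovery machinery:

    #!/usr/bin/env python
    # Python 2 era sketch; ConfigParser is the py2 module name.
    import argparse
    import ConfigParser
    import unittest


    def load_suites(argv=None):
        parser = argparse.ArgumentParser(description="run a subset of suites")
        parser.add_argument("--config", default="suites.conf")
        parser.add_argument("--suites",
                            help="comma separated list; overrides the config")
        args = parser.parse_args(argv)

        config = ConfigParser.SafeConfigParser({"suites": "smoke"})
        config.read(args.config)

        # Command line wins over the config file, which wins over the default.
        selected = args.suites or config.get("DEFAULT", "suites")
        return [name.strip() for name in selected.split(",")]


    if __name__ == "__main__":
        loader = unittest.TestLoader()
        suite = unittest.TestSuite(loader.discover("tests/%s" % name)
                                   for name in load_suites())
        unittest.TextTestRunner(verbosity=2).run(suite)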

If you feel I mischaracterized any of the agreements, please feel free to say so.

Next, we want to start moving away from having multiple entry points for writing additional tests. That means taking inventory of the tests that are there now, figuring out what they test and how we run them, and then combining what makes sense into a sensible directory structure. As often as possible, we should make sure the tests can be run in the same way. I started a little wiki to collect this information; I think a short description of the general strategy of each suite, plus details about the specific tests in that suite, would be useful.

http://wiki.openstack.org/openstack-integration-test-suites

Hopefully this can make things a little easier to start contributing.

Gabe

