
openstack team mailing list archive

Re: [QA] openstack-integration-tests

 

Hello Stackers,

I was at the design summit sessions that were 'all about QA' and expressed my interest in supporting this effort. Sorry I could not be present at the first QA IRC meeting; I was on vacation.
I had a chance to read through the meeting log, and Nachi-san also shared his account of the outcome with me. Thanks Nachi!

Just a heads-up to put some of my thoughts on the ML before today's meeting.
I had a look at the various (7 and counting??) test frameworks out there for testing the OpenStack API.
Jay, Gabe and Tim put up a neat wiki (http://wiki.openstack.org/openstack-integration-test-suites) comparing many of these.

I looked at Lettuce<https://github.com/gabrielfalcao/lettuce> and felt it was quite effective. It's incredibly easy to write tests once the wrappers over the application are set up. Easy as in: a test scenario is written in natural language, e.g. "Given a ttylinux image, create a server", in a typical .feature file (which is basically a list of test scenarios for a particular feature). It has nose support, and there's some neat documentation<http://lettuce.it/index.html> too. Has anyone already tried out Lettuce with OpenStack? From the ODS, I gathered the Grid Dynamics folks already have their own implementation. It would be great if one of them could join the meeting and shed some light on how they've got it to work.
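To make this concrete, here is a sketch of what such a .feature file might look like. The scenario wording, the feature name, and the ttylinux/Glance details are illustrative, not taken from any existing suite:

```gherkin
Feature: Server lifecycle
  In order to verify the compute API end to end
  As an OpenStack tester
  I want to boot servers from a known image

  Scenario: Boot a server from a ttylinux image
    Given a ttylinux image is registered in Glance
    When I create a server from that image
    Then the server eventually reaches the ACTIVE state
```

Each Given/When/Then line maps, via a regex, to a Python step definition decorated with Lettuce's @step decorator; those step definitions are where the wrappers over the application actually live.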
Just for those who may be unaware, Soren's branch openstack-integration-tests<https://github.com/openstack/openstack-integration-tests> is actually a merge of Kong and Stacktester.

The other point I wanted more clarity on was using both novaclient AND httplib2 to make the API requests. Though <wwkeyboard> did mention issues with spec bugs proliferating into the client, how can we best utilize this dual approach while avoiding another round of duplicate test cases? Maybe we target novaclient first and then use httplib2 to fill in the gaps? After all, novaclient calls httplib2 internally.
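The layering argument can be sketched in a self-contained way. The snippet below stands up a throwaway local HTTP service and shows a thin "bindings" layer (playing the novaclient role) built on top of a raw-HTTP function (playing the httplib2 role), using only the standard library. The endpoint, payload, and class names are all hypothetical, purely for illustration:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stand-in service; in the real suite this would be a Nova API endpoint.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"servers": [{"id": "1", "name": "demo"}]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def raw_list_servers(base_url):
    """Raw-HTTP layer (the httplib2 role): full control over the request."""
    with urlopen(base_url + "/servers") as resp:
        return json.loads(resp.read())

class Bindings:
    """Thin client-bindings layer (the novaclient role), built on the raw layer."""
    def __init__(self, base_url):
        self.base_url = base_url

    def list_servers(self):
        return raw_list_servers(self.base_url)["servers"]

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_address[1]

names = [s["name"] for s in Bindings(base).list_servers()]  # bindings first
raw = raw_list_servers(base)  # raw layer covers gaps the bindings do not
server.shutdown()
```

Because the bindings delegate to the raw layer, a test written against the bindings implicitly exercises the HTTP path too; only cases the bindings cannot express (malformed requests, unsupported extensions) need dedicated raw-HTTP tests.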

I would like to team up with Gabe and others on the unified test runner task. Please count me in if you're doing some division of labor there.

Thanks!
Rohit

(NTT)
From: openstack-bounces+rohit.karajgi=vertex.co.in@xxxxxxxxxxxxxxxxxxx [mailto:openstack-bounces+rohit.karajgi=vertex.co.in@xxxxxxxxxxxxxxxxxxx] On Behalf Of Gabe Westmaas
Sent: Monday, October 10, 2011 9:22 PM
To: openstack@xxxxxxxxxxxxxxxxxxx
Subject: [Openstack] [QA] openstack-integration-tests

I'd like to try to summarize and propose at least one next step for the content of the openstack-integration-tests git repository.  Note that this is only about the actual tests themselves, and says absolutely nothing about any gating decisions made in other sessions.

First, there was widespread agreement that in order for an integration suite to be run in the openstack jenkins, it should be included in the community github repository.

Second, it was agreed that there is value in having tests in multiple languages, especially where those tests add value beyond the base language. Valid examples include testing through another set of language bindings (and therefore exercising the API from a different angle), or using a testing framework that takes a different approach to testing. Invalid examples include implementing the exact same test in another language simply because you don't like Python.

Third, it was agreed that there is value in testing using novaclient as well as httplib2.  Similarly that there is value in testing both XML and JSON.
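One way to test both XML and JSON without duplicating whole test suites is to decode each representation to a common structure and assert on that. The payloads and field names below are hypothetical, a minimal stdlib-only sketch of the idea:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical responses for the same server resource in both serializations.
json_body = '{"server": {"id": "42", "name": "test-vm", "status": "ACTIVE"}}'
xml_body = '<server id="42" name="test-vm" status="ACTIVE"/>'

def server_from_json(body):
    """Decode the JSON representation to a plain dict."""
    return json.loads(body)["server"]

def server_from_xml(body):
    """Decode the XML representation to the same plain dict shape."""
    return dict(ET.fromstring(body).attrib)

# One logical test covers two serializations: both decode to the same dict,
# so XML and JSON coverage cannot silently drift apart.
assert server_from_json(json_body) == server_from_xml(xml_body)
```

With this shape, a suite can run every logical assertion twice, once per content type, instead of maintaining parallel XML and JSON test files.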

Fourth, for black box tests, any fixture setup that a suite of tests requires should be done via a script that is close to, but not within, that suite - we want tests to be as agnostic as possible to any particular implementation of openstack, and anything you cannot do from the API should not be inside the tests.

Fifth, there are suites of white box tests - we understand there can be value here, but we aren't sure how to approach that in this project; more discussion is definitely needed. Maybe we have separate directories for white and black box tests?

Sixth, no matter what else changes, we must maintain the ability to run a subset of tests through a common runner.  This can be done via command line or configuration, whichever makes the most sense.  I'd personally lean towards configuration with the ability to override on the command line.
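The "configuration with a command-line override" idea can be sketched in a few lines. The suite names, registry, and flag are invented for illustration; a real runner would discover suites rather than hard-code them:

```python
import argparse

# Hypothetical registry mapping suite names to runnable entry points.
SUITES = {
    "servers": lambda: "ran server tests",
    "images": lambda: "ran image tests",
    "flavors": lambda: "ran flavor tests",
}

# Configuration supplies the default subset to run.
CONFIG = {"suites": ["servers", "images"]}

def select_suites(argv, config=CONFIG):
    """Return the subset to run: CLI flag if given, else the config default."""
    parser = argparse.ArgumentParser(prog="runner")
    parser.add_argument("--suites", nargs="+", choices=sorted(SUITES))
    args = parser.parse_args(argv)
    return args.suites or config["suites"]

def run(argv):
    """Run each selected suite through the one common entry point."""
    return [SUITES[name]() for name in select_suites(argv)]
```

Calling run([]) executes the configured default subset (servers and images), while run(["--suites", "flavors"]) overrides it from the command line, which matches the preference stated above.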

If you feel I mischaracterized any of the agreements, please feel free to say so.

Next, we want to move away from having multiple entry points for writing additional tests. That means taking inventory of the tests that are there now, figuring out what they test and how we run them, and then working to combine what makes sense into a directory structure that makes sense. As often as possible, we should make sure the tests can be run in the same way. I started a little wiki to collect this information. I think a short description of the general strategy of each suite, followed by details about the specific tests in that suite, would be useful.

http://wiki.openstack.org/openstack-integration-test-suites

Hopefully this can make things a little easier to start contributing.

Gabe
