
Implementing tests


Hi,

My nick at the IRC meetings is AntoniHP, and at HP I am responsible for
automated testing of Nova components. I voiced some concerns I have about
where Tempest is going. They are rooted in the fact that HP Cloud Services is
a large organization and we need automated processes that produce high-quality
information - it makes a difference whether a test result is FAIL, ERROR, or
SKIP.

The first problem is the structuring of tests, which are currently run using
nosetests - that is a bit of a problem, as this framework was created mostly
for unit testing. There are three things that plain nosetests does not take
care of for us:
1) Dependency of tests on each other - e.g. testing volume attachments does
not make sense and does not produce meaningful results when volumes cannot be
created - thus we need tests that depend on a failed test to SKIP rather than
FAIL or ERROR.

2) Higher resolution in reporting - if we test the ability to create a VM, it
is essential that the test indicate whether the VM was not created, or it was
created but the metadata was not passed correctly, or the API just sent a
malformed response.

3) Sharing resources - sometimes we need costly resources to be brought up,
e.g. to list servers we need to create a number of them, and we can safely
test multiple list-server filter cases against the same server list.

For that reason it makes sense to make a test case into a class of many
tests, for example something like this:

import unittest

class ServerCreate(unittest.TestCase):
    def test_001_make_rest_call(self):
        # REST call happens here
        pass

    def test_002_verify_response(self):
        # verify response, or skip if test_001 did not happen
        pass

    def test_003_verify_if_server_created(self):
        # verify the server is indeed there, or skip if test_001 did not happen
        pass
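
One way the skip could be wired up with plain unittest (2.7 or unittest2,
where skipTest is available) - just a sketch, the class-level flag is
something we would add, not anything Tempest has today:

class ServerCreate(unittest.TestCase):
    rest_call_ok = False  # class attribute, shared across the ordered tests

    def test_001_make_rest_call(self):
        # REST call happens here; on success, record it for the later steps
        type(self).rest_call_ok = True

    def test_002_verify_response(self):
        if not self.rest_call_ok:
            self.skipTest("test_001 did not happen")
        # verify response

    def test_003_verify_if_server_created(self):
        if not self.rest_call_ok:
            self.skipTest("test_001 did not happen")
        # verify the server is indeed there

A class attribute is needed because the runner builds a fresh TestCase
instance for every test method, so an instance attribute set in test_001
would not survive into test_002.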

This clearly breaks the idea of nosetests, as those tests are heavily
interdependent - but it gives us three different results rather than having
the whole procedure as one test. If that is very bad, we can implement a
plugin for nose to take care of such tests, implement tests as above, use
test generators, or abandon the idea of using unittest libraries for non-unit
tests. I would opt for the third option, as it is pretty clear to me; the
above test would look like this:

class ServerCreate(object):
    # note: a plain class - nose does not support generator methods
    # in unittest.TestCase subclasses
    def test_main_routine(self):
        yield self.make_rest_call
        if self.rest_call_response:
            yield self.verify_response
            yield self.verify_if_server_created, self.server_name

    def make_rest_call(self):
        # REST call happens here
        pass

    def verify_response(self):
        # verify response
        pass

    def verify_if_server_created(self, server_name):
        # verify the server is indeed there
        pass

This also means that classmethods are no longer needed, as setup and teardown
would be executed once per test generator. It also brings the advantage of
keeping the test logic separate from the test execution.
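
To make the shared-resources case (problem 3) concrete, a generator could
bring the costly fixture up once at the top and let every yielded check reuse
it. Just a sketch - boot_server and list_servers stand in for whatever
helpers we would actually have:

class ListServers(object):
    def test_list_filters(self):
        # costly setup runs once per generator, not once per yielded check
        # (boot_server is a hypothetical helper)
        self.servers = [boot_server("list-test-%d" % i) for i in range(3)]
        yield self.check_list_all
        yield self.check_filter_by_name, self.servers[0].name

    def check_list_all(self):
        # list_servers is a hypothetical helper as well
        assert len(list_servers()) >= 3

    def check_filter_by_name(self, name):
        assert any(s.name == name for s in list_servers(name=name))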


The second problem is about being explicit in API tests. I think that, just
like Kong, the API tests should be very explicit, and the test code should
avoid using helper classes or other kinds of code refactoring. Basically
every call should be specified as a set of headers and a body, something
like:

    path = "/servers?flavor=%s" % self.flavor_id
    response_header, response_data = some_rest_client.get(path)

That should only apply to test code; setup and teardown routines should be
proper code, as readable and simple as possible, which brings us to the third
issue, code refactoring.
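
Spelled out fully, an explicit test of the flavor filter could build the raw
request itself. A sketch with plain httplib - the host, token, and IDs are
placeholders, and the exact paths and JSON fields should be checked against
the Nova API docs:

import httplib
import json

conn = httplib.HTTPConnection(NOVA_HOST, NOVA_PORT)  # placeholder endpoint
headers = {"X-Auth-Token": AUTH_TOKEN,               # placeholder token
           "Accept": "application/json"}
path = "/v1.1/%s/servers/detail?flavor=%s" % (TENANT_ID, FLAVOR_ID)
conn.request("GET", path, headers=headers)
response = conn.getresponse()

# every expectation is asserted against the raw response
assert response.status == 200
servers = json.loads(response.read())["servers"]
assert all(str(s["flavor"]["id"]) == str(FLAVOR_ID) for s in servers)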

Our objective is to test an OpenStack environment, and for everything else we
should use code that is already written. Some classes I see in Tempest are
functionally very similar to novaclient classes, and the REST client is
actually very similar to the one implemented in the novaclient library. Just
as we use the nose library and the unittest library, and maybe paramiko for
SSH in the future, why not follow this idea of using open-source solutions?
If novaclient implements creating servers, why do it on our own? At the last
meeting it was mentioned that novaclient does not implement some features
(XML support, AFAIR), but then again - the idea of sharing code is to
contribute the missing features to the novaclient source code. This library
already contains a REST client with various authentication schemes and a wide
array of functions that wrap Nova functionality. For example, to build the
fixture for a list test, it is much simpler to make one call to the
servers.create function than to implement our own wrappers.
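
As a sketch (credentials and the image/flavor references are placeholders),
the whole setup for a list-servers test shrinks to a few novaclient calls:

from novaclient.v1_1 import client

nova = client.Client(USERNAME, PASSWORD, TENANT_NAME, AUTH_URL)

# bring up the fixture with the library instead of our own wrappers
for i in range(3):
    nova.servers.create("list-test-%d" % i, IMAGE_REF, FLAVOR_REF)

# then exercise the list filters against it
servers = nova.servers.list(search_opts={"flavor": FLAVOR_REF})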


Thanks,
Antoni Dabek
