openstack-qa-team team mailing list archive

Format of Tempest Tests

Hi team,

We've had some discussions on the format of the Tempest tests over the last few weeks, so I wanted to throw out a few examples to see if we can find a format that we are all comfortable with and can agree on. I think it's safe to say we all share at least the following goals:


  *   We want readable tests with clear goals
  *   We want the output of our tests to be meaningful
  *   When a test fails, the output should make clear what failed and why
  *   In the near future, we want parallel test execution to reduce test suite execution time

To help illustrate the possibilities, I've put together a few code snippets for a single test to use as examples. To start, here is a test we have today (create server): http://pastebin.com/WyHGdTjT
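
(In case the paste ever goes away: the snippet below is only a rough sketch of that style, not the actual code from the paste. The FakeServersClient stub and all of the names are made up purely to show the shape of a single test with a class-level fixture.)

    import unittest


    class FakeServersClient(object):
        """Illustrative stand-in for the real compute client; just
        enough behaviour for the sketch to run."""

        def __init__(self):
            self._servers = {}

        def create_server(self, name, image_ref, flavor_ref):
            server = {'id': str(len(self._servers) + 1), 'name': name,
                      'image_ref': image_ref, 'flavor_ref': flavor_ref,
                      'status': 'ACTIVE'}
            self._servers[server['id']] = server
            return 202, server

        def get_server(self, server_id):
            return 200, self._servers[server_id]

        def delete_server(self, server_id):
            del self._servers[server_id]
            return 204, None


    class ServerCreateTest(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            # Class-level fixture prepares the shared client and test data.
            cls.client = FakeServersClient()
            cls.name = 'test-server'

        def test_create_server(self):
            # One scenario with all assertions in a single method. An early
            # failure (e.g. on the create response) stops the later checks.
            status, server = self.client.create_server(self.name, 'img-1', '1')
            self.assertEqual(202, status)
            self.assertTrue(server['id'])

            status, details = self.client.get_server(server['id'])
            self.assertEqual(self.name, details['name'])
            self.assertEqual('ACTIVE', details['status'])

            status, _ = self.client.delete_server(server['id'])
            self.assertEqual(204, status)


    if __name__ == '__main__':
        unittest.main()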

The design is straightforward: a setup fixture prepares any general data for the class, and a single test executes one scenario with all desired assertions contained within. This test can run in parallel with others without issue. However, a point brought up two weeks ago was that some teams wanted more detail about what was being tested and which assertions were being made. This design also fails early (for example, if an assertion on the initial create response fails), which means the later assertions never run. From my understanding, an alternate solution similar to the following was proposed: http://pastebin.com/1zPFAeEm
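
(Again a rough sketch rather than the actual paste contents, reusing the made-up FakeServersClient stub from the first sketch. The test methods are numbered so that unittest's default alphabetical ordering matches the order they were coded in, which is one simple way to get the dependency chain described below.)

    import unittest


    class ServerCreateChainTest(unittest.TestCase):
        """Each logical set of assertions is its own test; the numbered
        method names keep execution in the coded order, so every test
        depends on the state left behind by the ones before it."""

        @classmethod
        def setUpClass(cls):
            cls.client = FakeServersClient()  # stub from the first sketch
            cls.name = 'test-server'

        def test_01_create_server(self):
            # Every later test depends on the server created here.
            status, server = self.client.create_server(self.name, 'img-1', '1')
            type(self).server = server
            self.assertEqual(202, status)
            self.assertTrue(server['id'])

        def test_02_server_details_match_request(self):
            _, details = self.client.get_server(self.server['id'])
            self.assertEqual(self.name, details['name'])

        def test_03_server_is_active(self):
            _, details = self.client.get_server(self.server['id'])
            self.assertEqual('ACTIVE', details['status'])

        def test_04_delete_server(self):
            status, _ = self.client.delete_server(self.server['id'])
            self.assertEqual(204, status)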

In short, I re-factored this single test so that each logical set of assertions becomes its own test. The premise is that these tests would run in exactly the order they were coded, creating a dependency chain. The benefits I can see from this approach are that any one individual failure would not stop the other tests from running, along with somewhat more granular reporting of successes. However, I can see two problems with this approach. First, this test class cannot run its tests in parallel: since each test depends on the state created before it, only class-level parallelization would be possible. A second, more subtle issue comes from the dependencies themselves. Because of this structure, if I only need to run the 6th test in a class, I still have to execute every preceding test to get the system into a state where the desired test can run. In this trivial example the extra time is minimal, but in larger examples it can easily become substantial.

I spent some time thinking through a solution that could provide the desired results of the previous approach while avoiding the dependencies. The following is what I would like to propose: http://pastebin.com/2pdV34Ph
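
(One last illustrative sketch, again assuming the FakeServersClient stub from the first sketch rather than the real client; the point is just the structure described in the next paragraph.)

    import unittest


    class ServerCreateAtRestTest(unittest.TestCase):
        """The create request happens once in the class setup and the
        response is stored; each test only reads the stored response or
        the server's resting state, so the tests no longer depend on
        each other's order."""

        @classmethod
        def setUpClass(cls):
            cls.client = FakeServersClient()  # stub from the first sketch
            cls.name = 'test-server'
            # Create once and keep the response; everything below is
            # read-only with respect to the server under test.
            cls.create_status, cls.server = cls.client.create_server(
                cls.name, 'img-1', '1')

        def test_create_response_code(self):
            self.assertEqual(202, self.create_status)

        def test_create_response_has_id(self):
            self.assertTrue(self.server['id'])

        def test_server_name_matches_request(self):
            _, details = self.client.get_server(self.server['id'])
            self.assertEqual(self.name, details['name'])

        def test_server_is_active(self):
            _, details = self.client.get_server(self.server['id'])
            self.assertEqual('ACTIVE', details['status'])

        # Note: delete is intentionally absent, since it would change the
        # state the other tests rely on; it would be covered separately.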

The difference is very subtle: the initial create request moves into the class setup, and the response is stored. After the server is created, the system (as far as this test is concerned) is at rest, so any assertions or checks can be made as long as they do not change the expected state of the resources under test. In this case, that means the delete test had to be removed and will have to be covered separately. Other than that, however, the order of the remaining tests no longer matters, and any number of tests based on the initial response or the current state of the server can run at the same time. I feel this approach provides the desired amount of feedback while still keeping execution time minimal.

I've put this together based on my understanding of the conversations we've had over the last few weeks. The goal was to provide some concrete examples of what we have discussed abstractly. I'd really love to hear some feedback from all of you on this topic, since it will have a large impact on development going forward.

Daryl