
openstack-qa-team team mailing list archive

Re: Moving follow-up Unconference to 1:45 today

 

On 10/22/2012 06:26 PM, Jay Pipes wrote:
Hi Yaniv, answers inline...

On 10/22/2012 11:41 AM, Yaniv Kaul wrote:
On 10/22/2012 05:33 PM, Jay Pipes wrote:
Hi Sean :)

Here's a quick recap:

We agreed:

* nosetests just isn't a good foundation for our work -- especially
regarding performance/parallelism
Any proposed alternatives?
I look from time to time at
https://code.google.com/p/robotframework/ and wonder whether it would
allow faster test case development.
I don't like Robot Framework's behaviour of creating test cases in a
tabular format:

http://robotframework.googlecode.com/hg/doc/userguide/RobotFrameworkUserGuide.html?r=2.7#id377

I vastly prefer code-based test cases.

I was thinking more of creating the libraries first (http://robotframework.googlecode.com/hg/doc/userguide/RobotFrameworkUserGuide.html?r=2.7#creating-test-libraries) - one for each project. Later, on top of the basic library, more complicated test scenarios could be written in a higher-level format (the tabular format, though it could be done in other ways as well). What I liked most about Robot was actually that, later on, non-developers could add more complex scenarios on top of the basic flows already developed.
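As an illustrative sketch of what such a library could be: in Robot terms it is just a plain Python class whose public methods become keywords. Everything below is made up (ComputeLibrary, the credentials) and it assumes the current python-novaclient v1_1 client:

    # Sketch of a Robot Framework test library: public methods of an
    # ordinary Python class become keywords callable from the higher-level
    # (tabular) test format. ComputeLibrary is hypothetical; it assumes the
    # v1_1 python-novaclient API.

    import time

    from novaclient.v1_1 import client


    class ComputeLibrary(object):

        def __init__(self, username, api_key, project_id, auth_url):
            self.nova = client.Client(username, api_key, project_id, auth_url)

        def launch_server(self, name, image_id, flavor_id):
            """Boot a server and return its id."""
            return self.nova.servers.create(name, image_id, flavor_id).id

        def server_should_become_active(self, server_id, timeout=300):
            """Fail unless the server reaches ACTIVE within `timeout` seconds."""
            deadline = time.time() + timeout
            while time.time() < deadline:
                if self.nova.servers.get(server_id).status == 'ACTIVE':
                    return
                time.sleep(5)
            raise AssertionError('server %s never became ACTIVE' % server_id)

A tabular test case could then call Launch Server and Server Should Become Active as keywords, which is the part non-developers could reasonably write on top of the basic flows.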


The library we decided to take a look at was testtools, written in part
by Robert Collins, who is a member of the CI team working on OpenStack
stuff at HP:

http://testtools.readthedocs.org/en/latest/index.html

Ok - although it's not very well documented - http://testtools.readthedocs.org/en/latest/py-modindex.html

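For reference, a made-up minimal example of what a testtools-based test looks like (not Tempest code); the matchers and the cleanup handling are the parts that go beyond plain unittest:

    # Made-up minimal testtools example. testtools keeps the unittest API
    # but adds matchers (assertThat), attachable details, and cleanups that
    # compose better than nose's fixture hooks.

    import testtools
    from testtools.matchers import Contains, Equals


    class FlavorListTest(testtools.TestCase):

        def setUp(self):
            super(FlavorListTest, self).setUp()
            # Cleanups registered here run even if the rest of setUp fails.
            self.flavors = {'m1.tiny': 512, 'm1.small': 2048}
            self.addCleanup(self.flavors.clear)

        def test_known_flavor_present(self):
            self.assertThat(self.flavors, Contains('m1.tiny'))
            self.assertThat(self.flavors['m1.tiny'], Equals(512))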

In the past we've also looked at PyVows:

http://heynemann.github.com/pyvows/

as well as DTest:

https://github.com/klmitch/dtest

Yes, we've looked at dtest as well - and I believe it was fine.


Basically, the issues we have with nosetests are:

* It is very intrusive
* It doesn't handle module and class-level fixtures properly (see
https://github.com/nose-devs/nose/issues/551)
* Multiprocessing plugin is total fail
(https://github.com/nose-devs/nose/issues/550)

Anything that makes test code cleaner, with the ability to handle
fixtures cleanly, annotate dependencies between tests, and parallelize
execution effectively is fine in my book. :)
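As a sketch of what "handling fixtures cleanly" could look like with testtools plus its companion fixtures library (the TempDir/EnvironmentVariable fixtures below are only illustrative, not anything agreed for Tempest):

    # Illustrative only: expressing per-test resources with the `fixtures`
    # library instead of nose's setup_module/teardown_module hooks.
    # useFixture() performs setUp and schedules cleanUp automatically.

    import os

    import fixtures
    import testtools


    class ApiSmokeTest(testtools.TestCase):

        def setUp(self):
            super(ApiSmokeTest, self).setUp()
            self.workdir = self.useFixture(fixtures.TempDir()).path
            self.useFixture(fixtures.EnvironmentVariable('OS_DEBUG', '1'))

        def test_workdir_exists(self):
            self.assertTrue(os.path.isdir(self.workdir))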

* We need to produce good, updated documentation on what different
categories of tests are -- smoke, fuzz, positive/negative, etc -- and
put this up on the wiki
Perhaps a naming convention for the test case names, such as
component_category_test_case?
For example:
compute_sanity_launchVM()
compute_negative_launchVM()
No, we use decorators to annotate different types of tests (grep for
@attr\(type= in Tempest). What we don't have is good basic documentation
on what we agree is an acceptance/smoke test and what isn't ;)
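For instance (a stripped-down sketch, not an actual Tempest test; today the decorator comes from nose's attrib plugin):

    # Stripped-down sketch of the categorisation decorators; `attr` comes
    # from nose's attrib plugin and simply tags the test method.

    from nose.plugins.attrib import attr
    import testtools


    class ServersSmokeTest(testtools.TestCase):

        @attr(type='smoke')
        def test_list_servers(self):
            # would call the compute API and assert a successful listing
            pass

        @attr(type='negative')
        def test_get_missing_server(self):
            # would assert a 404 for an unknown server id
            pass

Categories can then be selected at run time, e.g. nosetests -a type=smoke.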

* We need to produce a template example test case for Tempest that
provides excellent code examples of how to create different tests in a
best practice way -- I believe David Kranz is going to work on this first?
* We need to get traceability matrixes done for the public APIs -- this
involves making a wiki page (or even something generated) that lists the
API calls and variations of those calls and whether or not they are
tested in Tempest
Wouldn't a code coverage report be better?
If you call FunctionX(param1, param2), and you've called it with two
different param1 values and always the default for param2, what does
that mean from a coverage perspective?
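To make that concern concrete (a throwaway example, nothing to do with any real API):

    # Throwaway illustration: both branches of function_x get executed, so
    # a line/branch coverage report shows 100% -- yet param2 has only ever
    # been seen with its default value.

    def function_x(param1, param2='default'):
        if param1 > 0:
            return '%s/%s' % (param1, param2)
        return param2


    function_x(1)    # covers the first branch
    function_x(-1)   # covers the second branch; param2 still never varied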
No, unfortunately code coverage doesn't work for functional integration
tests in the same way it does for unit tests, for a number of reasons:

1) Tempest executes a series of HTTP calls against public REST endpoints
in OpenStack. It has no way of determining what code was run **on the
server**. It only knows what Tempest itself executed, not what
percentage of the total API those calls represented.

But you can get the number of lines covered (vs. the total number of lines in the component).
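One hedged way to get that number would be to run the service itself under coverage.py while Tempest drives its public API; everything below is a placeholder, not a tested recipe, and coverage.Coverage assumes a recent coverage.py release:

    # Placeholder sketch: collect server-side line coverage for a component
    # while Tempest exercises its public API. `nova` and run_api_server()
    # are stand-ins for the real service.

    import coverage


    def run_api_server():
        """Stand-in for the real service entry point (e.g. the API daemon)."""
        pass


    cov = coverage.Coverage(source=['nova'])
    cov.start()
    try:
        run_api_server()
    finally:
        cov.stop()
        cov.save()  # inspect afterwards with `coverage report -m`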


2) Specs don't always exist for the APIs -- yes, I know, this isn't
good. Especially problematic are some of the Compute API extensions that
aren't documented well, or at all.

Which is why you need to look at the code, see what is missed, and
'reverse-engineer' it into a test case.

But I guess it doesn't have to be either-or and could be both eventually - I don't have a lot of faith in a wiki that needs updating, that's all.
Y.


Best,
-jay

Thanks,
Y.

* I will start the traceability matrix stuff and publish for people to
go and update
* Antoni from HP is going to investigate using things in testtools and
testrepository for handling module and package-level fixtures and
removing some of the nose-based cruft
* I will be working on the YAML file that describes a test environment
so that different teams that use different deployment frameworks can use
a commonly-agreed format to describe their envs to the CI system
* A new member of Gigi's team (really sorry, I've forgotten your name!
:( ) is going to look at the fuzz testing discussion from earlier and
see about prototyping something that could be used for negative and
security/fuzz testing -- this would let us remove the static negative
tests from Tempest's main test directories; a very rough sketch of the
idea follows this list. For the record (and for the team member whose
name I have forgotten), here are the relevant links:
https://lists.launchpad.net/openstack-qa-team/msg00155.html and
https://lists.launchpad.net/openstack-qa-team/msg00156.html
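A very rough sketch of that generated-negative-test idea (the bad-input lists and the create_server() client call are hypothetical placeholders, not anything that exists in Tempest):

    # Hypothetical sketch only: generate negative cases for one API call
    # from lists of malformed inputs, instead of keeping hand-written
    # static negative tests in the main test tree.

    import itertools

    BAD_NAMES = ['', ' ', 'x' * 256, '<script>alert(1)</script>']
    BAD_FLAVOR_REFS = [None, -1, 'not-a-flavor']


    def generate_negative_cases():
        """Yield (name, flavor_ref) pairs a server create should reject."""
        return itertools.product(BAD_NAMES, BAD_FLAVOR_REFS)


    def assert_create_server_rejected(client, name, flavor_ref):
        """Expect a 4xx status from a placeholder create_server() call."""
        resp = client.create_server(name=name, flavor_ref=flavor_ref)
        assert 400 <= resp.status < 500, 'unexpected status %s' % resp.status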

Best,
-jay

On 10/19/2012 02:47 PM, Sean Gallagher wrote:
Daryl,
I wasn't able to make the follow-up meeting. :/

Can you/ someone send out a recap?

David: you had a list of items. Can you post or share that somewhere?

We discussed Blueprints for managing some of the planning work.

What about higher level planning docs? Useful? Do we have any? Should we?

Re Google Hangout next week, I'm interested.

-sean


