
Re: Running tempest in parallel

 

Well, if by complete failure you mean not configured, you're correct. :-) Several fixes for the cleanup problem are in progress in parallel, and proper cleanup should be a coding standard going forward. The quota issue is a matter of configuring the Devstack environment, which raises the question of whether we should expect Tempest to pass out of the box with any Devstack configuration; that may not be realistic. I know there was an issue not too far back where some volume tests would not pass with the default Devstack config because there wasn't enough space allocated for volumes. However, I think we should be able to suggest some sensible defaults that would run in most "basic" environments.
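
To make that concrete, here's the sort of thing I have in mind; the values below are illustrative guesses rather than tested recommendations:

    # localrc (devstack): give the volume backing file more room than
    # the default, so the volume tests have space to work with
    VOLUME_BACKING_FILE_SIZE=10250M

    # nova.conf [DEFAULT]: raise the per-tenant quotas so a parallel
    # run doesn't exhaust them partway through
    quota_instances=50
    quota_cores=100
    quota_volumes=50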

 At the development conference we had a discussion about how best to proceed with parallel execution. The general consensus was:


  *   Switching to py.test isn't a popular idea
  *   Folks would rather improve nose's parallel capabilities than develop another solution (see the sketch after this list)
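
For reference, the nose piece we would be building on is its multiprocess plugin; a rough invocation (the worker count and timeout are just example values) looks like:

    # split the run across four worker processes; --process-timeout
    # reaps tests that hang in a worker
    nosetests --processes=4 --process-timeout=300 tempest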

I haven't tinkered with nose in the last few weeks, but I think it may be possible to run Tempest with py.test without any modifications. Even so, that wouldn't be a popular solution, so let's go back to the problem we are trying to solve.

I think we can all agree that we want a decisive set of smoke tests that runs in what is perceived to be a reasonable amount of time. So what is the bar we're trying to reach? My personal feeling is that the 10-15 minute range is more than reasonable using a full Linux distro, which means with CirrOS we should be able to squeak in under the 10 minute mark. What is everyone else's pain point for smoke test run times? I think getting a sense of that will make any further decisions a bit clearer.
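
For anyone who wants to try the py.test experiment themselves, I believe something along these lines would do it (untested; assumes the pytest-xdist plugin for the parallel workers):

    pip install pytest pytest-xdist
    # -n 4 distributes the tests across four local worker processes
    py.test -n 4 tempest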

Daryl

On May 29, 2012, at 5:16 PM, Jay Pipes wrote:

On 05/29/2012 04:33 PM, David Kranz wrote:
Can anyone say what the current status is of running in parallel reliably?

Sure. It doesn't work. :)

You first run into quota issues and then you run into resource cleanup issues. It's a complete failure...

-jay


