I'm certainly all for anything that makes things easier. However, I do want to make sure that if we migrate runners, the new implementation solves all the issues we're trying to address.
Daryl
________________________________________
From: openstack-qa-team-bounces+daryl.walleck=rackspace.com@xxxxxxxxxxxxxxxxxxx [openstack-qa-team-bounces+daryl.walleck=rackspace.com@xxxxxxxxxxxxxxxxxxx] on behalf of David Kranz [david.kranz@xxxxxxxxxx]
Sent: Thursday, September 27, 2012 8:11 PM
To: openstack-qa-team@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Openstack-qa-team] PyVows proof of concept
We discussed this a bit at the meeting today. Monty has proposed a
session on the QA track about parallelizing some of the CI stuff. He
believes tempest could share the parallelization code. See
http://summit.openstack.org/cfp/details/69.
Parallelizing the tempest gate job is as much a CI issue as a tempest
issue, and working with them and their proposal could make things much
easier for us, IMO.
-David
On 9/27/2012 8:10 PM, Daryl Walleck wrote:
I agree about the issue with the output from generated tests. That is troublesome, but from what I've seen in the source code it's probably something that could be remedied. PyVows is also very aggressive in its parallel execution, which is fine client-side but can overwhelm a test environment, since there's no configuration to throttle the number of tests being executed at a time. Unfortunately I haven't seen a Python test runner that meets all the criteria I'd like to have, hence this and the other little proofs of concept I've been tossing around to see whether any better approaches are out there.
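A minimal sketch of the kind of throttling I'd want from a runner, using a bounded worker pool; the names here (run_one, tests) are placeholders of mine, not pyvows configuration:

from concurrent.futures import ThreadPoolExecutor

MAX_IN_FLIGHT = 4  # cap on how many tests hit the environment at once

def run_one(test):
    # Placeholder: invoke a single test callable against the environment.
    return test()

def run_all(tests):
    # The pool keeps at most MAX_IN_FLIGHT tests running concurrently,
    # so the test environment never sees the whole suite at once.
    with ThreadPoolExecutor(max_workers=MAX_IN_FLIGHT) as pool:
        return list(pool.map(run_one, tests))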
Daryl
________________________________________
From: openstack-qa-team-bounces+daryl.walleck=rackspace.com@xxxxxxxxxxxxxxxxxxx [openstack-qa-team-bounces+daryl.walleck=rackspace.com@xxxxxxxxxxxxxxxxxxx] on behalf of Jaroslav Henner [jhenner@xxxxxxxxxx]
Sent: Monday, September 24, 2012 7:28 AM
To: openstack-qa-team@xxxxxxxxxxxxxxxxxxx
Subject: [Openstack-qa-team] PyVows proof of concept
In reply to:
https://lists.launchpad.net/openstack-qa-team/msg00236.html, which
didn't come to my mailbox for some reason (the attachment, maybe?).
I tried pyVows myself. I kinda liked the concept, but I didn't like the
way it reports to JUnit-format XML when using "generative testing":
http://heynemann.github.com/pyvows/#-using-generative-testing
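For reference, what I tried was roughly the pattern from the docs above; this is reconstructed from memory, so the exact class and assertion names may be off:

from pyvows import Vows, expect

@Vows.batch
class SomeMath(Vows.Context):
    class Add(Vows.Context):
        def topic(self):
            # generative testing: each vow below runs once per item
            return [1, 2, 3, 4, 5, 6]

        def should_be_numeric(self, topic):
            expect(topic).to_be_numeric()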
In Jenkins, it looked like:
Test Result : Add
-----------------
should_be_numeric 0 ms Passed
should_be_numeric 0 ms Passed
should_be_numeric 0 ms Passed
should_be_numeric 0 ms Passed
should_be_numeric 0 ms Passed
should_be_numeric 0 ms Passed
The parameters to the test method are important when using generative
testing, so I think they should be included in the name of the test. But
some funny characters like
()%* I don't remember which
cause problems in Jenkins. I was investigating similar problems months
ago with another testing framework, and I don't know how to address
them. If generative testing is needed, it may be worth considering
producing Robot Framework-style output, or using Robot Framework itself
with its Jenkins plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Robot+Framework+Plugin
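One workaround I have not tried: sanitize the parameter before putting it into the test name, so only Jenkins-safe characters end up in the JUnit XML. A rough sketch, not tied to any particular runner:

import re

def safe_test_name(base, param):
    # Keep letters, digits, dot, dash and underscore; replace anything
    # else so the generated test name stays safe for Jenkins.
    cleaned = re.sub(r'[^0-9A-Za-z._-]', '_', str(param))
    return '%s[%s]' % (base, cleaned)

# safe_test_name('should_be_numeric', 42) -> 'should_be_numeric[42]'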
J.H.
--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help : https://help.launchpad.net/ListHelp