Re: Thoughts on input fuzzing tests
On 06/12/2012 02:22 PM, Daryl Walleck wrote:
Due to the large number of input fuzzing tests that have been submitted, I've been thinking of ways to reduce the amount of code needed to achieve this (whether we should do it at all is a separate discussion). Rather than having x number of input tests for, say, create server, wouldn't it be far easier to have a single data-driven create server fuzz test that accepts the desired inputs and the expected exception? So instead of this (pseudo-coded a bit):
https://gist.github.com/2919066
we could get the same effect with much less code by doing this:
https://gist.github.com/2919177
Regardless of implementation, I think the general idea of moving this type of testing towards data-driven functions would really help cut down on redundant code.
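
[Editor's note: a minimal sketch of the data-driven pattern Daryl describes. The linked gists are not reproduced here, so the client class and exception below are hypothetical stand-ins, not Tempest's actual API.]

import unittest


class BadRequest(Exception):
    """Placeholder for the client exception raised on a 400 response."""


class FakeServersClient(object):
    """Stand-in for a Tempest servers client; create_server only validates input."""

    def create_server(self, name, image_ref, flavor_ref):
        if not name or not image_ref or not flavor_ref:
            raise BadRequest("invalid server parameters")
        return {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}


# Each case pairs the inputs to feed the call with the exception we expect.
FUZZ_CASES = [
    ({"name": "", "image_ref": "img-1", "flavor_ref": "1"}, BadRequest),
    ({"name": "srv", "image_ref": None, "flavor_ref": "1"}, BadRequest),
    ({"name": "srv", "image_ref": "img-1", "flavor_ref": ""}, BadRequest),
]


class CreateServerFuzzTest(unittest.TestCase):
    """One data-driven test method replaces many near-identical negative tests."""

    def test_create_server_rejects_bad_input(self):
        client = FakeServersClient()
        for kwargs, expected in FUZZ_CASES:
            with self.assertRaises(expected):
                client.create_server(**kwargs)


if __name__ == "__main__":
    unittest.main()

Adding a new negative case is then a one-line change to FUZZ_CASES rather than a new test method.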
I believe fuzz testing is better done using a tool like randgen [1].
The basic strategy is to have a grammar document that describes the API
and then have a fuzz tester hammer the API with random bad and good
data, recording responses.
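
[Editor's note: randgen's own grammar files target SQL workloads, so the following is not randgen syntax, just a toy Python analogue of the grammar-driven idea: rules map to alternatives, and recursive expansion yields request bodies mixing valid and invalid field values. All names here are illustrative.]

import random

# Each rule maps to a list of alternatives; an alternative is a sequence of
# symbols, and any symbol not defined in the grammar is emitted as literal text.
GRAMMAR = {
    "request": [['{"server": ', "server_body", "}"]],
    "server_body": [['{"name": ', "name", ', "imageRef": ', "ref", "}"]],
    "name": [['"srv-1"'], ['""'], ["null"], ["12345"]],
    "ref": [['"img-1"'], ['""'], ["null"], ["[]"]],
}


def expand(symbol):
    """Recursively expand a grammar symbol, picking alternatives at random."""
    if symbol not in GRAMMAR:
        return symbol
    return "".join(expand(part) for part in random.choice(GRAMMAR[symbol]))


if __name__ == "__main__":
    for _ in range(5):
        body = expand("request")
        # A real harness would POST each body to the API under test and
        # record the response code instead of just printing the payload.
        print(body)

One grammar like this can generate thousands of good/bad request permutations that would otherwise each need a hand-written test.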
I've cc'd Patrick Crews, who is an expert on randgen and also works on
the OpenStack CI team, to see if he'd be interested in participating in
putting together a randgen grammar for OpenStack components and working
on some fuzz testing stuff in Tempest...
The simple fact is that the more "negative" tests we add to Tempest's
test suite, the longer Tempest takes to run, and each added negative
test yields diminishing returns relative to the extra run time it
costs. I think a separate fuzz testing suite that uses grammar-based
tests will likely give us better negative API test coverage without
having to write a separate test method for every conceivable variation
of a bad request to an API.
Best,
-jay
[1] https://launchpad.net/randgen