
openstack-qa-team team mailing list archive

Re: Thoughts on input fuzzing tests


Hi folks

I'm definitely interested in this topic!

>> The simple fact is that the more "negative" tests we add to Tempest's
>> test suite, the longer Tempest is taking to run, and we are getting
>> diminishing returns in terms of the benefit of each added negative test
>> compared with the additional time to test. I think having a separate
>> fuzz testing suite that uses a grammar-based test will likely produce us
>> better negative API test coverage without having to write single test
>> methods for every conceivable variation of a bad request to an API.
>
>
> ++ to this.  In the database world, Microsoft's SQL server team threw in the
> towel on manual testing as a base - it is too expensive to generate and
> validate (and maintain) such tests...building a bank of combinations that
> can be automatically executed and validated is the way to go when we have so
> much ground to cover.

++

Negative test cases are needed to reach production quality, but they cost
a lot to write and maintain...
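
For example, I imagine a grammar-based generator could cover many
bad-request variations from one small table instead of one hand-written
test method per variation. Just a rough sketch in Python (not randgen and
not Tempest code; the request fields and the invalid values below are only
illustrative assumptions):

import itertools
import json

# Each field maps to a list of deliberately invalid values to try.
# The fields and values here are illustrative, not an exhaustive grammar.
BAD_VALUES = {
    "name": ["", "x" * 1024, None, 12345, "\x00"],
    "flavorRef": ["", "not-a-flavor", -1, None],
    "imageRef": ["", "0" * 64, None, {"nested": "dict"}],
}

def generate_bad_requests():
    """Yield malformed 'create server' bodies, one bad field at a time."""
    valid = {"name": "vm-1", "flavorRef": "1", "imageRef": "abc123"}
    for field, values in BAD_VALUES.items():
        for bad in values:
            body = dict(valid)
            body[field] = bad
            yield json.dumps({"server": body})

if __name__ == "__main__":
    # Print a few generated payloads as a smoke test.
    for payload in itertools.islice(generate_bad_requests(), 5):
        print(payload)

Each generated payload would then be sent to the API and checked only for
a 4xx response, so the expected result does not have to be spelled out by
hand for every case.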

Actually, we (NTT) submitted many negative test cases for Tempest.
We documented the internal state machine of Nova, and we tested every
pattern of state combinations.
It was pretty hard work...
If there is a more efficient way, I want to try it.
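
If we did it again, I would rather generate those combinations than write
them out by hand. A rough sketch of the idea in Python (the states, the
actions, and the expected-failure set below are illustrative placeholders,
not Nova's real state machine):

import itertools

# Illustrative VM states and actions; not the documented Nova state machine.
VM_STATES = ["ACTIVE", "BUILD", "PAUSED", "SUSPENDED", "ERROR", "DELETED"]
ACTIONS = ["reboot", "resize", "pause", "unpause", "suspend", "delete"]

# Combinations we would expect the API to reject (illustrative subset).
EXPECTED_FAILURES = {
    ("DELETED", "reboot"),
    ("PAUSED", "pause"),
    ("ACTIVE", "unpause"),
}

def enumerate_cases():
    """Yield (state, action, should_fail) for every state/action pair."""
    for state, action in itertools.product(VM_STATES, ACTIONS):
        yield state, action, (state, action) in EXPECTED_FAILURES

if __name__ == "__main__":
    for state, action, should_fail in enumerate_cases():
        print("%-10s %-8s expect %s" % (
            state, action, "failure" if should_fail else "success"))

That keeps the expected behaviour in one table and lets the test cases be
produced mechanically.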

> Looking forward to chatting with anyone who is interested in this.  Will be
> wrapping up some tasks this week and will be able to dig into this seriously
> next week.  However, anyone can ping me in IRC (pcrews)/ email if they'd
> like to discuss / explore this further.

> Thanks for bringing this up, Jay!
>
> Cheers,
> patrick
>
>>
>> Best,
>> -jay
>>
>> [1] https://launchpad.net/randgen
>>
>>

