
testrepository-dev team mailing list archive

Re: [Yellow] testrepository and test filter creation

 

On 04/29/2012 10:39 PM, Robert Collins wrote:
> So, I think testrepository's internals have gotten a little foggy
> recently, and I wanted to understand what we're trying to do.

Thank you for thinking about this, Robert.

I do appreciate that you are very busy. That said, from my perspective, your timing, combined with what I understand to be your conclusion to discard large chunks of the work done over the past two weeks, is unfortunate and counterproductive.

As of this past Friday, the yellow squad has all the associated work from the past two weeks, combined with Jono's subunit-filter tag branch [1] and some small tweaks to get it to work [2], installed in our PPA and working nicely. We have integrated it into buildbot, and we have live test and failure counts, worker test logs, nicely formatted failure logs that use the Python failure format but include the worker tag information, and a clean subunit output. Practically speaking, the buildbot setup is now a much nicer tool, in particular for diagnosis.
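(For concreteness, the counting side of that is small. Here is a minimal sketch of how a consumer such as our buildbot step can replay the merged stream for counts, written against python-subunit's API as I understand it:)

import sys

import subunit
import testtools


def count_stream(stream):
    """Replay a subunit stream and return (tests run, failures, errors)."""
    result = testtools.TestResult()
    # ProtocolTestCase parses the subunit stream and replays each event
    # onto the result object, as if the tests had run locally.
    subunit.ProtocolTestCase(stream).run(result)
    return result.testsRun, len(result.failures), len(result.errors)


if __name__ == '__main__':
    run, failed, errored = count_stream(sys.stdin)
    print('%d tests, %d failures, %d errors' % (run, failed, errored))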

We've greatly appreciated Jono's very active help to get this working.

We will proceed with our working code over the short term, and possibly the medium term. I'll participate in this thread to make sure we are on the same page in terms of goals, but I do not currently intend to spend more of the squad's time reworking a job that has already taken more time than I would like.

> The
> attached image (please excuse the grot on it - it turns out my camera
> finds every speck of whiteboard marker ever used on my whiteboard, and
> puts it into the image - I cleaned it up as much as I had patience to
> do, before compressing it to a usable size) lays this out.
>
> In English:
>   - we want to combine all the worker threads,

(with the worker tags inserted, as they are now in trunk)
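As an aside, the per-worker tagging itself is tiny with python-subunit's tag_stream. A sketch, with a hypothetical per-worker file name:

import sys

import subunit

# Tag every event in worker 3's stream with "worker-3" before the
# streams are combined, so failures remain attributable afterwards.
with open('worker-3.subunit', 'rb') as worker_stream:
    subunit.tag_stream(worker_stream, sys.stdout, ['worker-3'])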

> then split the resulting
> stream before any filtering takes place. The repository wants a copy,
> which it will store verbatim. This permits diagnostics etc to be done
> on nearly-raw data.
>   - the UI wants the rest of it, but before the UI sees it we want to
> do some global processing on it and strip out the 'layer X started'
> tests (probably IFF they were pass / skips).

Stripping out the successful zope:layer-tagged faux tests is fine but unnecessary from our buildbot perspective. subunit-filter does a good job of this sort of thing, thanks to jml's branch [1], and knowledge of "zope:layer" seems to me to belong in user configuration/command-line space. This is how we have it configured now.

Stripping out the non-successful zope:layer tests would be a step backwards. We should have access to a subunit stream that includes them.
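To be sure we mean the same thing by the split described above: a rough sketch, with one verbatim copy of the combined stream (what the repository would store) and one copy for further processing, assuming testtools' MultiTestResult for the fan-out and a hypothetical file name for the stored copy:

import sys

import subunit
import testtools

verbatim = open('repository-copy.subunit', 'wb')
# TestProtocolClient re-emits every event it receives as subunit, so
# this branch of the split stores the stream essentially as-is.
store = subunit.TestProtocolClient(verbatim)
# The other branch is whatever downstream processing the UI wants;
# a plain TestResult stands in for it here.
ui = testtools.TestResult()

# MultiTestResult fans each test event out to both branches.
splitter = testtools.MultiTestResult(store, ui)
subunit.ProtocolTestCase(sys.stdin).run(splitter)
verbatim.close()

print('%d tests seen, %d failures' % (ui.testsRun, len(ui.failures)))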

> LP doesn't want them
> counted towards test runs, and other projects may have similar needs.

In buildbot, our counts are done from the filtered subunit stream, and we get what we want.

On the command line, yes, we asked in January for testrepository to give us counts that ignore zope:layer tests. We don't encounter this much anymore, because we run all of our parallel tests in buildbot, but I do still feel that it will unnecessarily confuse developers to see their test counts vary seemingly at random across runs.

I do think that such a feature should affect counts only: the subunit stream should include layer failures, and, when they are included, they should not affect the count.
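In sketch form, "affect counts only" could be a result that forwards everything but leaves zope:layer-tagged tests out of the count. (The tag tracking here is hand-rolled and simplified: it glosses over per-test tags, and the tag-aware hooks vary across subunit/testtools versions.)

import sys

import subunit
import testtools


class LayerAwareCount(testtools.TestResult):
    """Count tests, leaving zope:layer setup pseudo-tests out."""

    def __init__(self):
        super(LayerAwareCount, self).__init__()
        self.current_tags = set()
        self.counted = 0

    def tags(self, new_tags, gone_tags):
        # subunit reports tag changes as (added, removed) sets.
        self.current_tags.update(new_tags)
        self.current_tags.difference_update(gone_tags)

    def stopTest(self, test):
        if 'zope:layer' not in self.current_tags:
            self.counted += 1
        super(LayerAwareCount, self).stopTest(test)


if __name__ == '__main__':
    result = LayerAwareCount()
    subunit.ProtocolTestCase(sys.stdin).run(result)
    # Failures, including zope:layer ones, are still all here.
    print('%d tests counted, %d failures' % (
        result.counted, len(result.failures)))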

> All UIs should have this done to the stream, so it's not a UI-specific
> thing.
>   - once the CLI UI has it, it splits it again, to
>    * count the tests (for the summary)
>    * time the run (for the summary) - but this perhaps comes from the
> repository by querying the last run, so arguably ignorable. Note that
> this should not be 'sum of test times' but 'duration in stream' or
> something similar (because with two streams we want the wall time, not
> the sum of time per CPU).

These are important to the developer story, but not to the CI story, at least as we have configured it.
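The wall-time point is worth making concrete, though. A toy illustration with made-up timestamps for two workers:

from datetime import datetime, timedelta

t0 = datetime(2012, 4, 30, 12, 0, 0)
# Hypothetical (start, stop) pairs for tests run on two parallel workers.
events = [
    (t0, t0 + timedelta(seconds=40)),                          # worker 1
    (t0, t0 + timedelta(seconds=35)),                          # worker 2
    (t0 + timedelta(seconds=40), t0 + timedelta(seconds=70)),  # worker 1
]

cpu_time = sum((stop - start).total_seconds() for start, stop in events)
wall_time = (max(stop for _, stop in events) -
             min(start for start, _ in events)).total_seconds()

print('sum of test times: %.0fs' % cpu_time)    # 105s -- misleading
print('duration in stream: %.0fs' % wall_time)  # 70s -- what we waited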

> * and we want to send the stream to stdout as subunit when --subunit is given

Yes.  This is critical to us.

>    * or pretty print it (when --full-results is given)

That's not what --full-results means in the trunk-based branches we've been using.

In trunk, the default case is pretty printing; passing "--subunit" asks for subunit output instead.

As a user, this seems like the right distinction to me. testrepository devs can decide what they want, and we will eventually adjust if necessary.

In trunk, testrepository only shows failures by default. "--full-results" means "show successes too", like "-s"/"--success" in subunit-filter. AIUI it is supposed to affect pretty printing and --subunit output the same way.

I think trunk's arrangement makes more sense, though perhaps "--full-results" ought to be renamed to "--success", to match subunit-filter, if they are truly equivalent.
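If it helps, the equivalence is a single flag at the subunit level. A sketch, assuming subunit's TestResultFilter drops successes by default, as I remember it doing:

import sys

import subunit
from subunit.test_results import TestResultFilter

# Re-emit the (possibly filtered) stream as subunit on stdout.
output = subunit.TestProtocolClient(sys.stdout)

# Default: drop successes, i.e. show failures only. With a
# "--full-results" style switch, keep the successes too.
full_results = '--full-results' in sys.argv
filtered = TestResultFilter(output, filter_success=not full_results)

subunit.ProtocolTestCase(sys.stdin).run(filtered)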

>    * or filter it and then pretty print it (default) - note that there
> is a missing box/arrow for this case; I've drawn it in here, you can
> imagine it.
>
> If I've got things wrong, please let me know, otherwise this should
> provide guidance for where/how to plug things in, and what structure
> to be aiming for in the code.

I think that if we have a --subunit flag, as you've described it, that at least includes zope:layer-tagged output for layer failures and errors, then we will be able to work with the end result, whatever it is. We'll leave it to the testrepository devs to hash out the rest of the particulars.

Gary

[1] https://code.launchpad.net/+branch/~jml/subunit/filter-tags
[2] https://code.launchpad.net/~yellow/subunit/test-count/+merge/103717, which we could replace with https://code.launchpad.net/~yellow/subunit/on_filter but won't bother, given the shifting circumstances.

