Message #03407
Re: Nova D3 Milestone and the skipped tests
On Tue, Aug 2, 2011 at 6:40 AM, Soren Hansen <soren@xxxxxxxxxxx> wrote:
> 2011/7/27 Trey Morris <trey.morris@xxxxxxxxxxxxx>:
> > I'm fairly certain that in this particular case, without support from the
> > hypervisor and api lieutenants, and a testing guru or two, I would
> > still be trying to fit all the pieces together.
>
> On any reasonably large software project, sometimes making changes to
> the code isn't just about making changes to the code. Sometimes it
> involves pestering people to help you out if there are things you
> can't work out yourself.
>
But at what cost? How long should we delay progress in one area, in this
case networking, to make sure *everything* works, even when it doesn't
necessarily need to right now given that we're nowhere near a release? We
need to be able to develop in independent areas without having to worry
about the entire codebase.
> > Merging with broken pieces in this case was necessary.
>
> I absolutely, unequivocally disagree.
>
Preposterous!
>
> > Some more thoughts to fuel discussion...
> > 2) shims:
> > Sandy would say we could use shims here, wait for the other parts to be
> > merged, then remove the shims in such a way that nothing is ever broken.
> > I think a procedure like this works in the vast majority of cases, and
> > makes for smaller merges with trunk. Certainly we don't want shims in
> > place any more than we want skipped tests, so how do we get the pieces
> > updated? Who is going to do the updating?
>
> Maybe I'm not completely clear on what you mean by "shims". Do you
> mean wrappers for backwards compatibility?
>
Basically, yeah, but they don't always need to be wrappers. Another example
would be structure added to a model that presents a changed underlying db
table in the same manner as it was presented before the change, allowing
someone to change a db table without having to update all of the code that
refers to it at the same time. Bits and pieces could be merged over time,
and once they are all merged, the model structure (the shim) is removed, as
in the sketch below.
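To make the idea concrete, here's a minimal sketch of that kind of model
shim, assuming a SQLAlchemy declarative model; the table, class, and column
names are made up purely for illustration:

# Hypothetical example: the "fixed_ips" table's "instance" column has been
# renamed to "instance_id".  The property below keeps presenting the old
# attribute name on top of the new column, so code that still reads or
# writes fixed_ip.instance keeps working.  Once every caller has been
# updated, the property (the shim) is deleted.

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class FixedIp(Base):
    __tablename__ = 'fixed_ips'

    id = Column(Integer, primary_key=True)
    address = Column(String(255))
    instance_id = Column(Integer)  # new column name after the schema change

    # --- shim: old attribute name backed by the new column ---
    @property
    def instance(self):
        return self.instance_id

    @instance.setter
    def instance(self, value):
        self.instance_id = value

The schema change and the callers can then land in separate merges, and the
last merge simply deletes the property.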
>
> > 3) skipped tests:
> > Very visible. Everyone sees them. But very bad habits can be formed
> > around using them.
>
> a) Not everyone sees them. I rarely sit around and stare at the test
> suite run. If it fails, yes, I go back and look, but if it all passes,
> I don't care much about its output. This is, in fact, how I discovered
> this: I made a change to the test suite that I *knew* should make it
> fail, but it didn't. *Then* I went and looked and saw all of these.
Successful test run output:
"----------------------------------------------------------------------
Ran 1049 tests in 206.878s
OK (SKIP=44)"
Even if everything passes, you see the "SKIP=44" at the end. Either way, at
this point the cat is over the wall, so let's discuss a remedy rather than
the original reasons for taking this action. How should we handle situations
like this in the future?
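For reference, here's a sketch of what one of those skipped tests looks
like under a nose-style runner such as the one that produced the summary
above; the test class, test name, and message are made up for illustration:

# Hypothetical illustration: raising SkipTest keeps the run green, but the
# code path goes completely unexercised and only surfaces as part of the
# "SKIP=44" count in the summary.

import unittest

from nose.plugins.skip import SkipTest


class LibvirtNetworkInfoTestCase(unittest.TestCase):
    def test_spawn_with_new_network_info(self):
        # Temporarily disabled while the rest of the refactor lands.
        raise SkipTest("driver not yet updated for the network refactor")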
> b) I never for a second assumed that a skipped test meant that it was
> my problem to fix it. If someone disables a test, I assume they're
> working around the clock(!) to fix the problem.
>
If tests are skipped in an area of the code that you are responsible for,
find out why and remedy the problem. This was the original purpose of the
lieutenants: at the time there was no one to communicate these problems to,
no one responsible for the work.
> > 4) dropped support:
> > We've decided to add feature X to Nova. Developer A determines adding X
> > requires nontrivial changes to hypervisors Q W E and R (and yes, soren,
> > their tests). Developer A, being familiar with Q, updates Q (and tests)
> > to work with Nova+X; however, W E and R are still broken. If requests
> > for option 1 above fail and cooperation doesn't seem to happen and
> > things drag out, we should remove the shims, merge, and drop support for
> > W E and R from Nova until updated code (with tests) for W E and R is
> > proposed for merging into trunk.
>
> I'm perfectly fine with this. If no-one wants to maintain a particular
> hypervisor, it gets dropped. This is perfectly reasonable.
>
So is this our new route? When code needs to hit trunk and would break an
API, a hypervisor, a network plugin, etc., do we immediately drop support?
Should we wait a milestone first? What are the criteria for becoming
supported again? I'm fine with this route over skipped tests as well; we
just need to make sure we've covered the particulars and tried options 1
and 2 first.
-trey