launchpad-dev team mailing list archive

Re: Experiment proposal: Optional Reviews

 

I didn't want to get into itty-bitty side threads given the, shall we
say, explosive nature of the proposal :), so I've kind of let this
percolate... hope that's ok.

I want to be clear that I value code review; I think it's a very
beneficial thing to do most of the time. However, the depth of concern
about making it optional is quite illuminating; it suggests several
things simultaneously:
 - there is a commonly held perception that many problematic changes
are only reviewed because they *have to be* - that folk would avoid
getting /problem branches/ reviewed
 - and that there is a strong feeling that the current code review
system prevents a lot of 'bad stuff' / adds real value to the system
 - and that an optional review process would somehow reduce the value
of the review system itself.

I'm not sure whether those things are true or false, but I don't
subscribe to them:
 - I don't believe responsible adults who value code review will
choose to bypass the code review process unless they really don't
think a review will help the team overall
 - I don't think the current review system reduces our defect rate
substantially (based on the very real defects we keep adding to the
codebase)
 - I think an optional review process will strengthen the review
system by challenging it to add value and inviting us to find better
ways to improve the system, the team's knowledge and skills, and our
software structure.


There are some details about the experiment to drill into though:


Specific proposed changes to the experiment:
 - allow anyone to have optional reviews
   This could be interesting, but I think it takes a while to gain enough
   knowledge of the system to know just how much you *don't* know.
   Maybe in a future iteration.
 - allow reviewers to have optional reviews
   I proposed 3 months because I think in that time period one will
   learn enough to know when one knows and when one doesn't: that's the
   key metric that matters. I think that someone who isn't a reviewer but
   has been landing regular branches for 3 months is certainly past this
   level of knowledge. I'd *like* us to use 3 months, but if it makes the
   difference between doing/not doing the experiment, then 'is a full
   reviewer' would be tolerable.
 - length
  until the epic sounds fine
 - post-landing evaluations per month
   I suggested 1 per reviewer per month on the basis that:
   - there are ~20 active reviewers/devs
   - we land 200 branches a month (on devel)
   - we'd need a 10% unreviewed rate to even reach one post-landing
     review per month per reviewer.
   So I suggest staying with 1 per month: even if the unreviewed rate
   rose to 30%, reviewing 1 branch per reviewer per month would cover
   1/3 of the unreviewed branches - more than enough to form an
   assessment. (Rough arithmetic sketched below.)
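
A minimal sketch of that arithmetic in Python, using the approximate
figures above (~20 reviewers, 200 branches a month) rather than
measured data:

  # Rough arithmetic behind "1 post-landing review per reviewer per month".
  reviewers = 20            # ~20 active reviewers/devs (approximation)
  branches_per_month = 200  # branches landing on devel per month (approximation)

  for unreviewed_rate in (0.10, 0.30):
      unreviewed = branches_per_month * unreviewed_rate
      # Fraction of unreviewed branches covered if each reviewer does
      # one post-landing review per month.
      coverage = min(1.0, reviewers / unreviewed)
      print("%d%% unreviewed -> %d branches; 1 review/reviewer/month "
            "covers %d%% of them"
            % (unreviewed_rate * 100, unreviewed, coverage * 100))

  # 10% unreviewed -> 20 branches; 1 review/reviewer/month covers 100% of them
  # 30% unreviewed -> 60 branches; 1 review/reviewer/month covers 33% of them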

I do like the idea of catching problems early, but let's do that
explicitly: I will review *all* unreviewed landings (other than ones I
do) in the two weeks, and if I land anything unreviewed, someone else
- say jml - can review those.
 - javascript reviews
  - sure, let's leave them out of scope

-Rob


