maria-developers team mailing list archive
Re: [GSoC] Accepted student ready to work : )
On May 07, Pablo Estrada wrote:
> So here's what I'll do for the simulations:
> *1. Calculating the "relevancy index"* for a test, I have considered two
> simple options so far:
> - *Exponential decay*: The relevancy index of a test is the *sum over
> each failure* of *exp((FailureTime - CurrentTime)/DecayRate)*. It
> decreases exponentially as time passes, and increases if the test fails.
> - DecayRate is
> - i.e. If TestA failed at days 5 and 7, and now is day 9, RI will
> be (exp(5-9)+exp(7-9)) = (exp(-4)+exp(-2)).
> - The unit to measure time is just seconds in UNIX_TIMESTAMP
> - *Weighted moving average*: The relevancy index of a test is: *R[now] =
> R[now-1]*alpha + fail*(1-alpha)*, where fail is 1 if the test failed in
> this run, and 0 if it did not fail. The value is between 1 and 0. It
> decreases slowly if a test runs without failing, and it increases slowly if
> the test fails.
> - 0 < alpha < 1 (Initially set at 0.95 for testing).
> - i.e. If TestB failed for the first time in the last run, and again
> in this run: R[t] = 1*0.95 + 1*0.05 = 1
> - If TestB ran once more and did not fail, then: R[t+1] = 1*0.95 +
> 0*0.05 = 0.95
> - The *advantage of this method* is that it doesn't have to look at
> the whole history every time it's calculated (unlike the exponential decay
> method).
Actually, you don't need to look at the whole history for the exponential
decay either. Because it is
exp((FailureTime - CurrentTime)/DecayRate)
You simply have, for a single failure,
  R[t]   = exp(FailureTime/DecayRate) / exp(t/DecayRate)
  R[t+1] = R[t] / exp(1/DecayRate)    (if there was no failure at t+1)
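As a concrete sketch of the batch and incremental forms (the day-based
example values come from earlier in the thread; the function names are mine,
not actual project code):

```python
import math

def decay_index(failure_times, now, decay_rate=1.0):
    """Batch form: sum over each past failure of
    exp((failure_time - now) / decay_rate)."""
    return sum(math.exp((t - now) / decay_rate) for t in failure_times)

def step(ri, dt, failed, decay_rate=1.0):
    """Incremental form: decay the stored value by dt, then add
    exp(0) = 1 for a failure happening right now. No history needed."""
    ri *= math.exp(-dt / decay_rate)
    return ri + 1.0 if failed else ri

# TestA failed at days 5 and 7; now is day 9 (decay_rate = 1 day):
batch = decay_index([5, 7], now=9)   # exp(-4) + exp(-2)
ri = step(0.0, 5, True)              # failure at day 5
ri = step(ri, 2, True)               # failure at day 7
ri = step(ri, 2, False)              # evaluate at day 9, no new failure
# batch and ri agree to floating-point precision
```

The incremental form only needs the stored value and the time since the
last update, which is what makes the full-history scan unnecessary.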
> - Much like TCP protocol (http://www.cl.cam.ac.uk/~jac22/books/mm/book/node153.html)
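The weighted-moving-average update can be sketched as follows (alpha = 0.95
as above; the function name is illustrative):

```python
def ewma(r_prev, failed, alpha=0.95):
    """R[now] = R[now-1]*alpha + fail*(1-alpha), where fail is 1 on a
    failure and 0 otherwise; the result stays between 0 and 1."""
    return r_prev * alpha + (1.0 if failed else 0.0) * (1.0 - alpha)

# TestB at R = 1 fails again:           1*0.95 + 1*0.05 = 1.0
r = ewma(1.0, True)
# then runs once more without failing:  1*0.95 + 0*0.05 = 0.95
r = ewma(r, False)
```

This is the same smoothing TCP uses for its round-trip-time estimate,
as the link above notes.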
> Regarding the *relevancy index*, it can be calculated by grouping test
> results in many ways: *roughly*, using test_name+variation, or *more
> granularly*, by also *including* *branch* and *platform*. I'll add some
> thoughts regarding these options at the bottom of the email.
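The grouping granularity is essentially a choice of key for the stored
index; a minimal sketch (all names and fields here are hypothetical, not
the actual buildbot schema):

```python
# Two possible granularities for storing relevancy indexes: a coarse key
# (test name + variation) vs. a fine key that also splits by branch and
# platform. The fine key tracks environment-specific failures but spreads
# the failure history over many more buckets.
coarse_index = {}   # {(test_name, variation): RI}
fine_index = {}     # {(test_name, variation, branch, platform): RI}

def record_failure(index, key, weight=1.0):
    """Bump the relevancy index stored under the chosen grouping key."""
    index[key] = index.get(key, 0.0) + weight

record_failure(coarse_index, ("main.select", "ps"))
record_failure(fine_index, ("main.select", "ps", "10.0", "linux-x86_64"))
```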
I've tested these options earlier; you may want to try them all too and
see which one delivers the best results.
> *2. To run the simulation*, I'll gather data from the first few thousand
> test_run entries, and then start simulating results. Here's what I'll do:
> 1. *Gather data* from the first few thousand test_run entries (i.e. 4
> thousand).
> 2. After N thousand test_runs, I'll go through the test_run entries *one
> by one*, and using the data gathered to that point, I will select '*running
> sets*' of *100* *test suites* to run on each test_run entry. (The number
> can be adjusted)
Absolutely, it should be. This is the main parameter we can tune, after
all. The larger your running set is, the better the recall will be.
Maybe not now, but later it would be very useful to see these graphs:
recall as a function of the running set size. It's important to know
whether by increasing the running set by 10% we get a 1% recall increase
or a 70% recall increase (as you've seen, there's a region where recall
increases very fast as the running set grows).
> 3. If in this *test_run* entry, the list of *failed tests* contains
> tests that are *NOT part* of the *running set*, the failure will be
> ignored, and so the information of this failure will be lost (not used as
> part of the relevancy index). *(See Comment 2)*
> 4. If the set of *failed tests* in the *test_run* entry intersects with
> the *running set*, recall improves. This information will be
> used to continue calculating the *relevancy index*.
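The simulation loop in steps 2-4 could look roughly like this (the data
shapes, the feedback update, and the recall definition are my assumptions,
not the actual code):

```python
def simulate(test_runs, index, set_size=100):
    """Replay test_run entries in order. For each entry, the running set
    is the set_size tests with the highest relevancy index; failures
    outside the set are ignored and never feed back into the index."""
    caught = missed = 0
    for failed_tests in test_runs:             # each entry: set of failed tests
        ranked = sorted(index, key=index.get, reverse=True)
        running_set = set(ranked[:set_size])
        for test in failed_tests:
            if test in running_set:
                caught += 1
                # placeholder feedback: a real run would apply one of the
                # decay/EWMA updates discussed earlier in the thread
                index[test] = index.get(test, 0.0) + 1.0
            else:
                missed += 1                    # failure information is lost
    return caught / (caught + missed) if caught + missed else 1.0

# Example: with a 2-test running set, only failures of currently
# high-ranked tests are caught.
index = {"a": 2.0, "b": 1.0, "c": 0.5}
recall = simulate([{"a"}, {"c"}], index, set_size=2)
```

Sweeping set_size over a range and plotting the returned recall would
produce the recall-vs-running-set-size graphs discussed above.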
Could you explain the terminology you're using?
What is a "test suite" and what is a "test run"?
How will you calculate the "recall"?
> According to the results obtained from the simulations, we can adjust the
> algorithm (i.e. to consider the *relevancy index by* *platform* and
> *branch*, etc.).
> Comments about the *relevancy index:*
> - The methods to calculate the relevancy index are very simple. There
> are some other useful metrics that could be incorporated:
> - *Time since last run.* With the current methods, if a *test*
> completely *stops running*, it only *becomes less relevant with time*,
> and so even if it could expose defects, it doesn't get to run because
> its relevancy index just keeps going down. Incorporating a function
> that *increases the relevancy index* as the *time since the last run*
> *increases* can help solve this issue. I believe this measure will be
> useful.
Right. I will not comment on this measure now.
But I absolutely agree that this is an issue that must be solved.
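One simple shape such a function could take (entirely illustrative;
neither the formula nor the constant comes from the proposal):

```python
import math

def priority(ri, time_since_last_run, boost_rate=100.0):
    """Illustrative staleness boost: add a bonus that grows from 0
    toward 1 the longer the test has gone without running, so a test
    whose RI has decayed to ~0 eventually re-enters the running set."""
    return ri + 1.0 - math.exp(-time_since_last_run / boost_rate)

# A just-run test keeps its plain RI; a long-idle one gains up to +1.
```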
> - *Correlation between test failures*. If two tests tend to fail
> together, is it better to just run one of them? Incorporating this
> measure seems difficult, but it is on the table, in case we should
> consider it.
Agree. Taking correlations into account looks very promising, but it
does seem to be difficult :)
> - As you might have seen, I decided not to consider any data concerned
> with *code changes*. I'll work like this and see if the results are
> good enough.
> Comments regarding *buildbot infrastructure*:
> These comments are out of the scope of this project, but these would be
> very desirable features for the buildbot infrastructure.
> - Unfortunately, given the data available in the database, it is NOT
> possible to know *which tests ran* in each *test_run*. This information
> would be very useful, as it would help estimate the *exact failure rate*
> of a test. I didn't look into the code, but it seems that most of the
> infrastructure necessary is already in place; it would just take one or
> two more tables (*test_suite* and *test_suite_test_run*), some code, and
> we could start keeping track of this information.
> - Another problem with the data available in the database is that it is
> not possible to know *how many test suites exist*. It is only possible
> to estimate *how many different test suites have failed*. This would
> also be helpful information.
> - Actually, this information would be useful not only for this project,
> but in general for book-keeping of the development of MariaDB.