
yade-dev team mailing list archive

Re: stability & compatibility between newer and older versions of yade


Bruno Chareyre said:     (by the date of Sun, 6 Jan 2019 17:08:12 +0100)

> Thanks for raising this major issue. It would be great to populate unit
> tests indeed.
> I recently
> https://github.com/yade/trunk/commit/b5fbefc6463294f580296cb5727dbbfd733fa8a0
> introduced a regression test for the utils module and I would like
> to advertise it here. It is testing only one function from utils
> currently.

Very interesting, thank you!
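For readers following along, the shape of such a regression test —
sketched here in Python with unittest; the name testUserCreatedInteraction
comes from the commit above, while the function body and assertions are
invented stand-ins — is roughly:

```python
import unittest

# Invented stand-in for the real utils function; the real test calls
# into yade's utils module instead.
def create_interaction(id1, id2):
    return {"id1": id1, "id2": id2, "isReal": True}

class TestUtils(unittest.TestCase):
    def testUserCreatedInteraction(self):
        # build the objects, call the function, assert on the result
        i = create_interaction(0, 1)
        self.assertEqual((i["id1"], i["id2"]), (0, 1))
        self.assertTrue(i["isReal"])

if __name__ == "__main__":
    unittest.main(exit=False)
```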

I have in mind writing testing code in C++, so that's a little different.

> It needs volunteers to expand it (which can be done simply by reproducing
> the logic of testUserCreatedInteraction() in more functions). If nobody is
> going to add tests systematically - the ideal case - I would suggest at
> least that:
> *when a bug is fixed a unit test is added simultaneously*.
> Fixing a bug usually gives a fresh vision of the behavior, which makes
> writing a unit test easier.
> Ultimately we could even collectively agree that a bug is not fixed if
> there is no test proving it.
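In practice the convention could be as light as one named test per
fixed bug — a sketch, with an invented function and an invented bug
number:

```python
# Sketch of the "no fix without a test" convention: each fixed bug
# gets a test reproducing it. Function and bug number are made up.
def clamp(value, lo, hi):
    # fixed version; the buggy code silently returned `lo` when
    # lo > hi instead of raising
    if lo > hi:
        raise ValueError("empty interval")
    return max(lo, min(value, hi))

def test_bug_1234_clamp_rejects_empty_interval():
    try:
        clamp(0.5, 1.0, 0.0)
    except ValueError:
        return True            # bug stays fixed
    raise AssertionError("bug #1234 regressed")
```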

I would go a bit further: assume that the current yade version is the
reference version. Then I would add a test to every C++ function and
class method, and store the results of those tests as reference
results. When a bug is found, the reference result would change. Or
worse: it would turn out that although the function is tested, the
test did not catch the bug. That would be a great incentive to add a
test case covering the input where the bug appeared.

This would also make things much easier for others: the code testing
the function is already written, so only another set of input
parameters has to be added to cover the case where the bug appeared.
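A table-driven test makes that especially cheap: the test body stays
fixed and only the list of reference inputs grows. A minimal sketch
(the function and values are invented):

```python
import unittest

def norm(x, y):
    # invented function under test
    return (x * x + y * y) ** 0.5

# reference results; covering a new bug means appending one row
CASES = [
    ((3.0, 4.0), 5.0),
    ((0.0, 0.0), 0.0),
    ((-3.0, 4.0), 5.0),  # row added when a sign-handling bug appeared
]

class TestNorm(unittest.TestCase):
    def test_reference_cases(self):
        for (x, y), expected in CASES:
            with self.subTest(x=x, y=y):
                self.assertAlmostEqual(norm(x, y), expected)
```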

Yes, I know that it is a crazy amount of work.

> About 1: would be great provided that it doesn't end up simply removing
> examples which do not work.

Definitely not. The main goal is to really fix all examples. This is
a perfect opportunity for me to see the latest additions to yade! :)

> Classifying examples is also an important point and I would discourage the
> previous approach of moving failing scripts to a special
> "examples/not-working" folder since it breaks the classification in
> subfolders. Better rename them (something like *.py.fail) while keeping
> them in their original location.
> It is less clear if/how you intend to implement the "all examples must
> work" policy. It is difficult to automatize testing of examples since they
> are very heterogeneous. For instance some examples don't have a O.run() as
> user is supposed to click "play" instead.
> If the error happens after playing the error will not be detected. I
> suspect many other special situations like this one.

Maybe I would be able to implement this idea in the following manner:
run yade on each example with an extra flag such as --test-example.
This flag would mean that O.run() is invoked in any case; if the user
is supposed to click "play", yade does it instead. Some parts of
examples, such as interaction with the GUI, would be untestable; in
such cases a dummy function would be called instead (the point is that
example.py need not be modified, the --test-example flag should take
care of that). If an example produces an output file, that is checked
too. I am not sure how it will turn out; that's just a general idea.

> About 2. I support the idea of investigating new techniques yet I don't
> understand the suggestion very well. My impression is that all plugins are
> already eligible for unit tests. For instance, testing a function from
> utils in [1] did not need any change to the utils module itself. All it
> needs is to effectively design and write the unit tests for each other
> function of each other class/module. That's indeed hundreds - if not
> thousands - of tests.

Well, time for me to learn what boost::unit_tests has to offer ;)

The general idea within the framework is that it would be able to
print a list of all publicly accessible C++ methods (not necessarily
all of them exported to python) which do not have an accompanying
test.

I don't know how to achieve this now. That's just an idea.

Then, using that list, we would know the test coverage ;) If this
list someday became empty, we could say with confidence that we have
100% test coverage. If someone wrote a new public function in some
class, then even without exporting it to python it would be caught,
and a warning would be printed that it has no accompanying test.
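Whatever mechanism extracts the method list, the coverage report
itself is just a set difference — a sketch over invented names:

```python
# Given the set of public methods (however obtained) and the set of
# methods that appear in test code, report what is untested.
# All names below are made up for illustration.
public_methods = {
    "Shop::getSpheresVolume",
    "Shop::setContactFriction",
    "Tesselation::move",
}

tested_methods = {
    "Shop::getSpheresVolume",
}

def untested(public, tested):
    # the warning list; an empty result means 100% coverage
    # in this sense
    return sorted(public - tested)
```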

Your _Tesselation<TT>::VertexHandle _Tesselation<TT>::move(…)
should be caught automatically.

I hope that it is possible. Maybe a slight modification to
YADE_PLUGIN or a similar macro would be enough? I don't know yet.
Or maybe use some code for reading the library objects: it would go
through all functions inside the binary library file and attempt to
test them inside a try{}catch{}. I know that it is possible to read
library symbols; I need to check how to do that.
In that case each instance of _Tesselation<TT>::move(…) for every TT
that ended up in the library file would be caught.
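For the symbol-reading route, matching every instantiation in
demangled output (as produced by e.g. nm -C) is a small regex
exercise — the sample output below is fabricated for illustration:

```python
import re

# Fabricated fragment of `nm -C`-style demangled output; real input
# would come from the compiled library.
nm_output = """\
0000000000401000 T CGT::_Tesselation<CGT::SimpleTriangulationTypes>::move(int, CGT::Point const&)
0000000000402000 T CGT::_Tesselation<CGT::FlowTriangulationTypes>::move(int, CGT::Point const&)
0000000000403000 T Shop::getSpheresVolume(int)
"""

def instantiations(symbols, method):
    # collect the template argument of every instantiation whose
    # method name matches
    pattern = re.compile(r"_Tesselation<([^>]+)>::" + re.escape(method))
    return [m.group(1) for line in symbols.splitlines()
            for m in [pattern.search(line)] if m]
```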

Janek Kozicki
