
yade-dev team mailing list archive

Re: TriaxialTest regression tests.

 

I had the same idea this weekend. Maybe commit the scripts and results in trunk/scripts/checks, or
upload them to the wiki, since they could serve as simulation examples at the same time?
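
Something like the sketch below maybe (just an idea to discuss: the file layout, names and
numbers are only placeholders, each author would plug in his own simulation and reference value):

#!/usr/bin/env python
# sketch of what a check script in trunk/scripts/checks could look like
# (nothing of this exists yet; names and numbers are placeholders)
import sys

referenceValue = 1.234e5  # result recorded with a known-good version (placeholder)
tolerance = 1e-3          # relative tolerance, to be tuned per test

# here the author's own simulation would run and produce the value to check,
# e.g. the peak strength of a triaxial specimen; for the sketch we just
# reuse the reference so that the script stays self-contained
currentValue = referenceValue

relativeError = abs(currentValue - referenceValue) / abs(referenceValue)
if relativeError > tolerance:
    print('CHECK FAILED: got %g, expected %g (relative error %g)' % (currentValue, referenceValue, relativeError))
    sys.exit(1)
print('check passed (relative error %g)' % relativeError)

Only the last few lines really matter; the rest would be whatever simulation the author wants
to monitor, run with fixed parameters so that the value is reproducible.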

Bruno

On 06/12/10 09:06, Anton Gladky wrote:
> Ok, I think it is good to have such tests anyway. We can keep them
> separate from the regression ones.
> Let's call them "check tests" or...
> 
> And they should not be mandatory for packaging; they are "just to check". If
> somebody commits something and it significantly changes a result value, it
> can be serious.
> 
> I usually do this for my own tests. I compile the new Yade version and run
> my "standard" test with "-j 1" so that the results are always the same. If
> it is ok, I use the new Yade version for my work.
> 
> I think it is relatively easy to do. Anybody can contribute their "common"
> test together with its usual result value (for example the strength of a
> specimen), and we can compare this value from version to version. It is
> also a good way to check constitutive laws, geometry modules, etc.
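> For example, something as simple as this (only a rough sketch, the
> entries and numbers below are invented placeholders):
>
> # each developer contributes one entry: name -> (reference value, relative tolerance)
> checks = {
>     'triax-strength': (1.2e5, 1e-3),   # e.g. peak strength of the specimen
>     'cohesive-chain': (0.05, 1e-2),    # e.g. tip deflection
> }
>
> def compare(name, currentValue):
>     reference, tol = checks[name]
>     ok = abs(currentValue - reference) <= tol * abs(reference)
>     print('%s: %s (got %g, reference %g)' % (name, 'ok' if ok else 'CHANGED', currentValue, reference))
>     return ok
>
> # after running the corresponding simulation with the new version:
> compare('triax-strength', 1.2e5)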
> 
> What do you think?
> 
> Anton
> 
> 
> 
> On Fri, Dec 3, 2010 at 8:26 PM, Bruno Chareyre
> <bruno.chareyre@xxxxxxxxxxx> wrote:
> 
>     On 03/12/10 17:13, Václav Šmilauer wrote:
>     > Hi Anton, I am quite opposed to adding regression tests where the
>     > result cannot be established analytically, because they don't
>     > indicate whether a failure is actually a "progression" or a
>     > "regression". We discussed that with Bruno, who does not agree, a
>     > few months back already. v
>     >> As I understand, TriaxialTest is one of the most popular tests in
>     >> Yade. Can anybody add a regression test for it?
> 
>     I see pros and cons.
> 
>     This has many implications for how Yade is developed.
>     Currently, the situation where you work on something, do other things
>     for a few days, then update and find different behaviour when you
>     resume coding is quite common. Vaclav is doing a lot to add warnings
>     at compile- and run-time to help people fix their code and scripts.
>     But still, there are many ways for committers to break someone else's
>     work without notice. With the last changes in the cohesive laws, all
>     my scripts testing cylinder-sphere-wall problems were broken. If I had
>     been away for a few days, it would have taken some time to figure out
>     why, because it concerns some code still in development that didn't
>     work even before the commit (not blaming Chiara, just mentioning the
>     usefulness of the corresponding regression test).
> 
>     It explains a lot about why the McGill FEM-DEM coupling, Feng-Chen's
>     CFD-DEM and Luc's LBM-DEM are not in the trunk, and why so many users
>     will never update. If more people added a basic regression test
>     corresponding to what they are developing, it would put more
>     responsibility on committers, and it would help to clearly identify
>     which commit modified a given behaviour.
> 
>     OTOH, it is clear that (1) a change is not always a bad thing, and
>     that (2) global tests can sometimes give false alarms.
> 
>     For (1), I think it is not really a difficult problem. For instance:
>     if I identify a mistake in, let's say, the Newton integration, I can
>     expect that all "global" regression tests using Newton will send
>     warnings. I can even update the target values in the regression tests
>     before committing. I can't imagine a situation where someone would
>     improve something or fix a bug that was affecting everybody without
>     realizing that it really is affecting everybody. Anyway, if someone
>     fixes a bug that had been affecting the triaxial test for so many
>     years, all TT users should be notified.
> 
>     For (2), it will happen. It happened, in fact, the first time the
>     cohesive-chain test failed (it was testing compiler-sensitive
>     numerical noise). The second warning was real though. Clearly, one
>     should think twice before defining target values and warning
>     thresholds.
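>     Just to illustrate the kind of threshold I have in mind (the numbers
>     are invented; each test would have to measure its own noise level):
>
>     # relative threshold chosen well above the compiler/round-off noise,
>     # but well below any change of behaviour we would care about
>     noiseLevel = 1e-6              # typical relative scatter between builds (invented)
>     tolerance = 100 * noiseLevel   # safety margin so noise never triggers a warning
>
>     def significantChange(current, reference, tol=tolerance):
>         # relative comparison, independent of units and magnitude
>         return abs(current - reference) > tol * abs(reference)
>
>     print(significantChange(1.0000001, 1.0))  # False: numerical noise
>     print(significantChange(1.02, 1.0))       # True: a real change of behaviour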
> 
>     Unit tests testing each small thing separately are very useful. The
>     only problem is that they hardly identify 1% of the "oh god, why is it
>     not like yesterday" problems that make up the daily life of Yade
>     users.
> 
>     My two cents.
> 
>     Bruno
> 

-- 
_______________
Bruno Chareyre
Associate Professor
ENSE³ - Grenoble INP
Lab. 3SR
BP 53 - 38041, Grenoble cedex 9 - France
Tél : +33 4 56 52 86 21
Fax : +33 4 76 82 70 43
________________


