Re: [HG FFC] Added test suite to verify correctness of tabulate_tensor() compared to reference values.
Quoting Anders Logg <logg@xxxxxxxxx>:
> On Wed, Sep 10, 2008 at 11:17:32AM +0200, Kristian Oelgaard wrote:
> >
> > Hi,
> >
> > This is my standard procedure for FFC development:
> >
> > 1. Modify FFC
> > 2. Run regression tests
> > 3. Regression tests fail
> > 4. Look at the code to see if it makes sense
> > 5. Generate new references
> > 6. Push to repository
> >
> >
> > Instead of step 4 it would obviously be better to actually check that the
> > new code still computes the right thing. To this end I've created a module
> > that verifies that tabulate_tensor() is correct according to some
> > reference. The module needs ufc_benchmark to run.
> >
> > Have a look at ffc/src/test/verify_tensor/test.py
> >
> > ./test.py -h
> >
> >
> > Kristian
>
> I've looked at it and it looks very good. Will you add references for
> all the forms?
Sure, I didn't want to flood the repository with a lot of references in case we
decided we didn't need them. Currently, I'm assembling over the reference
elements. Would it be better to use arbitrary elements? I'm just wondering
whether certain bugs will be picked up on an element whose coordinates are
mostly zeros and ones.
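To make the concern concrete, here is a small numpy sketch (an illustration
only, not FFC code; the arbitrary coordinates are made up): on the reference
triangle the Jacobian of the affine map is the identity with unit determinant,
so generated code that dropped a factor of det(J), or mixed up J and its
inverse, could still reproduce reference values computed on that element.

    import numpy

    def affine_jacobian(v):
        # Jacobian of the affine map from the reference triangle with
        # vertices (0,0), (1,0), (0,1) to the triangle with vertices v.
        return numpy.array([[v[1][0] - v[0][0], v[2][0] - v[0][0]],
                            [v[1][1] - v[0][1], v[2][1] - v[0][1]]])

    reference = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    arbitrary = [(0.1, 0.2), (0.9, 0.3), (0.4, 1.1)]  # made-up coordinates

    for vertices in (reference, arbitrary):
        J = affine_jacobian(vertices)
        print("det(J) =", numpy.linalg.det(J))

    # Prints 1.0 for the reference triangle and 0.69 for the arbitrary one;
    # only the latter would expose a missing det(J) in the generated code.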
> Is the idea that we run this only when the regression tests fail
> (since it may take some time to run)?
Yes, if the regression tests do not fail, the code will return the same values
as the last time verify_tensor/test.py was run. This is why I didn't include it
in the top test.py script.
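Roughly, the check amounts to the following sketch (not the actual
verify_tensor/test.py; the reference file format, the tolerance, and the
compute_tensor() hook standing in for the ufc_benchmark call are all
assumptions):

    import pickle
    import numpy

    TOLERANCE = 1e-12  # assumed; not necessarily the value used by test.py

    def verify_form(form_name, compute_tensor):
        # Compare a freshly computed element tensor against a stored
        # reference; compute_tensor stands in for the ufc_benchmark call.
        with open(form_name + ".ref", "rb") as f:  # hypothetical format
            reference = numpy.asarray(pickle.load(f))
        A = numpy.asarray(compute_tensor(form_name))
        error = numpy.max(numpy.abs(A - reference))
        if error > TOLERANCE:
            raise RuntimeError("%s: max error %g" % (form_name, error))
        return error

Regenerating a reference would then just mean pickling whatever tensor the
current code produces.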
Kristian
> --
> Anders
>