ffc team mailing list archive

Re: [HG FFC] Added test suite to verify correctness of tabulate_tensor() compared to reference values.

 

On Wed, Sep 10, 2008 at 11:33:27AM +0200, Kristian Oelgaard wrote:
> Quoting Anders Logg <logg@xxxxxxxxx>:
> 
> > On Wed, Sep 10, 2008 at 11:17:32AM +0200, Kristian Oelgaard wrote:
> > > 
> > > Hi,
> > > 
> > > This is my standard procedure for FFC development:
> > > 
> > >   1. Modify FFC
> > >   2. Run regression tests
> > >   3. Regression tests fail
> > >   4. Look at the code to see if it makes sense
> > >   5. Generate new references
> > >   6. Push to repository
> > > 
> > > 
> > > Instead of step 4 it would obviously be better to actually check if the
> > > new code still computes the right thing. To this end I've created a module
> > > that verifies that tabulate_tensor() is correct according to some
> > > reference. The module needs ufc_benchmark to run.
> > > 
> > > Have a look at ffc/src/test/verify_tensor/test.py
> > > 
> > > ./test.py -h
> > > 
> > > 
> > > Kristian
> > 
> > I've looked at it and it looks very good. Will you add references for
> > all the forms?
> 
> Sure, I didn't want to flood the repository with a lot of references if we
> decided we didn't need them. Currently, I'm assembling over the reference
> elements. Would it be better to use arbitrary elements? I'm just wondering
> whether certain bugs will be picked up by an element defined with a lot of
> zeros and ones.

Yes, it would definitely be better to use another element. I suggest
randomizing a triangle and a tet and then sticking those numbers into
the code.
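
A minimal sketch of what that randomization could look like: perturb the
reference vertices once, print the numbers, and paste them into the test. The
names and the perturbation size below are placeholders, not part of the
existing code.

import random

# Reference triangle and tetrahedron (vertex coordinates).
REFERENCE_TRIANGLE = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
REFERENCE_TETRAHEDRON = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def perturb_cell(vertices, scale=0.1):
    """Offset every vertex coordinate by a small random amount.

    A scale of 0.1 keeps the cells far from degenerate, so the resulting
    coordinates are safe to hard-code into the test.
    """
    return [[x + random.uniform(-scale, scale) for x in vertex]
            for vertex in vertices]

if __name__ == "__main__":
    # Run once, then paste the printed numbers into the test code.
    print(perturb_cell(REFERENCE_TRIANGLE))
    print(perturb_cell(REFERENCE_TETRAHEDRON))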

> > Is the idea that we run this only when the regression tests fail
> > (since it may take some time to run)?
> 
> Yes, if the regression tests do not fail, the code will return the same values
> as the last time verify_tensor/test.py was run. This is why I didn't include
> it in the top-level test.py script.

ok.
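
For concreteness, the check such a test ultimately has to make against the
stored references is an entry-by-entry comparison within a tolerance. A minimal
sketch, assuming the element tensor comes back from ufc_benchmark as nested
lists of floats (the function name and the tolerance are placeholders, not the
actual verify_tensor code):

def tensors_match(computed, reference, tol=1e-12):
    """Compare a computed element tensor against stored reference values.

    Both arguments are (possibly nested) lists of floats; they are
    flattened and compared entry by entry with an absolute tolerance.
    """
    def flatten(tensor):
        if isinstance(tensor, (list, tuple)):
            for entry in tensor:
                for value in flatten(entry):
                    yield value
        else:
            yield tensor

    a = list(flatten(computed))
    b = list(flatten(reference))
    return len(a) == len(b) and all(abs(x - y) < tol for x, y in zip(a, b))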

-- 
Anders

