
Benchmarks


Johannes has the benchbot up and running with a number of different
DOLFIN versions and is ready to backport the benchmarks. What we are
missing is a set of good benchmarks.

I have started to reorganize the benchmarks under bench/ as follows:

1. One directory - one benchmark

Each benchmark is in a separate directory and does only one thing.

2. One top level script (bench.py) to run all benchmarks

Looks in the subdirectories for files named 'bench', then runs, times,
and reports the results (see the sketch after this list).

3. The bench script stores all timings in log/bench.log, and results
can be plotted using plot.py (to be added; a second sketch follows
below). The file bench.log contains results for older versions, going
back as far as we can manage to extract them.

4. No timings performed by the benchmarks themselves
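
To make this concrete, here is a minimal sketch of what bench.py could
look like. The per-directory 'bench' executables, the log/bench.log
file, and timing done by the runner are as described above; the exact
log line format (date, name, seconds) is my own assumption:

#!/usr/bin/env python
"""Minimal sketch of bench.py (items 2-4 above). The 'date name
seconds' log format is an assumption."""

import os
import subprocess
import time

results = []

# One directory - one benchmark (item 1): look for an executable
# named 'bench' in each subdirectory.
for name in sorted(os.listdir(".")):
    script = os.path.join(name, "bench")
    if not os.path.isfile(script):
        continue
    # Time the run here, not inside the benchmark itself (item 4).
    t0 = time.time()
    subprocess.check_call([os.path.abspath(script)], cwd=name)
    elapsed = time.time() - t0
    results.append((name, elapsed))
    print("%s: %.3g s" % (name, elapsed))

# Append timings to log/bench.log so that results for older
# versions accumulate (item 3).
if not os.path.isdir("log"):
    os.makedirs("log")
log = open(os.path.join("log", "bench.log"), "a")
for name, elapsed in results:
    log.write("%s %s %.6f\n" % (time.strftime("%Y-%m-%d"), name, elapsed))
log.close()

Keeping all timing in this one place is what makes item 4 work: the
individual benchmarks just run their workload and exit.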

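And a correspondingly minimal sketch of the plot.py to be added,
drawing one timing curve per benchmark; it assumes the log format
written by the sketch above:

#!/usr/bin/env python
"""Minimal sketch of plot.py, assuming the 'date name seconds' lines
written by the bench.py sketch above."""

from collections import defaultdict
import pylab

series = defaultdict(list)
for line in open("log/bench.log"):
    date, name, seconds = line.split()
    series[name].append(float(seconds))

# One timing curve per benchmark, indexed by run number.
for name, timings in sorted(series.items()):
    pylab.plot(range(len(timings)), timings, "o-", label=name)

pylab.xlabel("run")
pylab.ylabel("time (s)")
pylab.legend()
pylab.show()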

Anyone is welcome to help build a set of suitable benchmarks. They
should preferably be simple, so that they are easy to maintain, easy
to backport, and easy to interpret. (But we could also have some
bigger benchmarks that test many things at once.)

--
Anders
