
dolfin team mailing list archive

Re: Benchmarks

 

On Tue, Mar 30, 2010 at 01:02:52PM +0200, Johan Jansson wrote:
> Anders Logg wrote:
> >Johannes has the benchbot up and running with a number of different
> >DOLFIN versions and ready to backport the benchmarks. What we are
> >missing is a set of good benchmarks.
> >
> >I have started to reorganize the benchmarks under bench/ as follows:
> >
> >1. One directory - one benchmark
> >
> >Each benchmark is in a separate directory and does only one thing
> >
> >2. One top level script (bench.py) to run all benchmarks
> >
> >Looks in subdirectories for files named 'bench', runs and times each
> >one, and reports the results.
> >
> >3. The bench script stores all timings in log/bench.log, and results
> >can be plotted using plot.py (to be added). The file bench.log
> >contains results for older versions, going back as far as we can
> >manage to extract them.
> >
> >4. No timings performed by the benchmarks themselves
> >
> >
> >Anyone is welcome to help out with building a set of suitable
> >benchmarks. They should preferably be simple so that they are easy to
> >maintain, easy to backport, and easy to interpret. (But we could also
> >have some bigger benchmarks that test many things at once.)
> >
> Hi,
>
> I think this looks good, just one comment: the benchmarks themselves
> should not be written in Python (or all benchmarks should have both a
> Python and a C++ version), since some architectures, for instance the
> BlueGene/L, don't have Python support on the compute nodes.
>
>  Johan

I don't think that's a big problem. The benchmarks are intended to be
run on a dedicated benchbot running a standard Ubuntu installation.

For testing parallel speedup etc., we need to create a different type
of benchmark, intended for comparing different architectures (rather
than comparing different DOLFIN versions on the same machine).
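
For reference, here is a minimal sketch of what a bench.py driver along
the lines described above could look like. It is not the actual DOLFIN
script; the directory layout, the 'bench' executable name, the log path
log/bench.log and the log format follow the description in this thread,
but the details are assumptions.

    # Sketch of a bench.py-style driver: find executables named 'bench'
    # in subdirectories, run and time each one, and append the timings
    # to log/bench.log.

    import os
    import subprocess
    import time

    def run_benchmarks(top_dir=".", log_file="log/bench.log"):
        os.makedirs(os.path.dirname(log_file), exist_ok=True)
        results = []
        for name in sorted(os.listdir(top_dir)):
            bench = os.path.join(top_dir, name, "bench")
            if not (os.path.isfile(bench) and os.access(bench, os.X_OK)):
                continue
            # Run the benchmark in its own directory and time it
            t0 = time.time()
            subprocess.check_call([os.path.abspath(bench)],
                                  cwd=os.path.join(top_dir, name))
            elapsed = time.time() - t0
            results.append((name, elapsed))
            print("%s: %.3g s" % (name, elapsed))
        # Append results so bench.log accumulates a history of runs
        with open(log_file, "a") as f:
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            for name, elapsed in results:
                f.write("%s %s %.6f\n" % (stamp, name, elapsed))
        return results

    if __name__ == "__main__":
        run_benchmarks()

A plot.py script could then read the accumulated bench.log and plot
timings per benchmark against date or DOLFIN version.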

--
Anders
