
fenics team mailing list archive

Re: Parallel FEniCS performance as priority

 

On Fri, Feb 05, 2010 at 06:27:43PM +0100, Johan Jansson wrote:
> Anders Logg wrote:
> >On Thu, Feb 04, 2010 at 12:07:23AM +0100, Johan Jansson wrote:
> >>Hi!
> >>
> >>The upcoming FEniCS conference is a good opportunity for a directed
> >>push in the development in FEniCS. There has been activity in
> >>parallel computing in FEniCS over the last two years, but the
> >>critical mass to make parallel performance an integral part of
> >>FEniCS has not yet been attained.
> >>
> >>I would like to float the idea of making parallel performance the
> >>target for the conference. A realistic goal could be strong
> >>near-linear scaling up to ~100 CPUs for a non-trivial PDE (e.g.
> >>Navier-Stokes).
> >>
> >>Currently there are two branches of parallel development: 1. a
> >>branch based on DOLFIN 0.8.0 (the work of Niclas Jansson at CTL/KTH)
> >>and 2. the trunk of DOLFIN (joint effort by DOLFIN developers to
> >>integrate Niclas' branch with other parallel work and DOLFIN
> >>updates). The performance results are already there in branch 1, and
> >>progress has already been made on the integration in branch 2 (helped
> >>along nicely by Anders Logg hosting a coding week in Smögen this
> >>fall).
> >>
> >>Parallel computing is one of the key research areas of the CTL group
> >>at KTH, and we intend to put significant effort into reaching the
> >>target, provided that this strategy is adopted. I think that making
> >>parallel performance the top priority in the project would also make
> >>it a realistic target, and would open up the project to new
> >>applications, more exposure, etc.
> >>
> >>Best,
> >> Johan
> >
> >Sounds good. It is also in line with what has been discussed earlier
> >that the focus from here on to the release of DOLFIN 1.0 (which will
> >hopefully happen in June) should be on performance and bug fixes (not
> >new features).
> >
> >But I'm surprised that the 0.8 branch is still in active use. Are you
> >actually still using it??? And why?
> >
> >Anyway, a top priority now should be to look at our current set of
> >benchmarks for DOLFIN, make sure they are interesting and cover
> >everything we want to test (they probably don't), and add the missing
> >pieces.
> >We have recently bought a new server that will function as a dedicated
> >benchbot for FEniCS and report nightly results. That can be used to
> >track regressions and monitor the progress of the effort to improve
> >performance.
> >
> Ok, good! A set of benchmarks (including the Navier-Stokes scaling
> benchmark I mentioned) would be a way of focusing the development. I
> agree that a feature freeze would be a good way to prepare for a
> release, and this is also how I/we typically manage new code
> development: freeze a verified version/snapshot of the project while
> developing a specific area (or, as mentioned, while working to
> satisfy a set of benchmarks).

I don't think we are talking about a feature freeze yet, just that we
need to put more focus on fixing existing bugs and completing existing
blueprints:

  https://bugs.launchpad.net/dolfin/+bugs
  https://blueprints.launchpad.net/dolfin/+specs?show=all

> As you say, a first step would then be to collect a list of
> benchmarks representing what we think FEniCS should be able to do,
> given the features that are already there. The conference can then
> serve as a deadline for satisfying these benchmarks, and as a
> workshop where we can see which benchmarks have not yet been met and
> work to sort out the remaining issues.

The current set of benchmarks is in the bench subdirectory. We need to
go through these, see which are still relevant, and add the missing
pieces.
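
(To be concrete, a benchmark entry of this kind is basically just a
small timed kernel. The sketch below is illustrative only and not an
actual file from bench/; it assumes the Python interface, and class
names such as UnitCube differ between DOLFIN versions.)

  # Illustrative sketch of a benchmark entry -- not an actual file
  # from bench/. Times the assembly of a Poisson stiffness matrix;
  # mesh size and element degree are arbitrary choices.
  from dolfin import *
  import time

  mesh = UnitCube(32, 32, 32)       # UnitCubeMesh in later versions
  V = FunctionSpace(mesh, "CG", 1)
  u = TrialFunction(V)
  v = TestFunction(V)
  a = dot(grad(u), grad(v))*dx

  t0 = time.time()
  A = assemble(a)
  print "Assembly time: %.3g s" % (time.time() - t0)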

I have added a blueprint here:

  https://blueprints.launchpad.net/dolfin/+spec/benchmarks

The benchmark server (benchbot) is up and running. Johannes will get
started on setting up the benchmark framework once he has finished
packaging the latest releases. He will also try to backport as many of
the benchmarks as possible to earlier DOLFIN versions, maybe even back
to 0.2.0, which should generate some interesting data.
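
(How the framework ends up looking is of course up to Johannes. Just
to illustrate the idea, a nightly driver could be as simple as the
sketch below; the file layout and the convention that each benchmark
prints its timing in seconds on its last output line are hypothetical,
not part of any actual setup.)

  # Rough illustration of a nightly benchmark driver; paths and the
  # output convention are hypothetical.
  import glob, subprocess, time

  log = open("bench-%s.log" % time.strftime("%Y-%m-%d"), "w")
  for script in sorted(glob.glob("bench/*/bench.py")):
      output = subprocess.Popen(["python", script],
                                stdout=subprocess.PIPE).communicate()[0]
      seconds = output.strip().split("\n")[-1]   # last line = timing
      log.write("%s %s\n" % (script, seconds))
  log.close()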

Let's discuss further details of the benchmarks on the DOLFIN mailing
list.
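
(One more detail before taking this to the list: the "near-linear
scaling up to ~100 CPUs" target is easiest to track as plain speedup
T(1)/T(p) and parallel efficiency T(1)/(p*T(p)). A throwaway helper
like the one below is all that is needed; the timings passed in are
placeholders, not measurements.)

  # Illustrative helper for reporting strong scaling; the timings in
  # the example call are placeholders, not measured values.
  def report_scaling(timings):
      t1 = timings[1]
      for p in sorted(timings):
          speedup = t1 / timings[p]
          print "p = %3d  speedup = %6.1f  efficiency = %.2f" % \
              (p, speedup, speedup / p)

  report_scaling({1: 1000.0, 10: 110.0, 100: 13.0})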

--
Anders
