dolfin team mailing list archive

Re: Parallelization

 

Martin Sandve Alnæs wrote:
2007/8/27, Anders Logg <logg@xxxxxxxxx>:
jesperc wrote:
Hi,

I'm taking a summer course in parallel programming and want to
parallelize something in Dolfin as a small project for the course. How
much of the code is currently successfully parallelized when, for example,
solving the Poisson equation? I know that some of you are working on
the parallel implementation of the assembly, but what is the status there?
Would it be possible for me to make some contribution?
Yes, definitely. Here's what we have now:

1. Mesh partitioning works:

     uint n = 16;
     MeshFunction<uint> partitions;
     mesh.partition(n, partitions);

This returns a MeshFunction which labels all the cells of the mesh with
the number of the partition to which they belong.

For proper scalability, distributed meshes are necessary, but I guess
you're on top of this.
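
For illustration, reading the labels back could look roughly like this (just
a sketch; the UnitSquare constructor and the MeshFunction accessor syntax are
assumptions on my part):

     #include <dolfin.h>
     #include <iostream>
     #include <vector>

     using namespace dolfin;

     int main()
     {
       UnitSquare mesh(16, 16);            // any mesh will do

       uint n = 4;
       MeshFunction<uint> partitions;
       mesh.partition(n, partitions);

       // Count how many cells ended up in each partition
       std::vector<uint> count(n, 0);
       for (CellIterator cell(mesh); !cell.end(); ++cell)
         count[partitions(*cell)]++;       // accessor name assumed

       for (uint p = 0; p < n; p++)
         std::cout << "partition " << p << ": "
                   << count[p] << " cells" << std::endl;

       return 0;
     }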

2. Garth has implemented a working (last time I checked) prototype of a
parallel assembler in src/sandbox/passembly/main.cpp.

3. It should be relatively easy to extend the current assembler in
src/kernel/fem/Assembler.cpp to do parallel assembly. It currently knows
how to assemble over subdomains (defined by some MeshFunctions) and the
parallel assembly would be similar: skip the cells (if (.. != ... )
continue;) that don't belong to the current processor.

It should rather iterate over cells that _are_ on the current
processor. Even if the mesh isn't distributed yet, this is probably
important.

Why is it important? Iterating over all cells and skipping is very cheap (I imagine). Doing ++cell_iterator in DOLFIN only increments an int counter (inlined). It probably takes more time to preprocess and extract the list of cells belonging to the process.

/Anders
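
For concreteness, the skip-based cell loop from point 3 could look roughly
like the sketch below (the actual Assembler code will differ, and the way the
partition MeshFunction is read here is an assumption):

     #include <mpi.h>
     #include <dolfin.h>

     using namespace dolfin;

     // Sketch: assemble only the cells owned by this process.
     // 'partitions' is the MeshFunction returned by mesh.partition().
     void assembleLocalCells(Mesh& mesh, MeshFunction<uint>& partitions)
     {
       int rank = 0;
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);

       for (CellIterator cell(mesh); !cell.end(); ++cell)
       {
         // The "if (.. != ..) continue;" from point 3: skip cells
         // that belong to another partition.
         if (partitions(*cell) != static_cast<uint>(rank))
           continue;

         // ... tabulate the element tensor for *cell and add it into
         //     the global matrix/vector here ...
       }
     }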


4. The first thing we need is to be able to communicate Mesh and
MeshFunction between processes. There is a skeleton in
src/kernel/mesh/MPIMeshCommunicator.cpp (currently empty). One process reads the
mesh, partitions it and sends it to all processes. (To begin with, each
process will have a copy of the mesh.)

Sounds good.
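
As a very rough sketch of the broadcast step in point 4, using plain MPI on
flattened arrays (the pack/unpack helpers are hypothetical, and
MPIMeshCommunicator may well end up looking different):

     #include <mpi.h>
     #include <vector>

     // Hypothetical helpers (not in DOLFIN): flatten a mesh into plain
     // arrays of vertex coordinates and cell connectivity, and rebuild it.
     //   void packMesh(const Mesh& mesh, std::vector<double>& coords,
     //                 std::vector<unsigned int>& cells);
     //   void unpackMesh(Mesh& mesh, const std::vector<double>& coords,
     //                   const std::vector<unsigned int>& cells);

     void broadcastMeshArrays(std::vector<double>& coords,
                              std::vector<unsigned int>& cells)
     {
       int rank = 0;
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);

       // Process 0 has read the mesh; tell everyone else the array sizes.
       unsigned long sizes[2];
       sizes[0] = coords.size();
       sizes[1] = cells.size();
       MPI_Bcast(sizes, 2, MPI_UNSIGNED_LONG, 0, MPI_COMM_WORLD);
       if (rank != 0)
       {
         coords.resize(sizes[0]);
         cells.resize(sizes[1]);
       }

       // Every process gets a full copy, as described in point 4.
       MPI_Bcast(&coords[0], static_cast<int>(coords.size()), MPI_DOUBLE,
                 0, MPI_COMM_WORLD);
       MPI_Bcast(&cells[0], static_cast<int>(cells.size()), MPI_UNSIGNED,
                 0, MPI_COMM_WORLD);
     }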

5. The tricky part will be how to order degrees of freedom (class
DofMap). This relates to ongoing work on merging the linear algebra
interfaces of DOLFIN and PyCC (a Simula in-house code). We need to find
a design that works well for communicating with both PETSc and Epetra
(Trilinos).

This problem can probably benefit from someone having the time to
really focus; there are a lot of details to consider...
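
Just to make one of those details concrete: both PETSc and Epetra handle the
simple case where each process owns a contiguous range of rows, so a first
cut at the parallel dof numbering could be as simple as this toy sketch (not
a proposal for the actual DofMap design):

     #include <mpi.h>

     // Toy example: divide N global dofs into contiguous per-process
     // ranges, the layout both PETSc and Epetra handle naturally.
     // Returns the half-open range [begin, end) owned by this process.
     void localDofRange(unsigned int N, unsigned int& begin, unsigned int& end)
     {
       int rank = 0, size = 1;
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &size);

       const unsigned int r = static_cast<unsigned int>(rank);
       const unsigned int p = static_cast<unsigned int>(size);
       const unsigned int chunk = N / p;
       const unsigned int rest  = N % p;

       // The first 'rest' processes get one extra dof each.
       begin = r * chunk + (r < rest ? r : rest);
       end   = begin + chunk + (r < rest ? 1 : 0);
     }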



