
Re: [HG DOLFIN] Get parallel assembly and solve working (?).

 

On Mon, Jun 22, 2009 at 12:07:13PM +0100, Garth N. Wells wrote:
> 
> 
> DOLFIN wrote:
> > One or more new changesets pushed to the primary dolfin repository.
> > A short summary of the last three changesets is included below.
> > 
> > changeset:   6359:0ebf881aba3d691ef652ecddbea869060aa9b1b2
> > tag:         tip
> > user:        Anders Logg <logg@xxxxxxxxx>
> > date:        Mon Jun 22 00:14:29 2009 +0200
> > files:       dolfin/fem/Assembler.cpp dolfin/function/Function.cpp dolfin/la/PETScKrylovSolver.cpp dolfin/la/PETScKrylovSolver.h dolfin/la/PETScLUSolver.cpp dolfin/la/PETScMatrix.cpp dolfin/la/PETScMatrix.h dolfin/la/PETScVector.cpp dolfin/mesh/MeshPartitioning.cpp sandbox/passembly/main.cpp
> > description:
> > Get parallel assembly and solve working (?).
> > 
> 
> Why the question mark?

I got it running (as in not crashing) and the Krylov solve converges,
which seems like a good indication, but I haven't done any testing.

Apart from testing, a few things remain:

  1. Handling the direct solve (notice dolfin_set("linear solver", "iterative");
     a minimal sketch of the iterative workaround follows this list)

  Also note a couple of FIXMEs in PETScMatrix.cpp and KrylovMatrix.cpp.

  2. Getting plotting to work (or decide how to handle it)

  3. Getting output working, both XML and VTK

  4. Reordering of the dof map to minimize communication

  5. Handling interior facet integrals

Of these, 1-4 should be fairly "easy" to fix, while 5 requires some thought.
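
For reference, here is a minimal sketch of the iterative workaround in
item 1. The Poisson form header, the mesh and the assemble()/solve() calls
are just assumptions about a typical driver (not taken from
sandbox/passembly/main.cpp); only the dolfin_set() line is the setting
mentioned above.

  #include <dolfin.h>
  #include "Poisson.h"   // assumed generated form header (coefficients/BCs omitted)

  using namespace dolfin;

  int main()
  {
    // Force the Krylov path; the direct solve is not yet handled in parallel.
    // (In parallel one would also select a distributed backend; omitted here.)
    dolfin_set("linear solver", "iterative");

    UnitSquare mesh(32, 32);
    Poisson::FunctionSpace V(mesh);
    Poisson::BilinearForm a(V, V);
    Poisson::LinearForm L(V);

    Matrix A;
    Vector x, b;
    assemble(A, a);
    assemble(b, L);

    // Dispatches to the Krylov solver given the setting above
    solve(A, x, b);

    return 0;
  }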

Anyway, parallel assembly and solve seem to work now, so please dig in
and have a look. It would be good to have some more eyes looking at
how we initialize the PETSc matrices etc.

Some notes about what's been done:

1. Assembler.cpp remains completely unchanged (it is not parallel aware),
which is nice.

2. Everything related to distributing and numbering the mesh is
implemented in MeshPartitioning.cpp, which is fairly complex but
in good shape.

3. All MPI communication is done through DOLFIN wrappers (in MPI.h/cpp),
which only use std::vector (with the exception of MPI::send_recv,
which is used in one place). These wrappers are:

  MPI::gather
  MPI::distribute
  MPI::send_recv

These seem to cover most things we need to do. In particular,
MPI::distribute is very useful and simplifies the code a great deal. It's
based on a communication pattern we found in Niclas Jansson's original code.
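
To give an idea of what MPI::distribute does, here is a rough sketch of
that communication pattern, using only std::vector in the interface. The
actual signature in MPI.h may differ; this just illustrates the underlying
all-to-all exchange, not the DOLFIN implementation.

  #include <mpi.h>
  #include <vector>

  // Each process supplies values[i] together with the rank destination[i]
  // it should be sent to; on return, values holds everything that was sent
  // to this process.
  void distribute_sketch(std::vector<unsigned int>& values,
                         const std::vector<unsigned int>& destination)
  {
    int num_processes = 0;
    MPI_Comm_size(MPI_COMM_WORLD, &num_processes);

    // Sort the outgoing values into per-destination blocks
    std::vector<int> send_counts(num_processes, 0);
    for (std::size_t i = 0; i < values.size(); ++i)
      send_counts[destination[i]]++;

    std::vector<int> send_offsets(num_processes, 0);
    for (int p = 1; p < num_processes; ++p)
      send_offsets[p] = send_offsets[p - 1] + send_counts[p - 1];

    std::vector<unsigned int> send_buffer(values.size());
    std::vector<int> next = send_offsets;
    for (std::size_t i = 0; i < values.size(); ++i)
      send_buffer[next[destination[i]]++] = values[i];

    // Exchange the counts, then the data itself
    std::vector<int> recv_counts(num_processes, 0);
    MPI_Alltoall(&send_counts[0], 1, MPI_INT,
                 &recv_counts[0], 1, MPI_INT, MPI_COMM_WORLD);

    std::vector<int> recv_offsets(num_processes, 0);
    int recv_size = recv_counts[0];
    for (int p = 1; p < num_processes; ++p)
    {
      recv_offsets[p] = recv_offsets[p - 1] + recv_counts[p - 1];
      recv_size += recv_counts[p];
    }

    std::vector<unsigned int> recv_buffer(recv_size);
    MPI_Alltoallv(send_buffer.empty() ? 0 : &send_buffer[0], &send_counts[0],
                  &send_offsets[0], MPI_UNSIGNED,
                  recv_buffer.empty() ? 0 : &recv_buffer[0], &recv_counts[0],
                  &recv_offsets[0], MPI_UNSIGNED, MPI_COMM_WORLD);

    values.swap(recv_buffer);
  }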

4. We've avoided things like if (MPI::num_processes() > 1) in most
places. Instead, other indicators are used, like comparing the local
range of a sparsity pattern to its global range.
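
As an illustration (the function and its arguments are made up for the
example, not the actual DOLFIN API), the indicator boils down to something
like this:

  #include <utility>

  // A component treats itself as distributed when its local row range
  // covers only part of the global number of rows, rather than asking
  // MPI::num_processes() directly.
  bool is_distributed(const std::pair<unsigned int, unsigned int>& local_row_range,
                      unsigned int global_num_rows)
  {
    return (local_row_range.second - local_row_range.first) < global_num_rows;
  }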

5. The dof map is computed just as before, directly by the code generated
for ufc::dof_map::tabulate_dofs, but we now pass in the global entity
indices instead of the local ones. This is handled in the classes UFCCell
and UFCMesh.
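
Conceptually (everything below except ufc::dof_map::tabulate_dofs itself is
an assumption about how UFCCell/UFCMesh do it), the idea is:

  #include <ufc.h>
  #include <vector>

  // global_indices[d][local] gives the global index of entity (d, local),
  // i.e. the data stored as "global entity indices d" in MeshData.
  void tabulate_dofs_global(const ufc::dof_map& dof_map,
                            const ufc::mesh& ufc_mesh,  // num_entities[d] = global counts
                            ufc::cell& ufc_cell,        // entity_indices already allocated
                            const std::vector<std::vector<unsigned int> >& local_entities,
                            const std::vector<std::vector<unsigned int> >& global_indices,
                            unsigned int* dofs)
  {
    // Overwrite the local entity indices of the current cell with global ones
    for (unsigned int d = 0; d <= ufc_cell.topological_dimension; ++d)
      for (std::size_t i = 0; i < local_entities[d].size(); ++i)
        ufc_cell.entity_indices[d][i] = global_indices[d][local_entities[d][i]];

    // The generated code is unchanged; it simply sees global indices
    dof_map.tabulate_dofs(dofs, ufc_mesh, ufc_cell);
  }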

6. Each process has its own mesh, which is just a standard DOLFIN
mesh with some extra data stored in MeshData:

  "global entity indices %d"
  "overlap"
  "num global entities"

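Just to illustrate (the accessor names and storage types below are guesses,
not checked against MeshData.h), a process can peek at this data roughly
like this:

  #include <dolfin.h>
  #include <iostream>

  using namespace dolfin;

  void inspect_parallel_data(Mesh& mesh)
  {
    // Global index of each local vertex (dimension 0)
    MeshFunction<uint>* global_vertex_indices
      = mesh.data().mesh_function("global entity indices 0");

    // Global number of entities of each topological dimension
    std::vector<uint>* num_global = mesh.data().array("num global entities");

    if (global_vertex_indices && num_global)
      std::cout << "Local vertices: " << mesh.num_vertices()
                << ", global vertices: " << (*num_global)[0] << std::endl;
  }
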
> When I have time I'll look at getting rid of PVTKFile and extending 
> VTKFile to handle output in parallel.

ok, nice.

-- 
Anders


