
Re: [HG DOLFIN] Get parallel assembly and solve working (?).

 

On Mon, Jun 22, 2009 at 1:44 PM, Anders Logg <logg@xxxxxxxxx> wrote:
> On Mon, Jun 22, 2009 at 12:07:13PM +0100, Garth N. Wells wrote:
>>
>>
>> DOLFIN wrote:
>> > One or more new changesets pushed to the primary dolfin repository.
>> > A short summary of the last three changesets is included below.
>> >
>> > changeset:   6359:0ebf881aba3d691ef652ecddbea869060aa9b1b2
>> > tag:         tip
>> > user:        Anders Logg <logg@xxxxxxxxx>
>> > date:        Mon Jun 22 00:14:29 2009 +0200
>> > files:       dolfin/fem/Assembler.cpp dolfin/function/Function.cpp dolfin/la/PETScKrylovSolver.cpp dolfin/la/PETScKrylovSolver.h dolfin/la/PETScLUSolver.cpp dolfin/la/PETScMatrix.cpp dolfin/la/PETScMatrix.h dolfin/la/PETScVector.cpp dolfin/mesh/MeshPartitioning.cpp sandbox/passembly/main.cpp
>> > description:
>> > Get parallel assembly and solve working (?).
>> >
>>
>> Why the question mark?
>
> I got it running (as in not crashing), and the Krylov solve
> converges, which seems like a good indication, but I haven't done any
> testing.
>
> Apart from testing, a few things remain:
>
>  1. Handling the direct solve (note the dolfin_set("linear solver", "iterative") call)
>
>  Also note a couple of FIXMEs in PETScMatrix.cpp and KrylovMatrix.cpp.
>
>  2. Getting plotting to work (or deciding how to handle it)

Once the XML issue is figured out (see below), I'll extend Viper so
that it can read multiple FunctionPlotData files and visualize them
simultaneously in the same render window.

>  3. Getting output working, both XML and VTK

I've been looking at this today in relation to plotting. Everything
works until a vector is constructed in FunctionPlotData, since the
number of vertices of the local mesh is used to size the PETSc vector
that stores the interpolated values. DOLFIN then goes ahead and
distributes the storage of these local vectors (again). I'm trying to
figure out the correct way of solving this. We probably need to (a
rough sketch of the send/retrieve step follows the list):

 * figure out which dofs are needed to do the interpolation onto the local mesh

 * send and retrieve these values

 * do the actual interpolation

 * write result to file
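
To make the send/retrieve step concrete, here is a rough,
self-contained sketch in plain MPI (not DOLFIN's MPI wrappers, and not
actual DOLFIN code) of how a process could fetch the off-process dof
values it needs for the local interpolation. The function name, the
assumption of a contiguous block partition of the dof vector, and the
index/value types are mine, purely for illustration; the exchange is
the same kind of std::vector-based all-to-all pattern that a wrapper
like MPI::distribute (mentioned further down) can provide.

#include <mpi.h>
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative only (not DOLFIN code): return the values of the
// 'needed' global dofs, assuming this process stores the contiguous
// block [offset, offset + local.size()) of the global dof vector.
std::vector<double> fetch_dof_values(MPI_Comm comm,
                                     const std::vector<long>& needed,
                                     const std::vector<double>& local,
                                     long offset, long global_size)
{
  int nprocs;
  MPI_Comm_size(comm, &nprocs);

  // Owner of a global index under a uniform block partition
  const long block = (global_size + nprocs - 1) / nprocs;

  // Sort the requested indices by owning process
  std::vector<std::vector<long>> requests(nprocs);
  for (long index : needed)
    requests[index / block].push_back(index);

  // Tell every process how many requests it will receive from us
  std::vector<int> send_counts(nprocs), recv_counts(nprocs);
  for (int p = 0; p < nprocs; ++p)
    send_counts[p] = static_cast<int>(requests[p].size());
  MPI_Alltoall(send_counts.data(), 1, MPI_INT,
               recv_counts.data(), 1, MPI_INT, comm);

  // Flatten the requested indices and exchange them
  std::vector<int> send_disp(nprocs, 0), recv_disp(nprocs, 0);
  for (int p = 1; p < nprocs; ++p)
  {
    send_disp[p] = send_disp[p - 1] + send_counts[p - 1];
    recv_disp[p] = recv_disp[p - 1] + recv_counts[p - 1];
  }
  std::vector<long> send_idx(send_disp.back() + send_counts.back());
  std::vector<long> recv_idx(recv_disp.back() + recv_counts.back());
  for (int p = 0; p < nprocs; ++p)
    std::copy(requests[p].begin(), requests[p].end(),
              send_idx.begin() + send_disp[p]);
  MPI_Alltoallv(send_idx.data(), send_counts.data(), send_disp.data(), MPI_LONG,
                recv_idx.data(), recv_counts.data(), recv_disp.data(), MPI_LONG,
                comm);

  // Look up the values we own and send them back (same pattern, reversed)
  std::vector<double> reply(recv_idx.size()), values(send_idx.size());
  for (std::size_t i = 0; i < recv_idx.size(); ++i)
    reply[i] = local[recv_idx[i] - offset];
  MPI_Alltoallv(reply.data(), recv_counts.data(), recv_disp.data(), MPI_DOUBLE,
                values.data(), send_counts.data(), send_disp.data(), MPI_DOUBLE,
                comm);

  // 'values' is grouped by owning process; the caller still has to map
  // it back to the original ordering of 'needed' before interpolating.
  return values;
}

Once the off-process values are in hand, the actual interpolation and
the write to file are purely local operations.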

Ola

>  4. Reordering of the dof map to minimize communication
>
>  5. Handle interior facet integrals
>
> Of these, 1-4 should be fairly "easy" to fix, while 5 requires some thought.
>
> Anyway, parallel assembly and solve seem to work now, so please dig
> in and have a look. It would be good to have some more eyes looking
> at how we initialize the PETSc matrices etc.
>
> Some notes about what's been done:
>
> 1. Assembler.cpp remains completely unchanged (it is not
> parallel-aware), which is nice.
>
> 2. Everything related to distributing and numbering the mesh is
> implemented in MeshPartitioning.cpp, which is fairly complex but
> in good shape.
>
> 3. All MPI communication is done through DOLFIN wrappers (in MPI.h/cpp)
> which only use std::vector (with the exception of MPI::send_recv,
> which is used in one place). These wrappers are:
>
>  MPI::gather
>  MPI::distribute
>  MPI::send_recv
>
> These seem to cover most things we need to do. In particular,
> MPI::distribute is very useful and simplifies the code a great deal.
> It's based on a communication pattern we found in Niclas Jansson's
> original code.
>
> 4. We've avoided things like if (MPI::num_processes() > 1) in most
> places. Instead, other indicators are used, like comparing the local
> range of a sparsity pattern to its global range.
>
> 5. The dof map is computed just as before, directly by the code
> generated for ufc::dof_map::tabulate_dofs, but we now pass in the
> global entity indices instead of the local ones. This is handled in
> the classes UFCCell and UFCMesh (a toy illustration of the idea
> follows after these notes).
>
> 6. Each process has its own mesh, which is just a standard DOLFIN
> mesh with some extra data stored in MeshData:
>
>  "global entity indices %d"
>  "overlap"
>  "num global entities"
>
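
To make the point in note 5 concrete, here is a toy stand-in (not the
real UFC interface and not generated code; all names are hypothetical)
for a P1 dof map on triangles. The "generated" tabulate_dofs just
copies the cell's vertex indices, so if the cell is populated with
global vertex numbers instead of local ones, the tabulated dofs come
out global without any change to the generated code.

#include <cstddef>
#include <iostream>

// Hypothetical stand-ins, loosely modelled on the roles of ufc::cell
// and ufc::dof_map; this is not the real UFC interface.
struct toy_cell
{
  const std::size_t* vertex_indices; // one index per vertex of the cell
};

struct toy_p1_dofmap
{
  // For linear Lagrange on triangles, tabulating the dofs amounts to
  // copying the three vertex indices of the cell
  void tabulate_dofs(std::size_t* dofs, const toy_cell& c) const
  {
    for (int i = 0; i < 3; ++i)
      dofs[i] = c.vertex_indices[i];
  }
};

int main()
{
  // The same cell described with global vertex numbers: the dof map
  // code is unchanged, but the dofs it tabulates are now global
  const std::size_t global_vertices[3] = {140, 141, 97};
  const toy_cell cell = {global_vertices};

  std::size_t dofs[3];
  toy_p1_dofmap().tabulate_dofs(dofs, cell);
  std::cout << dofs[0] << " " << dofs[1] << " " << dofs[2] << std::endl;
  return 0;
}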
>> When I have time I'll look at getting rid of PVTKFile and extending
>> VTKFile to handle output in parallel.
>
> ok, nice.
>
> --
> Anders
>



-- 
Ola Skavhaug

