
Re: [HG] Add test file for parallel assembly. Results appear OK for 2D Poisson equation.

 

On Sat, Dec 02, 2006 at 01:21:54AM +0100, Garth N. Wells wrote:
> Anders Logg wrote:
> >On Fri, Dec 01, 2006 at 03:49:42PM +0100, Garth N. Wells wrote:
> >>
> >>DOLFIN wrote:
> >>>One or more new changesets pushed to the primary DOLFIN repository.
> >>>A short summary of the last three changesets is included below.
> >>>
> >>>changeset:   2478:8aeea68fcf1ca3dc758c9fb417d5da35bd1de922
> >>>tag:         tip
> >>>user:        "Garth N. Wells <g.n.wells@xxxxxxxxxx>"
> >>>date:        Fri Dec 01 15:41:19 2006 +0100
> >>>files:       src/test/passembly/Makefile src/test/passembly/main.cpp
> >>>description:
> >>>Add test file for parallel assembly. Results appear OK for 2D Poisson 
> >>>equation.
> >>>
> >>To run the test, you need to add the path to the ParMETIS header files 
> >>and libraries manually in src/test/passembly/Makefile, and compile 
> >>DOLFIN with PETSc enabled.
> >>
> >>Then to assemble using 4 processes, do
> >>
> >>mpirun -np 4 ./dolfin_parallel-test
> >>
> >>You can use however many processes you want.
> >>
> >>Garth
> >
> >This looks very good. We should be able to create a simple abstraction for
> >the parallelization where one does not need to see the MPI calls or
> >PETSC_COMM_WORLD.
> >
> 
> I'm not sure that the assembly is correct, so I'm planning to play with 
> it for a while to understand things better. Once I do, it should be 
> possible to hide the details, and I'll give that some more thought.

ok.
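
To make the abstraction idea above a bit more concrete, something as
small as the sketch below might already be enough to hide
PETSC_COMM_WORLD and the raw MPI calls from user code. This is only a
sketch, nothing like it exists in DOLFIN yet, and the class and function
names are placeholders:

  // Hypothetical wrapper around the MPI environment. Assembly code
  // would ask this class for the process number and the number of
  // processes instead of calling MPI directly. Assumes MPI has already
  // been initialized (for instance by PetscInitialize()).
  #include <mpi.h>

  class PEnvironment
  {
  public:

    static unsigned int processNumber()
    {
      int rank = 0;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      return static_cast<unsigned int>(rank);
    }

    static unsigned int numProcesses()
    {
      int size = 1;
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      return static_cast<unsigned int>(size);
    }
  };

The assembler would then only need to know which range of cells (or
matrix rows) belongs to the local process.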

> Eventually, when partitions of the mesh are distributed to the processors 
> (or we can create sub-meshes), the assembly functions shouldn't need to 
> know whether we are running in parallel or not. I/O will be the most 
> difficult part, and we'll need to think about Functions when the 
> underlying mesh/vector is distributed across processors.

Agree.

> > Should we let PETSc initialize MPI or is there a way we can do it and
> > then tell PETSc how we have initialized?
> 
> For now, it's simplest to let PETSc initialise MPI. This can be changed 
> later if we need to.

ok.
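
For reference, letting PETSc initialise MPI just means calling
PetscInitialize() before anything else and PetscFinalize() at the end;
PETSc then calls MPI_Init()/MPI_Finalize() itself if MPI is not already
running. A minimal sketch of such a driver (not the actual code in
src/test/passembly):

  #include <cstdio>
  #include <petsc.h>

  int main(int argc, char* argv[])
  {
    // PETSc initialises MPI here (and finalises it below).
    PetscInitialize(&argc, &argv, 0, 0);

    int rank = 0, size = 0;
    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
    MPI_Comm_size(PETSC_COMM_WORLD, &size);
    std::printf("This is process %d of %d\n", rank, size);

    // ... create distributed matrix/vector and assemble ...

    PetscFinalize();
    return 0;
  }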

> One thing to consider is more sophisticated dof mapping. A dedicated 
> class to take care of this would be useful for meshes with mixed 
> cell/element types, parallel assembly and computing sparsity patterns.

Yes, we could create a separate class in src/fem/ that takes care of
this. Name suggestions: NodeMap, NodeMapping, Nodes, Dofs, DofHandler,
...

FFC generates *one* local-to-global mapping but we are free to reorder
what FFC generates. The new class should take input from FFC, reorder
it and then make the new ordering accessible during assembly.
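
Roughly, I am thinking of an interface along these lines (only a
sketch, and the name is just one of the suggestions above):

  // Hypothetical dof-mapping class: takes the local-to-global mapping
  // generated by FFC, optionally reorders it (e.g. so that each process
  // owns a contiguous block of rows), and hands the final ordering to
  // the assembler.
  #include <vector>

  class DofMap
  {
  public:

    // dofs[i][j] is the global number of local dof j on cell i, as
    // generated by FFC.
    DofMap(const std::vector<std::vector<unsigned int> >& ffc_dofs)
      : dofs(ffc_dofs) {}

    // Apply a renumbering: new_number[old_number] = new dof number.
    void reorder(const std::vector<unsigned int>& new_number)
    {
      for (unsigned int i = 0; i < dofs.size(); i++)
        for (unsigned int j = 0; j < dofs[i].size(); j++)
          dofs[i][j] = new_number[dofs[i][j]];
    }

    // Used by the assembler when inserting the element matrix of cell i.
    const std::vector<unsigned int>& cellDofs(unsigned int i) const
    {
      return dofs[i];
    }

  private:

    std::vector<std::vector<unsigned int> > dofs;
  };

The same class could later also compute or cache the sparsity pattern.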

> Is there a preference to eventually add partitioning functions in 
> src/kernel/mesh, or src/kernel/partition?

I suggest src/kernel/mesh since this is at the same level as
refinement and coarsening.
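
Concretely, it could start out as a single free function next to the
refinement code. A hypothetical signature (how the result is stored,
e.g. a plain vector or a MeshFunction, is still open):

  // Hypothetical interface in src/kernel/mesh (not implemented): given
  // a mesh and the number of processes, assign a partition number in
  // [0, num_partitions) to every cell, e.g. by calling METIS/ParMETIS
  // on the dual graph of the mesh.
  #include <vector>

  namespace dolfin
  {
    class Mesh;

    void partition(const Mesh& mesh,
                   unsigned int num_partitions,
                   std::vector<unsigned int>& cell_partition);
  }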

/Anders

