Anders Logg wrote:
> On Fri, Dec 01, 2006 at 03:49:42PM +0100, Garth N. Wells wrote:
>> DOLFIN wrote:
>>> One or more new changesets pushed to the primary DOLFIN repository.
>>> A short summary of the last three changesets is included below.
>>>
>>> changeset:   2478:8aeea68fcf1ca3dc758c9fb417d5da35bd1de922
>>> tag:         tip
>>> user:        "Garth N. Wells <g.n.wells@xxxxxxxxxx>"
>>> date:        Fri Dec 01 15:41:19 2006 +0100
>>> files:       src/test/passembly/Makefile src/test/passembly/main.cpp
>>> description:
>>> Add test file for parallel assembly. Results appear OK for 2D
>>> Poisson equation.
>>
>> To run the test, you need to add the path to the ParMETIS header
>> files and libraries manually in src/test/passembly/Makefile, and
>> compile DOLFIN with PETSc enabled. Then to assemble using 4
>> processes, do
>>
>>     mpirun -np 4 ./dolfin_parallel-test
>>
>> You can use however many processes you want.
>>
>> Garth
>
> This looks very good. We should be able to create a simple
> abstraction for the parallelization where one does not need to see
> the MPI calls or PETSC_COMM_WORLD.
I'm not sure that the assembly is correct, so I'm planning to play with it for a while to understand things better. Once I do, it should be possible to hide the details; I'll think about it some more.
Eventually, when partitions of the mesh are distributed to the processors (or we can create sub-meshes), the assembly functions shouldn't need to know whether we are running in parallel or not. IO will be the most difficult part, and we'll need to think about Functions when the underlying mesh/vector is distributed across processors.
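Roughly, the goal is something like the sketch below (all names are hypothetical, just to illustrate the idea; this is not working DOLFIN code):

    // Hypothetical sketch: the call looks the same whether the mesh
    // lives on one process or is distributed across many.
    #include <dolfin.h>

    using namespace dolfin;

    int main()
    {
      // In parallel this would be just the local partition of a global
      // mesh; in serial it is the whole mesh. The caller cannot tell.
      UnitSquare mesh(64, 64);

      // Stand-in for an FFC-generated bilinear form for Poisson
      PoissonBilinearForm a;

      Matrix A;

      // assemble() would decide internally whether dofs shared between
      // processes need to be exchanged; no MPI calls or
      // PETSC_COMM_WORLD visible at this level.
      assemble(A, a, mesh);

      return 0;
    }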
> Should we let PETSc initialize MPI or is there a way we can do it and
> then tell PETSc how we have initialized?
For now, it's simplest to let PETSc initialise MPI. This can be changed later if we need to.
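For reference, the standard pattern looks like this (plain PETSc, nothing DOLFIN-specific):

    // PetscInitialize() calls MPI_Init() itself if MPI has not been
    // initialised yet, and PetscFinalize() then calls MPI_Finalize().
    #include <petsc.h>

    int main(int argc, char* argv[])
    {
      PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);

      // If we later want to initialise MPI ourselves, the reverse also
      // works: PETSc checks MPI_Initialized() and skips MPI_Init() if
      // MPI is already up (it then leaves finalisation to us as well).
      int rank;
      MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

      // ... the parallel assembly test would go here ...

      PetscFinalize();
      return 0;
    }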
One thing to consider is a more sophisticated dof mapping. It would be worth considering a special class to take care of this, which would be useful for meshes with mixed cell/element types, for parallel assembly and for computing sparsity patterns.
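Something along these lines, say (a hypothetical interface, just to sketch the idea; nothing like this exists in DOLFIN yet):

    class SparsityPattern;  // hypothetical class describing matrix nonzeros

    // One object owns the local-to-global dof numbering, so mixed cell
    // types, parallel renumbering and sparsity computation all go
    // through the same place.
    class DofMap
    {
    public:

      // Total number of dofs (dofs owned by this process in parallel)
      unsigned int size() const;

      // Tabulate the global dof numbers for the dofs on a given cell;
      // this is where mixed cell/element types would be handled
      void tabulate_dofs(unsigned int cell_index, unsigned int* dofs) const;

      // Build the matrix sparsity pattern from cell-wise dof overlaps
      void build_sparsity_pattern(SparsityPattern& pattern) const;
    };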
Is there a preference for where partitioning functions should eventually go: src/kernel/mesh or src/kernel/partition?
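Wherever they end up, I'm imagining a signature along these lines (hypothetical, declaration only):

    // Compute a cell-to-process mapping (via ParMETIS) and return it
    // as a MeshFunction over the cells.
    namespace dolfin
    {
      void partition(Mesh& mesh, unsigned int num_processes,
                     MeshFunction<unsigned int>& partitions);
    }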
Garth
> /Anders