
PETSc and MPI

Hi there

I have been trying to get into some MPI programming using dolfin, but have
run across some difficulties.

From what I can gather from comments on the mailing list, parallel support
is somewhat limited at the moment, and I hope I can contribute by having a
look at this.

For my particular application I need to solve a number of eigensystems, and
the only way to do that in dolfin is to use SLEPc (and thus PETSc).  These
systems are actually independent of each other, so I would like to write an
MPI program that distributes the mesh (which is common to each solution)
and then has each process, on its own, assemble the matrices and solve the
resulting eigensystems using SLEPc.
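
To make that concrete, here is a rough sketch of the rank-local pattern I
am after, written directly against the SLEPc C API rather than through
dolfin (the 2x2 matrix is just a stand-in for my assembled operators, and
the cleanup calls assume a recent PETSc, where they take pointers).  The
point is that creating the matrix and the solver on PETSC_COMM_SELF instead
of PETSC_COMM_WORLD keeps everything sequential on each rank, even under
mpirun:

    #include <slepceps.h>

    int main(int argc, char **argv)
    {
      SlepcInitialize(&argc, &argv, NULL, NULL);

      // A small sequential matrix, private to this rank
      Mat A;
      MatCreateSeqAIJ(PETSC_COMM_SELF, 2, 2, 2, NULL, &A);
      MatSetValue(A, 0, 0, 2.0, INSERT_VALUES);
      MatSetValue(A, 1, 1, 3.0, INSERT_VALUES);
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

      // An eigensolver living on this rank only
      EPS eps;
      EPSCreate(PETSC_COMM_SELF, &eps);
      EPSSetOperators(eps, A, NULL);
      EPSSetProblemType(eps, EPS_HEP);
      EPSSolve(eps);

      PetscInt nconv;
      EPSGetConverged(eps, &nconv);
      PetscPrintf(PETSC_COMM_SELF, "converged %d pairs\n", (int)nconv);

      EPSDestroy(&eps);
      MatDestroy(&A);
      SlepcFinalize();
      return 0;
    }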

Now, the problem is that as soon as I execute my program in parallel
(mpirun -np 2 mpi_test, for example), the Assembler assumes it is supposed
to run in parallel and the PETScMatrix is also distributed.  My idea for
circumventing this is to provide an additional constructor that lets the
user set the *parallel* member variable, and to add such a constructor and
variable to PETScMatrix() as well.  This seems to work, and I am able to
assemble two different matrices in two different processes.  Does anybody
have comments or ideas about the feasibility of this approach, or about
what the proper way to do it would be?
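
For illustration, the usage I have in mind looks roughly like this (the
bool arguments are the constructors I am proposing, not anything that
exists in dolfin today, and the exact assemble signature will depend on
the dolfin version):

    // Proposed usage -- the bool arguments are new, not existing API
    PETScMatrix A(false);        // force a sequential, rank-local matrix
    Assembler assembler(false);  // force sequential assembly
    assembler.assemble(A, a);    // 'a' is the bilinear form on this rank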

Thanks
Evan