dolfin team mailing list archive
Message #05416
Re: Parallelization
On 8/27/07, Anders Logg <logg@xxxxxxxxx> wrote:
> jesperc wrote:
> > Hi,
> >
> 4. The first thing we need is to be able to communicate Mesh and
> MeshFunction between processes. There is a sketch in
> src/kernel/mesh/MPIMeshCommunicator.cpp (empty). One process reads the
> mesh, partitions it and sends it to all processes. (To begin with, each
> process will have a copy of the mesh.)
I am attaching a preprint which details our current scheme for this.
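A minimal sketch of that broadcast step with raw MPI, assuming the mesh
is flattened into vertex coordinates plus cell connectivity (the
FlatMesh layout and the broadcastMesh() name are illustrative only,
not DOLFIN API):

#include <mpi.h>
#include <vector>

struct FlatMesh
{
  std::vector<double> coords; // x0, y0, x1, y1, ... (2D for brevity)
  std::vector<int> cells;     // v0, v1, v2 per triangle
};

// Rank 0 reads the mesh; everyone else receives a full copy.
void broadcastMesh(FlatMesh& mesh, MPI_Comm comm)
{
  int rank;
  MPI_Comm_rank(comm, &rank);

  // Broadcast the sizes first so receivers can allocate storage
  int sizes[2] = { static_cast<int>(mesh.coords.size()),
                   static_cast<int>(mesh.cells.size()) };
  MPI_Bcast(sizes, 2, MPI_INT, 0, comm);

  if (rank != 0)
  {
    mesh.coords.resize(sizes[0]);
    mesh.cells.resize(sizes[1]);
  }

  // Then broadcast the actual data
  MPI_Bcast(mesh.coords.data(), sizes[0], MPI_DOUBLE, 0, comm);
  MPI_Bcast(mesh.cells.data(), sizes[1], MPI_INT, 0, comm);
}

Once every process holds the full copy, partitioning data (e.g. a
MeshFunction assigning each cell to a process) can be broadcast the
same way.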
> 5. The tricky part will be how to order degrees of freedom (class
> DofMap). And this relates to ongoing work on merging the linear algebra
> interfaces of DOLFIN and PyCC (Simula in-house code). We need to find
> a design that works well for communicating both with PETSc and Epetra
> (Trilinos).
You can see how we are doing it in PETSc by looking at src/dm/mesh/sieve/Mesh.hh
in setupField(). However, this is still incomplete: it does not yet
handle vector elements correctly. I am fixing this now.
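For reference, one common numbering scheme (the same contiguous-block
layout PETSc uses for Vec ownership ranges) can be sketched with a
prefix sum over the locally owned dof counts. This is illustrative
only, not the Sieve setupField() code; vector elements enter through
the block size, since each mesh entity then carries one dof per
component, which is the case being fixed above:

#include <mpi.h>

// Returns the first global dof index owned by this process, given
// the number of locally owned mesh entities and the element block
// size (1 for scalar elements, the component count for vector ones).
long globalDofOffset(long numOwnedEntities, int blockSize, MPI_Comm comm)
{
  long localDofs = numOwnedEntities * blockSize;
  long offset = 0;
  // Exclusive prefix sum over ranks: offset = dofs on lower ranks
  MPI_Exscan(&localDofs, &offset, 1, MPI_LONG, MPI_SUM, comm);
  int rank;
  MPI_Comm_rank(comm, &rank);
  if (rank == 0)
    offset = 0; // MPI_Exscan leaves rank 0's result undefined
  return offset;
}

Local dof i on a process then gets global index offset + i; dofs on
shared entities must be mapped to the numbering of the process that
owns them.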
Thanks,
Matt
> I'm sure Garth and Martin will have further opinions.
>
> /Anders
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener