dolfin team mailing list archive - Message #05677
Re: Parallel assembly
On Fri, Nov 30, 2007 at 12:55:30PM +0000, Garth N. Wells wrote:
>
>
> Anders Logg wrote:
> > Magnus has done some initial work on implementing the missing
> > functions in MPIMeshCommunicator for broadcasting Mesh and
> > MeshFunction. (It seems to work but we need to clean it up a bit
> > before pushing.)
> >
>
> Any idea when it will be pushed? I was just starting to look at it, so
> I'll wait but don't want to wait too long and lose my enthusiasm.
>
>
> > To get further, we need to decide how to handle the parallel dof maps.
> > There is a class PdofMap in the sandbox. What does this do? (Garth)
> >
>
> It rearranges the dof map based on the mesh partition. From memory,
> dofs belonging to process 0 are numbered 0 -> m, dofs belonging to process
> 1 are numbered m+1 -> n, etc.
ok, simple enough.
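
A minimal sketch (editorial, not the actual PdofMap code) of the process-wise renumbering described above: given the owning process of each dof, the dofs owned by process 0 get the lowest indices, then process 1, and so on. All names are illustrative.

    #include <vector>

    // Return a new, process-contiguous index for every dof, given the
    // owning process of each dof (dof_owner[i] = process that owns dof i).
    std::vector<unsigned int>
    renumber_by_process(const std::vector<unsigned int>& dof_owner,
                        unsigned int num_processes)
    {
      const unsigned int num_dofs = dof_owner.size();

      // Count the dofs owned by each process
      std::vector<unsigned int> count(num_processes, 0);
      for (unsigned int i = 0; i < num_dofs; ++i)
        count[dof_owner[i]]++;

      // Process p starts numbering after all dofs of processes 0..p-1
      std::vector<unsigned int> offset(num_processes, 0);
      for (unsigned int p = 1; p < num_processes; ++p)
        offset[p] = offset[p - 1] + count[p - 1];

      // Assign each dof its new index in owner order
      std::vector<unsigned int> new_index(num_dofs);
      for (unsigned int i = 0; i < num_dofs; ++i)
        new_index[i] = offset[dof_owner[i]]++;

      return new_index;
    }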
> > Should we clean it up and add it to the library? (And should we name
> > it pDofMap?)
> >
>
> First step is to restrict the appearance of ufc::dof_map to the class
> dolfin::DofMap. Hopefully we can get everything into DofMap and won't
> need pDofMap.
Let's hope so.
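
To illustrate the encapsulation Garth describes, here is a rough sketch of a DofMap that holds the ufc::dof_map privately, so the rest of the library only ever sees dolfin::DofMap. It assumes the UFC 1.x interface (global_dimension(), tabulate_dofs(dofs, mesh, cell)); everything else is illustrative.

    #include <ufc.h>

    class DofMap
    {
    public:

      DofMap(ufc::dof_map& dof_map, const ufc::mesh& mesh)
        : ufc_dof_map(dof_map), ufc_mesh(mesh) {}

      // Forward the queries the assembler needs
      unsigned int global_dimension() const
      { return ufc_dof_map.global_dimension(); }

      void tabulate_dofs(unsigned int* dofs, const ufc::cell& cell) const
      { ufc_dof_map.tabulate_dofs(dofs, ufc_mesh, cell); }

    private:

      // The only place in the library where ufc::dof_map appears
      ufc::dof_map& ufc_dof_map;
      const ufc::mesh& ufc_mesh;
    };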
> During the initial development, it might be useful to have pDofMap and
> pAssembler.
Sounds good.
--
Anders
>
> > Also, how should we handle the selection between
> >
> > MatCreateSeqAIJ
> >
> > and
> >
> > MatCreateMPIAIJ
> >
> > in PETScMatrix? My suggestion would be to just add a simple check in
> > the constructors, something like
> >
> > if (MPIManager::numProcesses() > 1)
> >     MatCreateMPIAIJ()
> >     ...
> > else
> >     MatCreateSeqAIJ()
> >
>
> Sounds ok, although the initialisation of parallel matrices requires
> more information than that of sequential matrices (global size, local size, etc.).
>
> Garth
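
A rough sketch of the check Anders suggests, assuming PETSc's documented signatures for MatCreateSeqAIJ and MatCreateMPIAIJ (the MPIAIJ variant needs local sizes in addition to global sizes, which is the extra information Garth mentions). The function name and the simple preallocation are illustrative only.

    #include <petscmat.h>

    // M, N: global dimensions; m, n: local (per-process) dimensions;
    // nz: rough number of nonzeros per row used for preallocation.
    void init_petsc_matrix(Mat* A, PetscInt M, PetscInt N,
                           PetscInt m, PetscInt n, PetscInt nz)
    {
      int num_processes = 0;
      MPI_Comm_size(PETSC_COMM_WORLD, &num_processes);

      if (num_processes > 1)
      {
        // Parallel matrix: local sizes are required in addition to global sizes
        MatCreateMPIAIJ(PETSC_COMM_WORLD, m, n, M, N,
                        nz, NULL, nz, NULL, A);
      }
      else
      {
        // Sequential matrix: the global size is enough
        MatCreateSeqAIJ(PETSC_COMM_SELF, M, N, nz, NULL, A);
      }
    }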
> _______________________________________________
> DOLFIN-dev mailing list
> DOLFIN-dev@xxxxxxxxxx
> http://www.fenics.org/mailman/listinfo/dolfin-dev