
dolfin team mailing list archive

Re: Parallel assembly

 

On Dec 7, 2007 8:07 AM, Anders Logg <logg@xxxxxxxxx> wrote:
> This still sounds abstract to me. How do you implement it, and what
> is a name? Is it an integer? So you assemble into entries in the
> matrix that are triples (k, i, j), where k is the name of the
> subdomain, and then you communicate/move things around to get the
> global matrix?
>
> (I'm sure our approach is limiting, but we need to start somewhere.)

Sorry if it sounds abstract, but of course this is all implemented and running
on large, parallel problems. Sieve is templated over the identifier, so it can
be whatever you want; the default is an integer. For matrix assembly I just use
the PETSc mechanism right now, so you need a global numbering, but afterwards
you can throw it away. I plan to eliminate this step. Vector assembly is
already done through VecScatters, which do not reference a global numbering:
they match pairs (p, e) on different processes. The operator would work in a
similar fashion by matching (p, e) --> (q, cone(f)).
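A minimal sketch of the PETSc assembly mechanism referred to above, assuming a
current PETSc API (MatCreateAIJ, MatSetValues with ADD_VALUES); the matrix size
and inserted values are placeholders and error checking is omitted:

    #include <petscmat.h>

    // Each process inserts entries using *global* indices; contributions that
    // belong to other processes are cached locally and communicated during
    // MatAssemblyBegin/End. This is the global-numbering step mentioned above.
    static PetscErrorCode assemble_demo(MPI_Comm comm)
    {
      Mat A;
      PetscInt rstart, rend;
      const PetscScalar one = 1.0;

      MatCreateAIJ(comm, PETSC_DECIDE, PETSC_DECIDE, 10, 10, 3, NULL, 3, NULL, &A);
      MatGetOwnershipRange(A, &rstart, &rend);

      for (PetscInt i = rstart; i < rend; i++)
        MatSetValues(A, 1, &i, 1, &i, &one, ADD_VALUES);

      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);  // off-process values move here
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
      MatDestroy(&A);
      return 0;
    }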

  Matt

> --
> Anders
>
>
>
> > > >> > Adapt mesh reading for the new representation: store mesh data
> > > >> > based on the number of local cells/vertices instead of the parsed
> > > >> > numbers. This modification allows processors to read different
> > > >> > parts of the mesh in parallel, making an initial distribution step
> > > >> > unnecessary.
> > > >> >
> > > >> > Loading meshes in parallel should increase efficiency, reduce
> > > >> > costly communication and save memory for large-scale problems,
> > > >> > given that the parallel environment has a shared file system that
> > > >> > can handle the load. However, the serial distribution should still
> > > >> > be implemented to support environments without shared file systems.
> > > >> >
> > > >> > Modifications for the new representation should be implemented in
> > > >> > the XMLMesh class. Functions for the initial mesh distribution
> > > >> > should be implemented in a new class.
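A hypothetical sketch of the partitioned reading proposed above: each process
computes the half-open range of cells it is responsible for and reads only that
slice of the file. The helper name and the simple block distribution are
illustrative, not existing DOLFIN code:

    #include <mpi.h>
    #include <algorithm>
    #include <utility>

    // Block distribution of num_global_cells over the processes in comm;
    // cells [begin, end) are read by the calling process.
    std::pair<long, long> local_cell_range(long num_global_cells, MPI_Comm comm)
    {
      int rank = 0, size = 1;
      MPI_Comm_rank(comm, &rank);
      MPI_Comm_size(comm, &size);

      const long chunk = num_global_cells / size;
      const long rest  = num_global_cells % size;          // spread the remainder
      const long begin = rank * chunk + std::min<long>(rank, rest);
      const long end   = begin + chunk + (rank < rest ? 1 : 0);
      return std::make_pair(begin, end);
    }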
> > > >>
> > > >> For this, we should add optional data to the mesh format, such that
> > > >> the current file format still works. If additional data is present,
> > > >> then that is read into MeshNumbering, otherwise it is empty.
> > >
> > > Yes, that is natural (the serial/regular mesh format should still work
> > > without modifications). However, I am not sure that the MeshNumbering
> > > approach is a good solution.
> > >
> > > >> (When I think of it, MeshNumbering may not be a good choice of name
> > > >> for the new class, since it may be confused with MeshOrdering which
> > > >> does something different but related.)
> > > >>
> > > >> > Change the mesh partitioning library to ParMETIS. Modify the
> > > >> > partitioning class to work on distributed data, add the necessary
> > > >> > calls to METIS and redistribute the local vertices/cells according
> > > >> > to the result. Since METIS can partition a mesh directly, using an
> > > >> > internal mesh-to-graph translation, it is possible to do the
> > > >> > partitioning directly in the MeshPartition class. However, both
> > > >> > methods could easily be implemented and compared against each other.
> > > >>
> > > >> We don't want to change from SCOTCH to ParMETIS, but we could add
> > > >> support for using METIS/ParMETIS as an option.
> > >
> > > Yes, that is right. The implementation of the basic algorithms should
> > > look more or less the same for SCOTCH and ParMETIS, and then we could
> > > add extra optional functionality based on ParMETIS until we find
> > > corresponding functionality in SCOTCH.
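A minimal sketch of the direct mesh partitioning mentioned in the proposal
above, assuming the METIS 5 API (METIS_PartMeshDual), which builds the dual
graph internally; the CSR mesh arrays and function name are illustrative only:

    #include <metis.h>
    #include <vector>

    // Partition the cells of a triangle mesh given in CSR form (eptr/eind).
    // epart[c] gives the destination part for cell c; npart is the vertex
    // partition METIS computes as a by-product.
    std::vector<idx_t> partition_cells(std::vector<idx_t>& eptr,
                                       std::vector<idx_t>& eind,
                                       idx_t num_vertices, idx_t num_parts)
    {
      idx_t ne = static_cast<idx_t>(eptr.size()) - 1;   // number of cells
      idx_t nn = num_vertices;
      idx_t ncommon = 2;             // shared vertices defining a dual edge (triangles)
      idx_t objval = 0;
      std::vector<idx_t> epart(ne), npart(nn);

      METIS_PartMeshDual(&ne, &nn, eptr.data(), eind.data(),
                         NULL, NULL,                    // no weights or sizes
                         &ncommon, &num_parts,
                         NULL, NULL,                    // default tpwgts and options
                         &objval, epart.data(), npart.data());
      return epart;
    }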
> > >
> > > > Have you thought about generalizing the partitioning to hypergraphs?
> > > > I just did this so I can partition faces (for FVM), and it was not
> > > > that bad. I use Zoltan from Sandia.
> > >
> > > Sounds interesting. I do not know about this, but it may be worth looking at.
> > >
> > > >> > Finish implementation of mesh communication class MPIMeshCommunicator.
> > > >> > Add functionality for single vertex and cell communication needed
> > > >> > for mesh partitioning.
> > > >>
> > > >> What do you mean by single vertex and cell communication? Also note
> > > >> that it is not enough to communicate indices for vertices and
> > > >> cells. Sometimes we also need to communicate edges and faces.
> > > >
> > > > That is why you should never explicitly refer to vertices and cells,
> > > > but rather communicate the entire closure and star of each element
> > > > that you send. That is the point of the mesh structure: to avoid this
> > > > kind of special-purpose coding.
> > > >
> > > >> > Adapt the boundary calculation to work on distributed meshes. Use
> > > >> > knowledge about which vertices are shared among processors to
> > > >> > decide whether an edge is global or local. Implement the logic
> > > >> > directly in the BoundaryComputation class using information from
> > > >> > the mesh partitioning.
> > > >>
> > > >> I'm not sure I understand this point.
> > >
> > > If you only have a part of the mesh, as is the case for a fully
> > > distributed mesh, then you need to know whether your Boundary (of the
> > > local Mesh) is an internal boundary, over which you communicate with
> > > the neighboring partitions, or an external boundary, on which you
> > > apply boundary conditions.
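A hypothetical sketch of the distinction described above: a facet on the
boundary of the local mesh is treated as a partition (internal) boundary when
all of its vertices are shared with another process. The types and the
shared-vertex set are placeholders rather than DOLFIN API:

    #include <cstddef>
    #include <set>
    #include <vector>

    // Returns true if the facet lies between two partitions (internal
    // boundary), false if it belongs to the physical (external) boundary.
    bool is_partition_boundary_facet(const std::vector<int>& facet_vertices,
                                     const std::set<int>& shared_vertices)
    {
      for (std::size_t i = 0; i < facet_vertices.size(); ++i)
        if (shared_vertices.count(facet_vertices[i]) == 0)
          return false;   // a purely local vertex => external boundary facet
      return true;
    }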
> > >
>
> > > >> > Modify the assembly process with a mapping function which maps
> > > >> > dof_map indices from local to global prior to updating the global
> > > >> > tensor. Implement the call in the Assembler class using functions
> > > >> > from the Mesh class.
> > > >>
> > > >> It might be enough to modify UFCCell::update().
> > >
> > > Ok. Sounds good.
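A hypothetical sketch of the local-to-global mapping step described in the item
above, applied to a cell's dof indices just before the element tensor is added
to the global matrix; the names and the mapping array are placeholders:

    #include <cstddef>
    #include <vector>

    // Rewrite the cell's local dof indices in place using a precomputed
    // local-to-global map, so that the subsequent tensor update can use
    // global indices (e.g. MatSetValues with ADD_VALUES).
    void map_dofs_to_global(std::vector<int>& cell_dofs,
                            const std::vector<int>& local_to_global)
    {
      for (std::size_t i = 0; i < cell_dofs.size(); ++i)
        cell_dofs[i] = local_to_global[cell_dofs[i]];
    }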
> > >
> > > >> > Change the PETSc data types to MPI (PETScMatrix, PETScVector).
> > > >> > Change the PETSc solver environment to use the correct MPI
> > > >> > communicator (all PETSc solver classes).
> > > >>
> > > >> We need to determine whether to use MPI or Seq PETSc types depending
> > > >> on whether we are running in parallel.
> > > >
> > > > We have types for this like AIJ and the default Vec.
> > >
> > > Sounds good.
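A minimal sketch of the point made above about the AIJ type: creating the
matrix with the generic MATAIJ type lets PETSc choose the sequential or MPI
variant from the size of the communicator, so no explicit Seq/MPI switch is
needed. This assumes a current PETSc API (details differed in the 2007 release):

    #include <petscmat.h>

    // MATAIJ resolves to SEQAIJ on one process and MPIAIJ otherwise,
    // so the same code path works in serial and in parallel.
    PetscErrorCode create_system_matrix(MPI_Comm comm, PetscInt M, PetscInt N, Mat* A)
    {
      MatCreate(comm, A);
      MatSetSizes(*A, PETSC_DECIDE, PETSC_DECIDE, M, N);
      MatSetType(*A, MATAIJ);
      MatSetUp(*A);
      return 0;
    }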
> > >
> > > /Johan
> > >
> > >
> > >
> > > >
> > > >    Matt
> _______________________________________________
> DOLFIN-dev mailing list
> DOLFIN-dev@xxxxxxxxxx
> http://www.fenics.org/mailman/listinfo/dolfin-dev
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

