Re: Recent changes
On Fri, May 15, 2009 at 01:37:52PM +0100, Garth N. Wells wrote:
>
>
> Anders Logg wrote:
> > On Fri, May 15, 2009 at 01:06:44PM +0100, Garth N. Wells wrote:
> >>
> >> Martin Sandve Alnæs wrote:
> >>> On Fri, May 15, 2009 at 1:37 PM, Anders Logg <logg@xxxxxxxxx> wrote:
> >>>> There have been some big changes to the code lately. Here's a summary:
> >>>>
> >>>> 1. We now use the wrappers module in dolfin_utils to generate the
> >>>> DOLFIN wrapper code (in both FFC and SFC). This module generates
> >>>> slightly different code from the old FFC wrappers. Most notably,
> >>>> typedefs are used to avoid code duplication and classes/namespaces are
> >>>> now nested. For application code, that means one must change from
> >>>>
> >>>> PrefixBilinearForm
> >>>> PrefixLinearForm
> >>>> PrefixTestSpace
> >>>> PrefixTrialSpace
> >>>> to
> >>>>
> >>>> Prefix::BilinearForm
> >>>> Prefix::LinearForm
> >>>> Prefix::BilinearForm::TestSpace
> >>>> Prefix::BilinearForm::TrialSpace
> >>>>
> >>>> etc.
> >>>>
> >>>> If all test and trial spaces are equal, then a common class named
> >>>>
> >>>> Prefix::FunctionSpace
> >>>>
> >>>> will be created (a typedef).
> >>>>
> >>>> Some of you may remember that we've had this interface before, but
> >>>> then had to remove it due to problems with SWIG. Now that SWIG just
> >>>> looks at the pure UFC code, it's not a problem anymore.
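> >>>>
> >>>> As a minimal usage sketch (assuming a form file compiled with
> >>>> prefix Poisson, an existing Mesh mesh and a coefficient Function f):
> >>>>
> >>>>   Poisson::FunctionSpace V(mesh);
> >>>>   Poisson::BilinearForm a(V, V);
> >>>>   Poisson::LinearForm L(V);
> >>>>   L.f = f;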
> >>> Also note that forms and function spaces for coefficients are available by name:
> >>>
> >>> a = c*u*v*dx
> >>> L = f*v*dx
> >>> ->
> >>> Prefix::Form_a
> >>> Prefix::Form_L
> >>> Prefix::Form_a::CoefficientSpace_c
> >>> Prefix::Form_L::CoefficientSpace_f
> >>>
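> >>> For instance, a coefficient can be built directly on its named
> >>> space (a sketch, assuming an existing Mesh mesh):
> >>>
> >>>   Prefix::Form_a::CoefficientSpace_c Vc(mesh);
> >>>   Function c(Vc);
> >>>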
> >> This is very useful and a nice improvement.
> >>
> >>> and coefficients can be set easily using
> >>>
> >>> Prefix::CoefficientSet coeffs;
> >>> coeffs.c = my_function_c;
> >>> coeffs.f = my_function_f;
> >>> Prefix::Form_a a(V, V, coeffs);
> >>> Prefix::Form_L L(V, coeffs);
> >>>
> >>> which avoids duplication of lines like "my_form.f = my_function_f;"
> >>> for coefficients shared by multiple forms.
> >>>
> >> Nice.
> >>
> >>
> >>>> 2. Initialization of mesh entities now happens in the constructor of
> >>>> DofMap. The initialization happens automatically only if the new
> >>>> non-const (wrt Mesh) constructor of FunctionSpace is used. If the
> >>>> const version is used, then an error message is given. So if you solve
> >>>> something with P2 elements and need the edges, these must either first
> >>>> be generated using mesh.init(1) or the non-const constructor must be
> >>>> used. Most demos should remain unchanged.
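> >>>>
> >>>> For example, to use the const version safely one might write (a
> >>>> sketch, assuming a P2 form compiled with prefix Poisson):
> >>>>
> >>>>   Mesh mesh("mesh.xml");
> >>>>   mesh.init(1);  // generate edges up front for the P2 dof map
> >>>>   const Mesh& cmesh(mesh);
> >>>>   Poisson::FunctionSpace V(cmesh);  // const constructor is now safe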
> >>>>
> >>>> 3. When running in parallel not only will the entities be generated,
> >>>> but they will also be numbered globally. So all mesh entities have a
> >>>> unique global index accessible in the data section of a mesh (named
> >>>> "global entity indices %d"). The global dof map should also be
> >>>> computed correctly now but I haven't checked. This means we may in
> >>>> principle assemble in parallel now. But probably not in
> >>>> practice... :-)
> >>>
> >>> Great :)
> >>>
> >> Very good. What's the situation with mesh i/o? Is it serial?
> >
> > No, that's also parallel (at least the i part). Here's a quick summary:
> >
> > 1. The mesh is read in parallel, each process reading data into LocalMeshData.
> >
> > 2. The cells are partitioned in parallel using ParMETIS (we can add an
> > interface to SCOTCH later).
> >
> > 3. The cells are redistributed among the processes according to the
> > partition.
> >
> > 4. The vertices are redistributed among the processes according to the
> > cell distribution.
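> >
> > From the application side the intent is that nothing changes; a
> > sketch of the intended usage:
> >
> >   Mesh mesh("mesh.xml");  // under MPI: each process reads into
> >                           // LocalMeshData, the cells are partitioned
> >                           // with ParMETIS and redistributed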
> >
>
> Are facets also distributed (for DG)?
Yes, but DG won't work until later.
The mesh is partitioned such that processes may share faces (and
vertices and edges) but never cells. So the faces will be numbered
correctly for DG, but an interior face on the boundary to a
neighboring partition will be missing one of its neighboring cells,
so the assembler will treat it as an exterior face.
This can be worked around by distributing the mesh with an overlap
(cells shared along partitioned boundaries) and indicators for which
process should assemble over those cells.
--
Anders
> > 5. Each process then sits with its own Mesh (just a standard DOLFIN
> > mesh) for its part of the domain. The data for parallel computation is
> > stored as auxiliary data in the data section:
> >
> > "num global entities"
> > "global_entity_indices 0"
> > "global_entity_indices 1"
> > ...
> > "overlap"
> >
> > 6. Mesh entities are created in the DofMap constructor and then
> > numbered globally.
> >
> > 7. UFCCell is parallel aware and fills in the ufc::cell data using the
> > global entity indices (from the mesh data section) rather than the
> > local entity indices (from the mesh topology) when running in
> > parallel.
> >
> > 8. Assembly should then essentially work. But note that the dof map is
> > not very efficient since it may be spread all over the place. To make
> > it efficient, we need to renumber the dofs. Having unique global
> > entity indices guarantees a correct dof map, but it may not be
> > optimally ordered. So the reordering needs to be (re)implemented to
> > make it efficient. We also need to look over the initialization of
> > sparsity patterns and parallel matrices.
> >
>
> OK.
>
> > Most things are handled by the class MeshPartitioning.
> >
>
> All sounds good. It would be nice to get rid of PVTKFile at some point
> and have VTKFile take care of parallel and serial VTK output.
>
> Garth