
Re: [HG dolfin] Implement assembly over interior facets. The result seems to

 

Quoting Anders Logg <logg@xxxxxxxxx>:

> On Tue, Mar 13, 2007 at 05:03:14PM +0100, Garth N. Wells wrote:
> > 
> > 
> > Martin Sandve Alnæs wrote:
> > > 2007/3/12, Anders Logg <logg@xxxxxxxxx>:
> > >> On Sat, Mar 10, 2007 at 04:23:55PM +0100, Martin Sandve Alnæs wrote:
> > >>> 2007/3/9, Garth N. Wells <g.n.wells@xxxxxxxxxx>:
> > >>>> Anders Logg wrote:
> > >>>>> On Fri, Mar 09, 2007 at 07:10:31PM +0100, Garth N. Wells wrote:
> > >>>>>> Anders Logg wrote:
> > >>>>>>> On Fri, Mar 09, 2007 at 03:07:43PM +0100, Garth N. Wells wrote:
> > >>>>>>>> Why do we need a class AssemblyMatrix?
> > >>>>>>>> GenericMatrix/GenericVector should provide the necessary
> > >>>>>>>> virtual functions for assembly.
> > >>>>>>>>
> > >>>>>>>> Garth
> > >>>>>>> Yes, definitely. The assembler assumes it gets a GenericTensor
> > >>>>>>> that has a function
> > >>>>>>>
> > >>>>>>>     void add(real* block, uint* size, uint** entries)
> > >>>>>>>
> > >>>>>>> (entries is probably a bad choice of name here, let's change it)
> > >>>>>>> This makes it possible to implement just one assembly function and
> > >>>>>>> keep the Assembler class small.
> > >>>>>>>
> > >>>>>> Then GenericVector and GenericMatrix should derive from
> > >>>>>> GenericTensor?
> > >>>>> Yes. So far it is just two functions that are needed, init() and
> > >>>>> add().
> > >>>>>
> > >>>>>>> Anyway, the class AssemblyMatrix is just a temporary
> > >>>>>>> implementation of something that implements the GenericTensor
> > >>>>>>> interface. My suggestion is that we overload the add() function
> > >>>>>>> above in both GenericMatrix and GenericVector. The overloaded
> > >>>>>>> functions will map to the correct add() function in the
> > >>>>>>> respective classes. For GenericMatrix, we'll have
> > >>>>>>>
> > >>>>>>>     void add(real* block, uint* size, uint** entries)
> > >>>>>>>     {
> > >>>>>>>         add(block, size[0], entries[0], size[1], entries[1]);
> > >>>>>>>     }
> > >>>>>>>
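
To make the overload above concrete, here is a minimal, self-contained
sketch of how GenericTensor and GenericMatrix could fit together. It is
not the actual DOLFIN headers; the typedefs and the exact rank-2 add()
signature are assumptions for illustration.

    typedef double real;
    typedef unsigned int uint;

    // Rank-agnostic interface assumed by the assembler
    class GenericTensor
    {
    public:
      virtual ~GenericTensor() {}

      // Add a dense block of values; size[d] and entries[d] hold the
      // number of indices and the index array for dimension d
      virtual void add(real* block, uint* size, uint** entries) = 0;
    };

    // Rank-2 case: forward the generic add() to a row/column add()
    class GenericMatrix : public GenericTensor
    {
    public:
      // Backend-specific add() over rows and columns (PETSc, uBLAS, ...)
      virtual void add(real* block, uint m, uint* rows,
                       uint n, uint* cols) = 0;

      // Map the GenericTensor interface onto the matrix-specific add()
      virtual void add(real* block, uint* size, uint** entries)
      {
        add(block, size[0], entries[0], size[1], entries[1]);
      }
    };
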
> > >>>>>>> So, even if AssemblyMatrix could be of some use (it seems to be
> > >>>>>>> a fast option for assembly, the STL vector of maps beat PETSc in
> > >>>>>>> at least one benchmark I made)
> > >>>>>> Don't draw conclusions too quickly. An STL vector of maps is fast
> > >>>>>> to assemble, but is close to useless for linear algebra. The
> > >>>>>> current implementation for assembling uBLAS matrices uses
> > >>>>>> something like an STL vector of maps (fast to assemble but slow
> > >>>>>> for linear algebra) and converts it to compressed row storage
> > >>>>>> (fast for linear algebra). The conversion can take some time. An
> > >>>>>> STL vector of integer maps should be created and then used to
> > >>>>>> initialise a particular sparse storage format (before first use
> > >>>>>> only). Then we wouldn't have to convert matrix formats.
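
For readers following along, the "STL vector of maps" storage being
discussed looks roughly as below; this is only a sketch, and the class
name MapMatrix is invented for illustration (the thread's AssemblyMatrix
plays this role).

    #include <map>
    #include <vector>

    typedef double real;
    typedef unsigned int uint;

    // One std::map from column index to value per row: insertion during
    // assembly is cheap and needs no preset sparsity pattern, but the
    // pointer chasing makes matrix-vector products slow, hence the later
    // conversion to compressed row storage
    class MapMatrix
    {
    public:
      MapMatrix(uint m) : rows(m) {}

      void add(uint i, uint j, real value)
      {
        rows[i][j] += value;
      }

    private:
      std::vector< std::map<uint, real> > rows;
    };
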
> > >>>>> Do you mean this should be done in DofMap?
> > >>>>>
> > >>>> Yes. The STL vector of maps is cheap to construct, and can be used to
> > >>>> initialise the non-zero layout of a sparse matrix.
> > >>>>
> > >>> You should use vector<set<int>>, not vector<map<int>>, if the goal is
> > >>> only to build the sparsity pattern, as this gives lower memory
> > >>> overhead since the set is a map with no value.
> > >> Yes, we use vector<set<int>> for the sparsity pattern, but in the
> > >> class AssemblyMatrix (which is something I use temporarily until we
> > >> make GenericMatrix and GenericVector subclasses of GenericTensor) I
> > >> have used vector<map<int>> to represent the sparse matrix.
> > >>
> > >>> Anders: I have an implementation of this in
> > >>> pycc/MatSparse/CRSGraph.cpp and pycc/FEMAssembler/CRSAssembler.cpp,
> > >>> feel free to use any suitable parts of it.
> > >> We (Garth) have an implementation of this in the class OldDofMap in
> > >> DOLFIN that should be merged with the "new" class DofMap for UFC-based
> > >> assembly. It looks very similar to what you have in
> > >> CRSAssembler/CRSGraph:
> > >>
> > >> for (CellIterator cell(*mesh); !cell.end(); ++cell)
> > >> {
> > >>   element[0]->nodemap(row_dof, *cell, *mesh);
> > >>   for (uint i = 0; i < element[0]->spacedim(); ++i)
> > >>   {
> > >>     element[1]->nodemap(column_dof, *cell, *mesh);
> > >>     for (uint j = 0; j < element[1]->spacedim(); ++j)
> > >>       matrix_sparsity_pattern[ row_dof[i] ].insert( column_dof[j] );
> > >>   }
> > >> }
> > >>
> > >> Here, matrix_sparsity_pattern is a vector<set<int>> and the nodemap
> > >> function is the old (non-UFC) name of the tabulate_dofs() function.
> > >>
> > >> Is this the same as your implementation? (Other than splitting the
> > >> insertion into a separate call insert_block()?)
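
As a side note, the step from such a vector<set<int>> to the data needed
to pre-allocate a compressed row storage (CRS) matrix is small. A sketch
follows; the function name and the output arrays are illustrative only,
not an existing DOLFIN or PyCC interface.

    #include <set>
    #include <vector>

    typedef unsigned int uint;

    // Turn the per-row sets of column indices into row offsets and
    // flattened column indices, so a CRS matrix can be allocated once,
    // before the first assembly
    void crs_layout(const std::vector< std::set<int> >& pattern,
                    std::vector<int>& row_offsets,
                    std::vector<int>& columns)
    {
      row_offsets.resize(pattern.size() + 1);
      row_offsets[0] = 0;
      columns.clear();
      for (uint i = 0; i < pattern.size(); ++i)
      {
        // std::set keeps indices sorted, so the columns come out ordered
        row_offsets[i + 1] = row_offsets[i] + pattern[i].size();
        columns.insert(columns.end(), pattern[i].begin(), pattern[i].end());
      }
    }
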
> > > 
> > > Looks pretty much the same.
> > > 
> > > Another thing to consider is a GenericSparsityPattern class with
> > > insert(...) similar to GenericMatrix::add(...). For example, Epetra has an
> > > insertion function like this in Epetra_CRSGraph, and other matrix
> > > libraries may have one as well. Then maybe your
> > > GenericMatrix::init(m,n) can be replaced/complemented by a
> > > GenericMatrix::init(GenericSparsityPattern &).
> > >
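
A rough sketch of what such a class could look like is given below. Only
the idea of an insert() mirroring the assembler-facing add() comes from
the discussion; the member names are assumptions.

    typedef unsigned int uint;

    // Collects the non-zero structure during a cheap symbolic pass,
    // mirroring GenericTensor::add() but without any values
    class GenericSparsityPattern
    {
    public:
      virtual ~GenericSparsityPattern() {}

      // Record that the entries given for each dimension will be non-zero
      virtual void insert(uint* size, uint** entries) = 0;

      // Number of non-zeros in a row, as needed for pre-allocation
      virtual uint num_nonzeros(uint row) const = 0;
    };

    // On the matrix side, GenericMatrix::init(m, n) could then be
    // complemented (or replaced) by
    //
    //     virtual void init(const GenericSparsityPattern& pattern) = 0;
    //
    // so that a backend such as PETSc, uBLAS or Epetra can pre-allocate
    // its storage from the pattern before the first add().
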
> > 
> > I think that we'll end up with something like this.
> > 
> > Garth
> 
> Yes, this sounds like a good idea.
> 
> Garth, do we want to split the DofMap functionality into DofMap
> and SparsityPattern? A DofMap could compute or return a
> SparsityPattern. It seems more natural to initialize a tensor (matrix)
> with a SparsityPattern than by a DofMap.
>

Yes. DofMap could return a SparsityPattern, which would be used to initialise a
sparse matrix.
 
Garth

> Martin, why the name *Generic*SparsityPattern? Do you have subclasses
> of GenericSparsityPattern?
> 
> /Anders
> 
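
Putting the pieces of this thread together, the assembly path being
converged on would look roughly as follows. All class and member names
below are the ones proposed above or assumptions for illustration, not an
existing DOLFIN API; the bodies are stubs.

    // Pattern of non-zeros, built via insert() during a symbolic pass
    class SparsityPattern {};

    class DofMap
    {
    public:
      // Loop over cells, tabulate_dofs(), and insert into the pattern
      void sparsity_pattern(SparsityPattern& pattern) const {}
    };

    class Matrix
    {
    public:
      // Pre-allocate the backend storage from the pattern (once)
      void init(const SparsityPattern& pattern) {}
    };

    void assemble_layout(Matrix& A, const DofMap& dof_map)
    {
      SparsityPattern pattern;
      dof_map.sparsity_pattern(pattern); // DofMap computes the pattern
      A.init(pattern);                   // no later format conversion needed
      // ... element tensors are then added via the GenericTensor add()
    }
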

