
ffc team mailing list archive

Re: [DOLFIN-dev] dof locations

 

Martin Sandve Alnæs wrote:
On 12/4/06, Garth N. Wells <g.n.wells@xxxxxxxxxx> wrote:


Martin Sandve Alnæs wrote:
> On 12/4/06, Garth N. Wells <g.n.wells@xxxxxxxxxx> wrote:
>>
>>
>> Martin Sandve Alnæs wrote:
>> > On 12/4/06, Garth N. Wells <g.n.wells@xxxxxxxxxx> wrote:
>> >> Anders Logg wrote:
>> >>> On Mon, Dec 04, 2006 at 02:14:51PM +0100, Garth N. Wells wrote:
>> >>>> Could we add something to FFC to describe where the various
>> >>>> degrees of freedom live (on vertices, edges, internal)?
>> >>>>
>> >>>> Garth
>> >>> Yes we could, but I'd rather not. Why do we need it? I'd prefer if
>> >>> DOLFIN did not know anything about dofs, other than how to reorder
>> >>> them.
>> >>>
>> >> Not sure that it's this simple if you want to assemble some terms
>> >> block-wise. Also, for parallel assembly we might need to know where
>> >> dofs lie. I'm still thinking about this, so I can't be concrete about
>> >> what's needed just yet.
>> >
>> > I've just done some work on parallel assembly for PyCC, using the UFC
>> > interface. This is basically how I make the node partition:
>> >
>> >   // fill index_set with all global dofs used by elements in the
>> >   // local grid partition
>> >   std::set<int> index_set;
>> >
>> >   // iterate over all mesh cells in local mesh partition
>> >   pycc::UfcCellIterator *iter = mesh_iter_fac.create_cell_iterator();
>> >   for(; !iter->end(); iter->next())
>> >   {
>> >     const ufc::cell & ufc_cell = iter->get_cell();
>> >
>> >     // get loc2glob from nm
>> >     ufc_node_map.tabulate_nodes(rows, ufc_mesh, ufc_cell);
>> >
>> >     // insert loc2glob entries into index_set
>> >     for(int i = 0; i < row_size; i++)
>> >     {
>> >       index_set.insert(rows[i]);
>> >     }
>> >   }
>> >   delete iter;
>> >
>> >
>>
>> I see that this generates the set of degrees of freedom for a given
>> process under a given cell partitioning, but where do you renumber? Do
>> you renumber somewhere so that nodes 0 to m-1 are on process 1, m to
>> 2m-1 on process 2, etc.?
>>
>> Garth
>
> I don't, really. I use Epetra in this case, and Epetra_Map handles
> this renumbering. An Epetra_Vector is constructed with a particular
> Epetra_Map, and the vector class has accessor functions for global
> value i or local value i. Epetra_FEVector and Epetra_FECrsMatrix have
> a function SumIntoGlobalValues which I use for the assembly. So in
> this case the local assembly routine actually uses global indices for
> mesh entities.
>
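
A minimal sketch of the assembly pattern Martin describes, assuming Epetra's
FE classes and MPI; tabulate_dofs and tabulate_element_tensor are hypothetical
stand-ins for the UFC node map and local element assembly (this is
illustrative, not the actual PyCC code):

  #include <vector>
  #include <mpi.h>
  #include <Epetra_MpiComm.h>
  #include <Epetra_Map.h>
  #include <Epetra_FECrsMatrix.h>

  // Hypothetical helpers standing in for the UFC node map / element tensor
  void tabulate_dofs(int* dofs, int cell);
  void tabulate_element_tensor(double* Ae, int cell);

  void assemble(const std::vector<int>& my_global_dofs, // e.g. from index_set
                int num_local_cells, int dofs_per_cell)
  {
    Epetra_MpiComm comm(MPI_COMM_WORLD);

    // Row map: this process owns the dofs collected from its mesh partition
    Epetra_Map row_map(-1, static_cast<int>(my_global_dofs.size()),
                       &my_global_dofs[0], 0, comm);

    Epetra_FECrsMatrix A(Copy, row_map, dofs_per_cell);

    std::vector<int> dofs(dofs_per_cell);
    std::vector<double> Ae(dofs_per_cell * dofs_per_cell);

    for (int c = 0; c < num_local_cells; ++c)
    {
      tabulate_dofs(&dofs[0], c);          // local-to-global dof indices
      tabulate_element_tensor(&Ae[0], c);  // dense element matrix (row-major)

      // Add the element matrix using global indices; contributions to rows
      // owned by another process are cached and exchanged in GlobalAssemble()
      A.SumIntoGlobalValues(dofs_per_cell, &dofs[0], &Ae[0],
                            Epetra_FECrsMatrix::ROW_MAJOR);
    }

    // Communicate off-process contributions and call FillComplete()
    A.GlobalAssemble();
  }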

Is this efficient? On a given process, it's likely that you're
assembling entries that "belong" to a sub-matrix residing on another
process. Epetra probably takes care of the communication (PETSc does),
but you'll be communicating a lot of values back and forth, which will
hurt performance severely. With an appropriate renumbering, only terms
associated with dofs on the partition boundaries need to be communicated
between processes.

Garth

I don't see where values other than those on the boundaries would be
communicated. Only the nodes that are shared between cells in
different mesh partitions (i.e. nodes on the boundary) will end up in
different node partitions and thus lead to communication. Or am I
missing something?

The performance is currently horrible because of a poor mesh
partition, so I can't test this right now, and I don't really know yet.


The partitioning is related to this. Entries in the matrix associated with a particular mesh partition are computed on the process to which that partition belongs - no problem there. But you need to make sure (through the dof numbering) that (nearly) all of the terms computed by a process are also stored by that process.
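
A sketch (under assumed names, not DOLFIN or PyCC code) of the kind of
renumbering this implies: each process gets a contiguous block of new dof
indices, so the rows it assembles are (mostly) rows it stores. Deciding the
ownership of shared dofs and propagating the new numbers to ghost dofs is
left out.

  #include <map>
  #include <vector>
  #include <mpi.h>

  // owned_dofs: old global indices of the dofs this process has agreed to own
  std::map<int, int> renumber_contiguously(const std::vector<int>& owned_dofs,
                                           MPI_Comm comm)
  {
    int n_owned = static_cast<int>(owned_dofs.size());

    // Sum of n_owned over this process and all lower-rank processes
    int end = 0;
    MPI_Scan(&n_owned, &end, 1, MPI_INT, MPI_SUM, comm);
    int offset = end - n_owned;  // dofs owned by lower-rank processes

    // Old global dof index -> new, process-contiguous index
    std::map<int, int> old_to_new;
    for (int i = 0; i < n_owned; ++i)
      old_to_new[owned_dofs[i]] = offset + i;

    // A full implementation would now exchange the new indices of dofs shared
    // with neighbouring processes so that ghost dofs are numbered consistently
    return old_to_new;
  }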

The FFC dof mapping for vector-valued equations is particularly unsuited to this, as two unknowns corresponding to a single node (say the x and y components) are numbered far apart (the distance is half the matrix size).
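
As an illustration of that numbering (not FFC's generated code), compare a
component-blocked map, where the x- and y-dofs of one node end up half the
matrix apart, with a node-interleaved map that keeps them adjacent:

  // Component-blocked numbering (the layout described above):
  // the x- and y-dofs of a node are num_nodes apart.
  inline int dof_blocked(int node, int component, int num_nodes)
  {
    return component * num_nodes + node;
  }

  // Node-interleaved numbering: all components of a node are adjacent, so a
  // contiguous per-process block keeps them on the same process.
  inline int dof_interleaved(int node, int component, int num_components)
  {
    return node * num_components + component;
  }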

Garth

martin




