
dolfin team mailing list archive

Re: [HG dolfin] merge

 

On 12/8/06, Garth N. Wells <g.n.wells@xxxxxxxxxx> wrote:
> Martin Sandve Alnæs wrote:
>> Yes, but that's a separate issue.
>>
>> We're probably just talking past each other here.
>>
>> After looking at your code again, I think the misunderstanding is in this
>> (I don't know PETSc, so I might be wrong here):
>>
>> You call VecCreateMPI(..., n_local, n_global, &vec), which probably
>> distributes the global vector entries in contiguous chunks to
>> processes. If you didn't renumber, I now see your "problem".
>>
>> I've used an Epetra_Map(..., array_of_local_entries, n_local), which
>> distributes the global vector entries with a completely general
>> local-to-global mapping.
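
(For concreteness, a minimal sketch of the two calls being contrasted above,
with the constructor arguments in the order the Epetra headers use; n_local,
n_global and my_global_indices are placeholders, not code from either tree.)

  #include <petscvec.h>
  #include <Epetra_MpiComm.h>
  #include <Epetra_Map.h>
  #include <Epetra_Vector.h>

  // Assumes MPI and PETSc are already initialised.
  void create_distributed_vectors(int n_local, int n_global,
                                  const int* my_global_indices)
  {
    // PETSc: each process gets a contiguous chunk of n_local entries,
    // laid out in process-rank order.
    Vec x;
    VecCreateMPI(PETSC_COMM_WORLD, n_local, n_global, &x);

    // Epetra: each process lists exactly which global indices it holds,
    // so the local-to-global mapping can be completely general.
    Epetra_MpiComm comm(MPI_COMM_WORLD);
    Epetra_Map map(n_global, n_local, my_global_indices, 0, comm);
    Epetra_Vector y(map);
  }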

> I see now how Epetra works. Epetra_Map creates a mapping between global
> and local indices. Do you create this, or does Epetra create it for you
> on the fly as you insert terms?

I'm building this map manually.

The algorithm I posted earlier builds "set<int> index_set" from a cell
partition and a ufc::dof_map. I insert the contents of this set (the
set of all global dofs touched by the cell partition) into an integer
array which I pass to the Epetra_Map constructor. Then Epetra_Map
figures out the rest.
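
Roughly, in code (cell_global_dofs() below is a made-up stand-in for the
actual ufc::dof_map tabulation call, and n_global_dofs a placeholder for the
total number of dofs; only the set -> array -> Epetra_Map part is meant
literally):

  #include <set>
  #include <vector>
  #include <Epetra_MpiComm.h>
  #include <Epetra_Map.h>

  // Hypothetical helper: the global dofs touched by one cell.
  std::vector<int> cell_global_dofs(int cell);

  Epetra_Map build_overlapping_map(const std::vector<int>& partition,
                                   int n_global_dofs)
  {
    // Collect every global dof touched by this process's cell partition.
    std::set<int> index_set;
    for (std::size_t i = 0; i < partition.size(); ++i)
    {
      const std::vector<int> dofs = cell_global_dofs(partition[i]);
      index_set.insert(dofs.begin(), dofs.end());
    }

    // Copy the set into a plain integer array and let Epetra_Map figure
    // out the rest (ownership, import/export patterns, ...).
    // Pass the true global dof count, since dofs shared between processes
    // appear in more than one local index set.
    std::vector<int> indices(index_set.begin(), index_set.end());
    Epetra_MpiComm comm(MPI_COMM_WORLD);
    return Epetra_Map(n_global_dofs, static_cast<int>(indices.size()),
                      &indices[0], 0, comm);
  }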

> When you have things running with Epetra and we have things up and
> running with PETSc, it would be nice to compare the performance of the
> two. Looks like Epetra has some nice functionality.

We should do that.

martin

