
dolfin team mailing list archive

Re: [HG dolfin] merge

 

Martin Sandve Alnæs wrote:
On 12/7/06, Garth N. Wells <g.n.wells@xxxxxxxxxx> wrote:


Martin Sandve Alnæs wrote:

Yes, but that's a separate issue.

We're probably just talking past each other here.

After looking at your code again, I think the misunderstanding is in this:
(I don't know PETSc, so I might be wrong here)

You call VecCreateMPI(..., n_local, n_global, &vec), which probably
distributes the global vector entries in contiguous chunks to
processes. If you didn't renumber, I now see your "problem".
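
[A minimal sketch of the contiguous-chunk behaviour described above, assuming a recent PETSc API; the local size is illustrative and PETSC_DETERMINE simply lets PETSc sum the local sizes, so nothing here is taken from the actual DOLFIN code:]

  #include <petscvec.h>

  int main(int argc, char** argv)
  {
    PetscInitialize(&argc, &argv, NULL, NULL);

    PetscInt n_local = 100;   // entries owned by this process (illustrative)
    Vec x;
    VecCreateMPI(PETSC_COMM_WORLD, n_local, PETSC_DETERMINE, &x);

    // Ownership is a contiguous block of global indices: [low, high)
    PetscInt low, high;
    VecGetOwnershipRange(x, &low, &high);

    VecDestroy(&x);
    PetscFinalize();
    return 0;
  }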

I've used an Epetra_Map(..., array_of_local_entries, n_local) which
distributes the global vector entries with a completely general
local-to-global mapping.
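
[A minimal sketch of this kind of Epetra_Map construction, assuming Epetra from Trilinos; the strided list of owned global indices is hypothetical, -1 lets Epetra compute the global size, and 0 is the index base:]

  #include <mpi.h>
  #include <Epetra_MpiComm.h>
  #include <Epetra_Map.h>
  #include <Epetra_Vector.h>

  int main(int argc, char** argv)
  {
    MPI_Init(&argc, &argv);
    Epetra_MpiComm comm(MPI_COMM_WORLD);

    // Hypothetical, non-contiguous global dof indices owned by this
    // process (strided so they are disjoint across processes)
    int p    = comm.NumProc();
    int rank = comm.MyPID();
    int my_global_indices[] = {rank, rank + p, rank + 2*p, rank + 3*p};
    int n_local = 4;

    // -1: let Epetra compute the global size; 0: index base
    Epetra_Map map(-1, n_local, my_global_indices, 0, comm);

    // The vector's local entries are stored contiguously, but they are
    // addressed through the map's local-to-global numbering.
    Epetra_Vector x(map);

    MPI_Finalize();
    return 0;
  }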

I see now how Epetra works: Epetra_Map creates a mapping between global and local indices. Do you create this map yourself, or does Epetra build it for you on the fly as you insert terms? I'm building this map manually.

Once you have things running with Epetra and we have things running with PETSc, it would be nice to compare the performance of the two. Epetra looks like it has some nice functionality.

Garth

But the local Epetra_Vector stores its local entries in a contiguous array, so there is an implicit renumbering here. It is not a renumbering of the global dofs, though; it is a separate renumbering, or mapping, between local and global vector indices. With this approach, there is no connection between the numbering of the global dofs and the amount of communication.

martin
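
[To illustrate the point about the implicit local renumbering, a minimal sketch, again assuming Epetra from Trilinos and the same hypothetical strided indices as above: the same entry can be addressed either through its global dof index or through the contiguous local index the map assigns to it, so the global dof numbering is decoupled from the local storage layout:]

  #include <mpi.h>
  #include <Epetra_MpiComm.h>
  #include <Epetra_Map.h>
  #include <Epetra_Vector.h>

  int main(int argc, char** argv)
  {
    MPI_Init(&argc, &argv);
    Epetra_MpiComm comm(MPI_COMM_WORLD);

    int p    = comm.NumProc();
    int rank = comm.MyPID();
    int my_global_indices[] = {rank, rank + p, rank + 2*p, rank + 3*p};
    Epetra_Map map(-1, 4, my_global_indices, 0, comm);
    Epetra_Vector x(map);

    int gid = my_global_indices[2];           // a global dof owned here
    double value = 3.14;
    x.ReplaceGlobalValues(1, &value, &gid);   // write via the global index

    int lid = map.LID(gid);    // contiguous local index behind the map
    double same = x[lid];      // read the same entry via its local index
    (void)same;

    MPI_Finalize();
    return 0;
  }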




