
dolfin team mailing list archive

Re: [Bug 706909] [NEW] [python] matrix multiplication fails in parallel

 


On 25/01/11 14:56, Joachim Berdal Haga wrote:
> 
> 
> On 24 January 2011 18:33, Garth N. Wells <gnw20@xxxxxxxxx> wrote:
> 
>     On 24/01/11 17:17, Johan Hake wrote:
>     > Isn't there a way to initialize a parallel vector with an arbitrary
>     > distribution pattern?
>     >
>     > There is a way to generate a local range from a given entity size using
>     > MPI.local_range. One should be able to generate a distributed Vector
>     > with this information.
>     >
> 
>     This isn't guaranteed to work since using MPI.local_range may produce a
>     vector with a layout that is inconsistent with the matrix layout.
> 
>     A Vector could be created using the local range data from the Matrix.
> 
> 
> It seems that this is only implemented for rows, not for columns. 

Do you mean GenericMatrix::local_range? It takes a dim argument,

  std::pair<uint, uint> GenericMatrix::local_range(uint dim);

Garth
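
A minimal sketch of how the dim argument might be used from the Python
interface, assuming GenericMatrix.local_range(dim) is exposed in the
bindings (per the observation above, the dim = 1 column case may not be
implemented by every backend):

  from dolfin import *

  mesh = UnitSquare(8, 8)
  V = FunctionSpace(mesh, "CG", 1)
  u, v = TrialFunction(V), TestFunction(V)
  A = assemble(dot(grad(u), grad(v))*dx)

  row_lo, row_hi = A.local_range(0)  # local row range on this process
  col_lo, col_hi = A.local_range(1)  # local column range (dim = 1)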


> Any
> idea how best to create a distributed vector suitable for the result of
> a transpmult? Or is transpmult unsupported in parallel (it's probably
> not very efficient)?
> 
> (Taking PETSc for example, it looks like using either
> sparsity_pattern.local_range(1), MPI::local_range(N) or
> PETSc::MatGetLocalSize.n for the local size might work. Or creating the
> vector with specified global size and PETSC_DECIDE for the local size,
> although this is a bit of a wart compared to the current *Vector
> interface. But which to choose... and which are consistent with the
> matrix layout?)
>
> -j.
> 
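
For reference, a hedged sketch at the petsc4py level (not the DOLFIN
wrapper itself) of two ways to obtain a result vector whose layout is
consistent with the matrix's column distribution; the name A is a
placeholder for an already assembled parallel PETSc Mat:

  from petsc4py import PETSc

  # A is assumed to be an assembled parallel PETSc Mat (placeholder name).
  m_local, n_local = A.getLocalSize()   # local row/column sizes of A

  # Option 1: ask PETSc for conforming vectors (MatCreateVecs underneath).
  x, b = A.createVecs()    # x follows the column layout, b the row layout
  b.set(1.0)
  A.multTranspose(b, x)    # x = A^T b; layouts consistent by construction

  # Option 2: size the vector explicitly from the local column count.
  y = PETSc.Vec().createMPI((n_local, PETSc.DETERMINE), comm=A.getComm())

In the thread's terms, Option 2 corresponds to PETSc::MatGetLocalSize.n,
while Option 1 sidesteps the question by asking the matrix itself for
vectors that conform to its layout.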


