
dolfin team mailing list archive

Re: [PETSC #13144] Re: MatMult

I went with the simple option of using MatGetRow(); see below.
(We don't need the extra overhead of the parallel version right now.)

Doing all scalar products by hand, one row at a time, instead of
calling MatMult() on all rows at once is about a factor of 3.5 slower
(and the ratio seems to improve as I refine the mesh).

By the way, why do you use int rather than unsigned int for size
types, column arrays, etc.? I have to do casts all over the place.
(We're also using the STL, which uses unsigned integer types, and they
propagate throughout the code.)

/Anders

real Matrix::mult(const Vector& x, uint row) const
{
  // FIXME: Temporary fix (assumes uniprocessor case)

  int ncols = 0;
  const int* cols = 0;
  const double* Avals = 0;
  double* xvals = 0;

  // PETSc takes signed int indices, hence the cast from uint
  MatGetRow(A, static_cast<int>(row), &ncols, &cols, &Avals);
  VecGetArray(x.x, &xvals);

  // Scalar product of the row with x: only nonzero columns contribute
  real sum = 0.0;
  for (int i = 0; i < ncols; i++)
    sum += Avals[i] * xvals[cols[i]];

  // Hand the row and the vector array back to PETSc
  MatRestoreRow(A, static_cast<int>(row), &ncols, &cols, &Avals);
  VecRestoreArray(x.x, &xvals);

  return sum;
}

On Tue, Jun 07, 2005 at 08:37:37AM -0500, Barry Smith wrote:
> 
>    Not again, Barry
> 
> On Tue, 7 Jun 2005, Barry Smith wrote:
> 
> > 
> >   Missed one thing,
> > 
> > >   I disagree. For instance, we have the MatGetLocalMat() function. Further, I
> > > think that this can be generalized. Most parallel formats will have something
> >   ^^^^^^
> > 
> >    I prefer never to generalize until I KNOW it can be generalized.
> 
>    and that the generalization actually buys you something.
> 
>   Barry
> 
> > 
> >    Barry
> > 
> > On Tue, 7 Jun 2005, Matthew Knepley wrote:
> > 
> > > Barry Smith <petsc-maint@xxxxxxxxxxx> writes:
> > > 
> > > >   Matt,
> > > >
> > > >   If you are coding it specifically for MPIAIJ (as you have done) why introduce
> > > > the really ugly MatGetCommunicationStructs() (which is very specific to certain
> > > > matrix formats anyways) just access the scatter info directly and bag that 
> > > > function. 
> > > 
> > >   I disagree. For instance, we have the MatGetLocalMat() function. Further, I
> > > think that this can be generalized. Most parallel formats will have something
> > > like this.
> > > 
> > >         Matt
> > > 
> > > >    Barry
> > > >
> > > >
> > > > On Tue, 7 Jun 2005, Matthew Knepley wrote:
> > > >
> > > >> Anders Logg <logg@xxxxxxxxx> writes:
> > > >> 
> > > >>        Alright, I have made the updates to PETSc. I did not add MatMultRow()
> > > >> since there are so many efficiency implications, that it is best left to the
> > > >> user. Here is my prototype code:
> > > >> 
> > > >> #undef __FUNCT__  
> > > >> #define __FUNCT__ "MatMultRowMPIAIJ"
> > > >> PetscErrorCode MatMultRowMPIAIJ(Mat A,Vec xx,int row,double *yy)
> > > >> {
> > > >>   Vec            lvec;
> > > >> #if defined (PETSC_USE_CTABLE)
> > > >>   PetscTable     colmap;
> > > >> #else
> > > >>   PetscInt      *colmap;
> > > >> #endif
> > > >>   VecScatter     multScatter;
> > > >>   PetscInt       ncols, lcol, start, end;
> > > >>   PetscInt      *cols;
> > > >>   PetscScalar   *vals, *array, *larray;
> > > >>   PetscInt       nt, i;
> > > >>   PetscErrorCode ierr;
> > > >> 
> > > >>   PetscFunctionBegin;
> > > >>   ierr = VecGetLocalSize(xx,&nt);CHKERRQ(ierr);
> > > >>   if (nt != A->n) {
> > > >>     SETERRQ2(PETSC_ERR_ARG_SIZ,"Incompatible partition of A (%D) and xx (%D)",A->n,nt);
> > > >>   }
> > > >>   ierr = MatGetCommunicationStructs(A, &lvec, &colmap, &multScatter);CHKERRQ(ierr);
> > > >>   ierr = VecScatterBegin(xx,lvec,INSERT_VALUES,SCATTER_FORWARD,multScatter);CHKERRQ(ierr);
> > > >>   ierr = VecScatterEnd(xx,lvec,INSERT_VALUES,SCATTER_FORWARD,multScatter);CHKERRQ(ierr);
> > > >>   ierr = VecGetArray(xx, &array);CHKERRQ(ierr);
> > > >>   ierr = VecGetArray(lvec, &larray);CHKERRQ(ierr);
> > > >>   ierr = MatGetOwnershipRange(A, &start, &end);CHKERRQ(ierr);
> > > >>   ierr = MatGetRow(A, row, &ncols, &cols, &vals);CHKERRQ(ierr);
> > > >>   *yy = 0.0;
> > > >>   for(i = 0; i < ncols; i++) {
> > > >>     if ((cols[i] >= start) && (cols[i] < end)) {
> > > >>       *yy += vals[i]*array[cols[i]-start];
> > > >>     } else {
> > > >> #if defined (PETSC_USE_CTABLE)
> > > >>       ierr = PetscTableFind(colmap,cols[i],&lcol);CHKERRQ(ierr);
> > > >> #else
> > > >>       lcol = colmap[cols[i]];
> > > >> #endif
> > > >>       *yy += vals[i]*larray[lcol];
> > > >>     }
> > > >>   }
> > > >>   ierr = MatRestoreRow(A, row, &ncols, &cols, &vals);CHKERRQ(ierr);
> > > >>   ierr = VecRestoreArray(xx, &array);CHKERRQ(ierr);
> > > >>   ierr = VecRestoreArray(lvec, &larray);CHKERRQ(ierr);
> > > >>   PetscFunctionReturn(0);
> > > >> }
> > > >> 
> > > >> You just have to pull and rebuild in src/mat.
> > > >> 
> > > >>         Thanks,
> > > >> 
> > > >>                 Matt
> > > >> 
> > > >> > Thanks. Please take a look if you have time since it is something I
> > > >> > really need. (Doing the whole multiply each time I need the value for
> > > >> > one component would cost too much, and I need the different components
> > > >> > at different times.)
> > > >> >
> > > >> > In the meantime, I will make a copy of the values and do the
> > > >> > multiplication by hand.
> > > >> >
> > > >> > /Anders
> > > >> >
> > > >> > On Mon, Jun 06, 2005 at 10:06:23AM -0500, Matthew Knepley wrote:
> > > >> >> Anders Logg <logg@xxxxxxxxx> writes:
> > > >> >> 
> > > >> >> > Matt, how do I compute the scalar product of a row of a matrix and a
> > > >> >> > given vector in PETSc?
> > > >> >> >
> > > >> >> > I need something like
> > > >> >> >
> > > >> >> >     MatMult(Mat A, Vec x, int i, double* yi)
> > > >> >> >
> > > >> >> > where yi = (A*x)_i for my multi-adaptive solver (need to evaluate each
> > > >> >> > component of the right-hand side f individually).
> > > >> >> 
> > > >> >>   I don't think we have anything like that. Of course, you can just pick that
> > > >> >> component out of the result vector, but I am guessing you do not want to do
> > > >> >> the whole multiply. I will look at it, but my guess is that it would involve
> > > >> >> a good bit of rewriting.
> > > >> >> 
> > > >> >>   Thanks,
> > > >> >> 
> > > >> >>         Matt
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> 
> > > >> 
> > > >
> > > >
> > > >
> > > 
> > > 
> > 
> 

-- 
Anders Logg
Research Assistant Professor
Toyota Technological Institute at Chicago
http://www.tti-c.org/logg/


