dolfin team mailing list archive

Re: array(), getRow() etc

On Fri, 2006-06-30 at 12:29 +0200, Anders Logg wrote:
> On Fri, Jun 30, 2006 at 08:48:59AM +0200, Anders Logg wrote:
> > On Fri, Jun 30, 2006 at 12:32:25AM +0200, Garth N. Wells wrote:
> > > On Thu, 2006-06-29 at 20:20 +0200, Anders Logg wrote: 
> > > > I've started trying to remove the PETSc ifdefs from the ODE solvers,
> > > > but ran into some problems. The ODE solver uses the calls
> > > > 
> > > >     Vector x;
> > > >     real* xx = x.array();  // borrow a pointer to the underlying data
> > > >     // operate on xx
> > > >     x.restore(xx);         // hand the pointer back when done
> > > > 
> > > > This is supported only by PETSc vectors.
> > > > 
> > > > There are a couple of different solutions:
> > > > 
> > > > 1. Don't use Vector::array().
> > > > 
> > > > 2. Add Vector::array() to uBlas vectors.
> > > > 
> > > > 3. Do 2 and add it to GenericVector as well.
> > > > 
> > > > Any thoughts? I have a feeling 1 is the best solution, but it's also
> > > > the one that requires the most work (for me to rewrite the ODE
> > > > solvers).
> > > > 
> > > 
> > > I think that option 1 is best. I don't like options 2 & 3 as they take
> > > away the advantages of using uBlas. What exactly do you do with
> > > Vector::array() in the ODE solvers?
> > 
> > The solution is (for some of the solvers) stored as a Vector u, but when
> > calling the user-supplied right-hand side f(u, t), the argument is
> > given as a plain C array, since the user will often want to do a lot
> > of indexing of the vector in the definition of the right-hand side,
> > and I suspect that indexing is slow for PETSc vectors.
> > 
> > > How fast are the PETSc functions VecGetValues
> > > and VecSetValue, and how much time is involved in calling
> > > VecAssemblyBegin() and VecAssemblyEnd() (which are currently called
> > > after every set)? I use Vector::array in the Cahn-Hilliard solver to do
> > > things like take the log of each entry. I'm not sure how to do this
> > > efficiently without array(), in a way that works for both PETSc and
> > > uBlas types.
> > 
> > I'll make a small test to see how slow it really is.
> 
> Here are some results. The timings are for setting and then accessing
> all elements in a vector of size 10000:
> 
> Setting values in PETSc vector:   0.0708
> Accessing values in PETSc vector: 0.00031
> Setting values in uBlas vector:   1.42e-05
> Accessing values in uBlas vector: 4.6e-06
> Setting values in STL vector:     1.41e-05
> Accessing values in STL vector:   4.7e-06
> Setting values in C array:        1.4e-05
> Accessing values in C array:      1.41e-05
> 
> Some conclusions:
> 
>  - Setting and accessing values element-wise in a PETSc vector
>    is terribly slow
>
>  - uBlas, STL vector and C array are equally fast for setting;
>    I don't know why access for the C array is slower
> 

When I was first testing uBlas I saw that C arrays were often a bit
slower. I didn't understand why either.
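
For reference, a minimal timing loop along these lines should reproduce
the comparison (just a sketch, not the actual test code; the sums are
printed so the compiler cannot optimise the loops away):

    #include <cstddef>
    #include <ctime>
    #include <iostream>
    #include <boost/numeric/ublas/vector.hpp>

    int main()
    {
      const std::size_t N = 10000;
      boost::numeric::ublas::vector<double> u(N);
      double* c = new double[N];

      // Set all elements, uBlas vector then C array
      std::clock_t t0 = std::clock();
      for (std::size_t i = 0; i < N; i++)
        u(i) = 1.0;
      std::cout << "uBlas set:   "
                << double(std::clock() - t0) / CLOCKS_PER_SEC << std::endl;

      t0 = std::clock();
      for (std::size_t i = 0; i < N; i++)
        c[i] = 1.0;
      std::cout << "C array set: "
                << double(std::clock() - t0) / CLOCKS_PER_SEC << std::endl;

      // Access all elements, accumulating into a sum
      double s = 0.0;
      t0 = std::clock();
      for (std::size_t i = 0; i < N; i++)
        s += u(i);
      std::cout << "uBlas get:   "
                << double(std::clock() - t0) / CLOCKS_PER_SEC << std::endl;

      t0 = std::clock();
      for (std::size_t i = 0; i < N; i++)
        s += c[i];
      std::cout << "C array get: "
                << double(std::clock() - t0) / CLOCKS_PER_SEC << std::endl;

      std::cout << s << std::endl;
      delete [] c;
      return 0;
    }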


> It seems natural to modify the interface for the ODE solver from
> 
>     void f(const real u[], real t, real y[]);
> 
> to
> 
>     void f(const DenseVector& u, real t, DenseVector& y);
> 
> Since the ODE solvers are currently not parallel anyway, I might as
> well use DenseVector in combination with the uBlas solvers. This
> should speed up the ODE solvers for small systems as well.
> 
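
With that interface, a user-supplied right-hand side would look
something like the sketch below (assuming DenseVector is in scope and
supports operator() indexing, and taking a harmonic oscillator as the
example):

    // Harmonic oscillator: u0' = u1, u1' = -u0
    void f(const DenseVector& u, real t, DenseVector& y)
    {
      y(0) = u(1);
      y(1) = -u(0);
    }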

For the future, it might be possible to create a uBlas vector that
addresses the array returned by PETScVector::array(). It's important
that we find a solution for manipulating array elements efficiently and
cleanly.
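
Boost's shallow array adaptor might be one way to do this without
copying. A rough sketch (note that in some Boost versions
shallow_array_adaptor is only compiled in when
BOOST_UBLAS_SHALLOW_ARRAY_ADAPTOR is defined):

    #define BOOST_UBLAS_SHALLOW_ARRAY_ADAPTOR
    #include <cstddef>
    #include <boost/numeric/ublas/vector.hpp>

    namespace ublas = boost::numeric::ublas;

    // View an existing C array (e.g. the one returned by
    // PETScVector::array()) as a uBlas vector, without copying
    void scale_entries(double* xx, std::size_t n)
    {
      ublas::shallow_array_adaptor<double> shared(n, xx);
      ublas::vector<double, ublas::shallow_array_adaptor<double> >
        x(n, shared);

      x *= 2.0;  // writes straight through to the original array
    }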

> Garth, we need to extend the new uBlas solvers you added so that they
> work with a VirtualMatrix, that is, something which provides the
> function
> 
>     void mult(const DenseVector& x, DenseVector& y) const;
> 
> Would that be much work to add?
> 

Should be OK.
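
Just to check that we mean the same thing, something like this sketch
(with DenseVector assumed in scope, and assignment assumed to copy
values):

    // A "virtual" matrix: only its action y = A*x is defined,
    // which is all an iterative solver like GMRES needs
    class VirtualMatrix
    {
    public:

      virtual ~VirtualMatrix() {}

      // Compute y = A*x without ever assembling A
      virtual void mult(const DenseVector& x, DenseVector& y) const = 0;
    };

    // Trivial example: the identity operator
    class Identity : public VirtualMatrix
    {
    public:

      void mult(const DenseVector& x, DenseVector& y) const
      { y = x; }
    };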

Garth

> The ODE solvers never create the matrix itself, only its product with
> a vector, which should be enough for GMRES.
> 
> /Anders