dolfin team mailing list archive

Re: Memory access error in DenseMatrix::invert()

 

On Sun, 2006-05-07 at 23:39 +0200, Anders Logg wrote:
> On Sun, May 07, 2006 at 07:17:45PM +0200, Garth N. Wells wrote:
> > On Sun, 2006-05-07 at 17:18 +0200, Anders Logg wrote:
> > > On Sun, May 07, 2006 at 03:59:09PM +0200, Garth N. Wells wrote:
> > > > On Sun, 2006-05-07 at 14:10 +0200, Anders Logg wrote:
> > > > > When adding the option to compile without PETSc, I took the
> > > > > opportunity change the precomputation of quadrature weights for the
> > > > > ODE solvers to use DenseMatrix instead of the standard PETSc
> > > > > Matrix. (The matrices are small and dense anyway.)
> > > > > 
> > > > > It seems that DenseMatrix::invert() sometimes gives unexpected results
> > > > > and the reason seems to be a memory access error somewhere in ublas. I
> > > > > don't know if this is a problem with ublas or the way we use
> > > > > it. Garth, could you look again at DenseMatrix::invert() and see if
> > > > > you can spot something obvious?
> > > > > 
> > > > > I have attached the error log from running valgrind and here is the code that
> > > > > reproduces the error:
> > > > > 
> > > > >    DenseMatrix A(3, 3);
> > > > >    A(0, 0) = A(1, 1) = A(2, 2) = 1.0;
> > > > >    A.invert();
> > > > > 
> > > > 
> > > > When you create a matrix, it is not initialised to zero by Boost. You
> > > > can do
> > > > 
> > > >     A.clear(); // the Boost function
> > > > 
> > > > or
> > > 
> > > Thanks! Works perfectly now. I added clear() to DenseMatrix and
> > > DenseVector because I don't see a point in not initializing a dense
> > > matrix. It's probably a very small additional overhead and so much
> > > more user friendly.
> > > 
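For completeness, the fixed version of the snippet above would be (just a
sketch of the fix described, nothing more):

    DenseMatrix A(3, 3);
    A.clear();  // zero all entries; uBlas does not do this on construction
    A(0, 0) = A(1, 1) = A(2, 2) = 1.0;
    A.invert(); // now inverts the identity as expected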
> > 
> > Have you done any benchmarks? If you configure with
> > --enable-optimization, you'll see about a factor 10 speed-up with
> > DenseMatrix and DenseVector.
> 
> You mean compared to Matrix/Vector? Or do you mean the initialization
> to zeros?
> 

No, I mean DenseMatrix/Vector versus DenseMatrix/Vector, i.e. the same
classes built with and without optimization. If you do
"./configure --enable-optimization", the flag -DNDEBUG is used, which
turns off a lot of internal uBlas checks. This is what gives the
significant speed-up.
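To see the effect, something like the following rough timing loop can be
compiled once with and once without --enable-optimization (a sketch only;
it assumes the usual dolfin.h header and the DenseMatrix interface used
above):

    #include <ctime>
    #include <iostream>
    #include <dolfin.h>

    int main()
    {
      dolfin::DenseMatrix A(3, 3);

      std::clock_t t0 = std::clock();
      for (int i = 0; i < 100000; i++)
      {
        A.clear();
        A(0, 0) = A(1, 1) = A(2, 2) = 2.0;
        A.invert();
      }
      double t = double(std::clock() - t0) / CLOCKS_PER_SEC;

      // With -DNDEBUG the internal uBlas checks are compiled out, which is
      // where the speed-up comes from.
      std::cout << "Time: " << t << " s" << std::endl;
      return 0;
    }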

> > > >     A = 0; // DOLFIN wrapper for A.clear() (provides compatibility with
> > > >            // Matrix, but I prefer A.clear() because A = x only works
> > > >            // for x = zero.)
> > > 
> > > Assignment to a scalar should work for any value of the scalar for a
> > > dense matrix. It's only for the sparse where we need the special case.
> > > (Then we don't want to initialize all elements as nonzero.)
> > > 
> > 
> > I agree. I would like to remove the function "operator= real 0" from
> > Matrix and use something else to imply initialisation. clear() is good,
> > but I recall that it's used for something else in Matrix. I want to keep
> > the meaning of functions that are common to DenseMatrix and DenseVector
> > identical. 
> 
> It looks like we use it in Vector but not in Matrix to clear all
> entries (remove them). The same name is used in std::vector and
> std::list to remove elements, so boost and STL seem to differ in the
> meaning of "clear". I'm not sure what we should use, but I think
> 
>     A = 0.0;
> 
> looks nice for setting all elements to zero in a sparse matrix.
> 
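For the dense case, the scalar assignment could be as simple as this
(hypothetical sketch; I'm assuming size(0)/size(1) return the dimensions
and that real/uint are the usual DOLFIN typedefs):

    // Hypothetical: assign a scalar to every entry of a dense matrix.
    const DenseMatrix& DenseMatrix::operator= (real a)
    {
      for (uint i = 0; i < size(0); i++)
        for (uint j = 0; j < size(1); j++)
          (*this)(i, j) = a;
      return *this;
    }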
> > > > I left it up to the user to initialise the matrix because often all
> > > > values will be set by the user, so initialisation is just extra overhead
> > > > for some problems.
> > > 
> > > I would argue that we should put in the clear() automatically. There's a
> > > trade-off of course, but if you don't make any decisions at all for
> > > the user you end up with PETSc and a lot of overhead in writing the
> > > code.
> > 
> > Is that a veiled comment on PETSc? :-).
> 
> PETSc is really good but I really don't like its user interface.
> 
> > On this point (protecting the user), what if we remove or make
> > protected the functions that provide access to individual elements
> > of Matrix and Vector? The problem when setting values is that
> > 
> >   MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
> >   MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
> > 
> > are called every time an element is set, which is really slow. PETSc is
> > not designed for element-by-element access.
> 
> We could do that, but I'm not sure we should remove a natural
> operation just because it is slow. It may be good to have for testing
> at least.
> 

My reason for removing it would be that PETSc is not designed for it.
It is useful to have, but I would like to make it "difficult" to use for a
user who shouldn't be using it. Perhaps some warning comments in the
code, and eventually in the manual, will be enough.
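For reference, the pattern PETSc is designed for is to insert values in
blocks and assemble once at the end, roughly like this (a sketch with the
raw PETSc C API, error checking omitted; A is assumed to be a Mat that has
already been created and preallocated):

    // Insert a 2x2 block of values in a single call.
    PetscInt    rows[2] = {0, 1};
    PetscInt    cols[2] = {0, 1};
    PetscScalar vals[4] = {1.0, 2.0,
                           3.0, 4.0};
    MatSetValues(A, 2, rows, 2, cols, vals, ADD_VALUES);

    // ... more MatSetValues() calls ...

    // Assemble once, not after every element.
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);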

> One thing we could do would be to put in a warning the first time it
> is used. The Event class is good for this:
> 
>     class Matrix
>     {
>       ...
>     private:
> 
>       Event access_slow;
> 
>     };
> 
>     Matrix::Matrix(...) : access_slow("Element-wise access of sparse
>                                        matrices is slow, consider using...");
> 
> and then just do
> 
>     access_slow();
> 
> in Matrix::operator().
> 
> Events are just printed once (or as many times as specified). The
> overhead of calling the event() is very small and shouldn't matter
> since the access is slow anyway.
> 
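(For illustration only, this is roughly the behaviour I understand the
Event class to have - not its actual implementation, just a hypothetical
stand-in that prints a message at most once:)

    #include <iostream>
    #include <string>

    class WarnOnce
    {
    public:
      WarnOnce(const std::string& message) : message(message), shown(false) {}

      void operator() ()
      {
        if (!shown)
        {
          std::cout << message << std::endl;
          shown = true;
        }
      }

    private:
      std::string message;
      bool shown;
    };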
> > Another point - now that we have a Boost dependency, how about using
> > more of its features? It has some nice smart pointer classes, so we could
> > get rid of a lot of "new" and "delete"; this would be useful for
> > sub-functions that share a vector. Also, uBlas has classes "c_vector"
> > and "c_matrix" in which the data is stored in a regular C++ array and
> > which provide vector/matrix functionality (the maximum size must be known
> > at compile time). They have very low overhead compared to a regular
> > array. This could be useful in BilinearForm to have a nice-looking
> > matrix-like representation and, as Johan J suggested before, could be
> > used for the affine maps.
> 
> I'm all for using boost. I haven't used it much myself so I'm not very
> familiar with it, but let's see what we can use.
> 
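As a concrete example of the kind of thing I mean (a sketch only; the
variable names are made up):

    #include <boost/shared_ptr.hpp>
    #include <boost/numeric/ublas/matrix.hpp>
    #include <boost/numeric/ublas/vector.hpp>

    namespace ublas = boost::numeric::ublas;

    // Fixed-size types backed by plain C arrays (maximum size known at
    // compile time), e.g. for the affine map on a tetrahedron.
    ublas::c_matrix<double, 3, 3> K;   // Jacobian of the affine map
    ublas::c_vector<double, 3> x0;     // translation part

    // A vector shared between objects without manual new/delete; the last
    // owner going out of scope releases it.
    boost::shared_ptr<ublas::vector<double> > x(new ublas::vector<double>(100));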
> About the matrix classes again, what would the overhead be of not
> making GenericMatrix a template? The only functions we call during
> assembly (the templated functions in FEM.h) are
> 
>     GenericMatrix::init()
>     GenericMatrix::add() 
>     GenericMatrix::apply()
>     GenericMatrix::size()
>     GenericMatrix::ident()
>     GenericMatrix::nzmax()
> 
> The only one of these that is called more than once or twice is add(),
> and the cost of adding the values is (I guess) significant compared to
> the run-time resolution of the virtual function.
> 

For GenericMatrix::add(), this is true for problems with larger
Jacobians and RHS vectors. For low-order Poisson, for example, there will
be some overhead.

We can try this. Before making the change, it would be nice to benchmark
the assembly.
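For reference, the non-templated version would look something along these
lines (a rough sketch; the signatures are guesses based on the function
list above, with real/uint the usual DOLFIN typedefs):

    // Hypothetical non-templated interface: the concrete backends (uBlas
    // dense, PETSc sparse) override the virtual functions, and FEM.h calls
    // them through a GenericMatrix& instead of a template parameter.
    class GenericMatrix
    {
    public:
      virtual ~GenericMatrix() {}

      virtual void init(uint M, uint N) = 0;
      virtual void add(const real block[], const int rows[], int m,
                       const int cols[], int n) = 0;
      virtual void apply() = 0;
      virtual uint size(uint dim) const = 0;
      virtual void ident(const int rows[], int m) = 0;
      virtual uint nzmax() const = 0;
    };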

> Note that I don't suggest the whole envelope-letter design, just
> removing the templating of GenericMatrix. This means no duplication
> (only collecting the common functions in GenericMatrix as today) and
> simpler implementation (removing the templating).
>
> /Anders
> 
> > 
> > 
> > Garth
> > 
> > > 
> > > /Anders
> > > 
> > > 
> > > > Garth
> > > > 
> > > > > How about adding solve(DenseVector& x, const DenseVector& b) to DenseMatrix?
> > > > > 
> > > > > /Anders
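Regarding solve(): uBlas has LU routines that should make this
straightforward, something like the following (untested sketch, assuming
DenseMatrix and DenseVector derive from ublas::matrix<real> and
ublas::vector<real>):

    #include <boost/numeric/ublas/lu.hpp>

    void DenseMatrix::solve(DenseVector& x, const DenseVector& b)
    {
      namespace ublas = boost::numeric::ublas;

      // Factorise a copy so the matrix itself is not overwritten.
      ublas::matrix<real> LU(*this);
      ublas::permutation_matrix<std::size_t> pm(LU.size1());
      ublas::lu_factorize(LU, pm);

      // Forward/back substitution; the solution ends up in x.
      x = b;
      ublas::lu_substitute(LU, pm, x);
    }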



