
dolfin team mailing list archive

Re: Linear algebra

 

I agree that the assembly should be accessing the low-level API of
whatever package we choose. But I still think B5 is important for
DOLFIN. When we start moving functionality out of DOLFIN (finite
elements to FIAT, form evaluation to FFC, linear algebra to ...),
DOLFIN will just be an interface and then B5 is everything that
matters.

Concerning the choice of package, PETSc might be the right choice
after all, in particular if the C++ interface is on its way (see
http://www-unix.mcs.anl.gov/petsc/petsc-3/).

/Anders

On Sun, Oct 24, 2004 at 07:42:54PM -0500, Robert Kirby wrote:
> I will reply to the rest of the discussion later.  But let me respond 
> to the criteria Anders lists:
> 
> On Oct 24, 2004, at 11:59 AM, Anders Logg wrote:
> 
> >It's about time we came to a decision on the linear algebra of DOLFIN
> >(or FEniCS). This is not a poll, but your opinion might help us make
> >the correct decision.
> >
> >The current linear algebra of DOLFIN was implemented with simplicity
> >in mind, but it's not parallel and I don't think we have the time and
> >resources to maintain a complete linear algebra package of our own.
> >
> >There are a couple of different options we could consider:
> >
> >A1. Use an existing linear algebra package (see list below) as it is.
> >
> >A2. Use an existing linear algebra package, but create the appropriate
> >wrappers in DOLFIN (could just be a list of typedefs).
> >
> >A3. Develop and maintain an existing linear algebra package as a FEniCS
> >project.
> >
> 
> A1 is preferable, with A2 a distant second.  I see no circumstances 
> under which I would support A3.
> 
> 
> I completely agree with B1 through B4.  However, I take exception to B5 
> for the following reasons:
> 
> 1.) Performance -- especially at the granularity of setting matrix 
> values, we want a low-level API to some C library: hand it a double * 
> and an int * with the indices, and let it manage the data structures 
> as efficiently as it can.
> 2.) Why not C++?  First of all, PETSc (e.g.) is not *that* ugly, and 
> besides, our goal is to generate the fastest possible code.  It is easy 
> for us to write code that generates calls to the low-level API (we do 
> it once and we're done).  Performance is paramount here, and 
> generating C++ out of FFC will almost always lead to slower code.  We 
> don't have to settle for this.
> 3.) Of course, in the programming environment, we want some kind of 
> high-level handles for the matrices generated.  However, this can 
> easily be accomplished by wrappers.  So, if we were using PETSc (for 
> example), we code to the lower-level API when we generate code and 
> allow application programmers to use the high-level interface (since 
> they don't care about particular matrix entries anyway) to pass 
> matrices to solvers in C++ or Python.  It's all about granularity.
> 
> 
> >Whatever choice we make, the linear algebra package should satisfy the
> >following criteria:
> >
> >B1. It should be open-source.
> >
> >B2. It should be standard or close to standard.
> >
> >B3. It should be actively maintained.
> >
> >B4. It should be parallel.
> >
> >B5. It should have a nice API (C++ style).
> >
> >Here's a list of available options. The list is not complete, but
> >it's a start:
> >
> >1. PETSc: http://www-unix.mcs.anl.gov/petsc/petsc-2/
> >
> >Pros: Satisfies B1-4
> >Cons: Does not seem to satisfy B5. Maybe a future version of PETSc 
> >will?
> >
> >2. MTL: http://www.osl.iu.edu/research/mtl/
> >
> >Pros: Satisfies B1 (?) and B5
> >Cons: Does not satisfy B2-4?
> >
> >3. uBLAS: http://www.boost.org/libs/numeric/ublas/doc/index.htm
> >
> >Pros: Satisfies B1-3, B5
> >Cons: Does not satisfy B4
> >
> >4. POOMA: http://www.codesourcery.com/pooma/
> >
> >Pros: ?
> >Cons: ?
> >
> >5. Sparselib++: http://math.nist.gov/sparselib++/
> >
> >Pros: ?
> >Cons: ?
> >
> >6. TNT: http://math.nist.gov/tnt/
> >
> >Pros: ?
> >Cons: ?
> >
> >7. Blitz++: http://www.oonumerics.org/blitz/
> >
> >Pros: ?
> >Cons: ?
> >
> >/Anders
> >