
dolfin team mailing list archive

Re: [PyCC-dev] DOLFIN/PyCC merge

 

As I understand it, "Matrix" is just a convenience class for users who
don't care about the underlying format. It should never be used at the
library level (because it ties the library to one format for each
compilation of the library), and can be ignored by anybody who
actually cares about which library they'll use. The gain over a
"typedef PETScMatrix Matrix;" is that the latter allows the user
to write code that stops compiling if you change the typedef to,
e.g., "typedef EpetraMatrix Matrix;".

But there are a few things that need to be addressed with respect to
Python usage:

In C++:

Matrix A;

or in Python:

A = dolfin.Matrix()
A = dolfin.PETScMatrix()
A = dolfin.EpetraMatrix()
A = pycc.CRSMatrix()

A is now an object representing a particular matrix format with no
data and size (0,0). This is ok so far.

assemble(A, ...);
The assembly itself is fine, but the initialization before the
assembly may not be general enough:

If a dolfin::SparsityPattern is built inside dolfin::assemble(...) for
any matrix type, then a conversion between sparsity pattern structures
may be needed, possibly ~doubling the cost of the initialization
stage. Of course, this is not really important unless the application
has sparsity patterns that change often. However, if the Assembler is
supposed to keep a cache of initialized patterns, this gets tricky.

Creating an interface dolfin::GenericSparsityPattern could partially
solve this, but dolfin::assemble(...) doesn't know the actual type of
its GenericTensor argument A, so how can it ever build a concrete
GenericSparsityPattern of the matching type?

A possible solution to the above problem (though it looks like an
inversion of the natural order to me):

class GenericTensor
{
public:
  virtual GenericSparsityPattern* createSparsityPattern() const = 0;
};
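In Python terms, the factory-method inversion could be sketched roughly as follows. All class and function names here are hypothetical stand-ins, not actual dolfin/pycc API; the point is only that assemble() never needs to know the concrete tensor type:

```python
# Sketch of the factory-method "inversion": each tensor type knows how to
# create its own matching sparsity pattern, so the assembler only ever
# works through the generic interfaces. All names are hypothetical.
from abc import ABC, abstractmethod

class GenericSparsityPattern(ABC):
    @abstractmethod
    def insert(self, rows, cols): ...

class GenericTensor(ABC):
    @abstractmethod
    def create_sparsity_pattern(self) -> GenericSparsityPattern: ...

class DictSparsityPattern(GenericSparsityPattern):
    """Toy pattern backed by a set of (row, col) pairs."""
    def __init__(self):
        self.entries = set()
    def insert(self, rows, cols):
        self.entries.update((i, j) for i in rows for j in cols)

class DictMatrix(GenericTensor):
    def create_sparsity_pattern(self):
        return DictSparsityPattern()

def build_pattern(A, cell_dofs):
    # Only GenericTensor is visible here, yet a concrete pattern is built
    pattern = A.create_sparsity_pattern()
    for dofs in cell_dofs:
        pattern.insert(dofs, dofs)
    return pattern
```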

As a sidenote (or maybe a very central point?), I think avoiding
thinking about parallelism could be a huge mistake at this point. I
like the Epetra solution, which makes parallel assembly very clean.


solve(A, x, b);

In dolfin (C++), this is dispatched compile-time to the appropriate
library-specific function, right?

In Python (PyDolfin, PyCC), that's not possible, so a similar syntax
will require either a run-time choice of solver function based on the
matrix type, or an import-time choice of both the "Matrix" class and
the "solve" function:
 from dolfin.Epetra import *
or
 from dolfin.PETSc import *
or
 from pycc.LinearAlgebra import *
or?
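The run-time alternative could be sketched along these lines. The registry, solver functions, and matrix classes below are hypothetical dummies, not real PyDolfin/PyCC API; real wrappers would register their own types:

```python
# Sketch of a run-time choice of solver function based on matrix type.
# All names are hypothetical stand-ins for the real backend bindings.
_solvers = {}

def register_solver(matrix_type, solver):
    _solvers[matrix_type] = solver

def solve(A, x, b):
    # Walk the MRO so a subclass falls back to its base class's solver
    for cls in type(A).__mro__:
        if cls in _solvers:
            return _solvers[cls](A, x, b)
    raise TypeError("no solver registered for %s" % type(A).__name__)

# Dummy stand-ins for backend matrix types and their solver functions
class PETScMatrix: pass
class EpetraMatrix: pass

register_solver(PETScMatrix, lambda A, x, b: "petsc")
register_solver(EpetraMatrix, lambda A, x, b: "epetra")
```

This keeps the `solve(A, x, b)` syntax identical for all backends, at the cost of one dictionary lookup per call.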


martin



2007/7/23, Anders Logg <logg@xxxxxxxxx>:
Here's the promised class diagram for the updated linear algebra design
(attached).

Here's 2 ways to look at it:

1. It's the same as in DOLFIN 0.7.0, but instead of Matrix being a
typedef for either uBlasSparseMatrix or PETScMatrix, it is now a wrapper
class for either of these two. The advantage is that Matrix always has
the same interface. Previously, when using Matrix in DOLFIN, you could
call uBlas-specific functions, and then your code wouldn't compile if
you recompiled with PETSc instead of uBlas.

The main idea here is that if you just want a matrix and don't care
which representation it has, then just use a Matrix:

Matrix A;
assemble(A, ...);
solve(A, x, b);

If you know you need a PETSc matrix, then create a PETScMatrix.

2. It's a static envelope-letter design. The pointer to the
implementation is not allowed to dynamically change type. One could use
this to have runtime conversion between matrix representations, but we
don't allow this to keep things simple. The Function class (and
GenericFunction, DiscreteFunction, UserFunction etc) is an example of a
standard dynamic envelope-letter design.
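A minimal sketch of the static envelope-letter idea, in Python for brevity (the backend classes here are dummies, not the real uBlas/PETSc bindings): the Matrix envelope forwards all calls to a backend letter that is fixed at construction and never swapped afterwards.

```python
# Static envelope-letter sketch: Matrix (the envelope) delegates every
# call to a backend matrix (the letter) chosen once, at construction.
# Backend classes are dummies standing in for real wrapper types.
class UBlasBackend:
    name = "uBlas"
    def size(self, dim): return 0

class PETScBackend:
    name = "PETSc"
    def size(self, dim): return 0

class Matrix:
    def __init__(self, backend=UBlasBackend):
        self._letter = backend()   # concrete type fixed here, never changed
    def size(self, dim):
        return self._letter.size(dim)   # envelope forwards to letter
    @property
    def backend_name(self):
        return self._letter.name
```

Since the letter's type never changes, the Matrix interface stays the same regardless of which backend was compiled in, which is exactly the advantage over the old typedef.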

/Anders


Martin Sandve Alnæs wrote:
> 2007/7/11, Ola Skavhaug <skavhaug@xxxxxxxxx>:
>> Martin Sandve Alnæs skrev den 11/07-2007 følgende:
>> > I suggest we discuss this next week over a beer since Ola, Anders and
>> > I are all in Zurich?
>> >
>> > In particular the first two issues:
>> > - Linear algebra
>> > - Function
>> > both of which will require some work on my part in pycc::FEMAssembler
>> > and SFC (SyFi Form Compiler). I'd like to get this over with soon.
>> >
>> > Martin
>>
>> I will not be in Zurich next week. I'm in Texas. Let's discuss this
>> when I
>> come back, or you go ahead and make a proposal.
>>
>> Howdy,
>> Ola
>
> I've changed the pycc::Generic* interfaces found in
> FEMAssembler/linalg/ to contain almost all the functions from
> dolfin::Generic*, with corresponding changes in the assembler.
>
> Proposal for the linear algebra:
> - EpetraMatrix,Vector,Scalar is moved to dolfin
> - CRSMatrix stays in pycc (moves to MatSparse, or is merged with CRS)
> - Use of the dolfin::GenericVector interface, its subclasses (like
> dolfin::EpetraVector), the underlying objects (like Epetra_FEVector),
> and numpy arrays can need some discussion. There are possible
> collisions between Ola's previous philosophy ("let's use raw double*
> everywhere") and having a shared array interface.
> - What to include in Generic* outside what's needed for assembly can
> be discussed.
>
> Proposal for functions:
> - We use dolfin::Function instead of pycc::*Function everywhere.
>
> Proposal for FEMAssembler after this is done:
> - rm -rf FEMAssembler/
>
> martin

