
fenics team mailing list archive

Re: components, a counter-example?

 

On Mon, Sep 26, 2005 at 02:34:34PM -0500, Robert C. Kirby wrote:
> I'd like to raise the question as to what degree we can truly  
> modularize all of our code into discrete pieces.  I am going to use  
> the multiadaptive ODE solver for the sake of discussion because:
> 
> 1.) It's a really neat piece of work, one of the most interesting  
> ideas in ODE I've seen, that other people should be aware of and use.
> 2.) It is currently neither parallel nor available as a standalone  
> package.
> 3.) I like to pick on Anders :)

Please go ahead... :-)

> Suppose that we want to parallelize and distribute it to maximize its  
> effect on the world and become "the standard".  (As an aside, there  
> are parallel time-stepping packages like Sundials that are seen as  
> "standards" in certain senses.  A parallel multi-adaptive solver that  
> can also do DAEs could be a worthy competitor.)  Obviously, to do this,
> we will need some way of communicating components of vectors across  
> processors.  As usual, there are two main options here:
> 
> 1.) we can roll our own using standard C arrays and low-level MPI  
> calls :(   I believe that Sundials has recreated things that look  
> like PETSc internally to avoid doing very low-level stuff at every turn.
> 2.) we can use some well-developed, maintained package that does this  
> for us, like PETSc.
> 
> Moreover, depending on how we implement algorithms inside the solver,  
> we might use some linear and nonlinear solution technology, further  
> increasing the reliance on PETSc inside.
> 
> So, it seems that if we want to leverage somebody else's parallelism,  
> we have to buy into a particular package.  Sure, somebody could  
> always strip out PETSc and use Trilinos vectors, but that involves  
> grubbies and is tough to automate without further grubbies.

One could parametrize over the choice of linear algebra backend and
do everything through a wrapper/interface that just provides the
minimal set of functions that is actually used. I think Kevin does
something like this in Sundance with the mesh (class MeshBase).
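
For concreteness, a minimal sketch of what such a wrapper could look
like (the class names below are made up for illustration, not existing
DOLFIN or Sundance code; a real backend would wrap a PETSc Vec or an
Epetra_Vector instead of a plain array):

#include <vector>

// Backend-independent vector interface: only the operations the ODE
// solver actually needs.
class GenericVector
{
public:
  virtual ~GenericVector() {}
  virtual unsigned int size() const = 0;
  virtual double get(unsigned int i) const = 0;
  virtual void set(unsigned int i, double value) = 0;
  virtual void axpy(double a, const GenericVector& x) = 0; // this += a*x
};

// Trivial serial backend, just to show the idea. A PETSc-based backend
// would implement the same interface on top of VecGetValues/VecSetValues/
// VecAXPY, a Trilinos-based one on top of Epetra_Vector, and the solver
// would not know the difference.
class SimpleVector : public GenericVector
{
public:
  explicit SimpleVector(unsigned int n) : values(n, 0.0) {}
  unsigned int size() const { return (unsigned int) values.size(); }
  double get(unsigned int i) const { return values[i]; }
  void set(unsigned int i, double value) { values[i] = value; }
  void axpy(double a, const GenericVector& x)
  {
    for (unsigned int i = 0; i < size(); i++)
      values[i] += a*x.get(i);
  }
private:
  std::vector<double> values;
};

A solver written against GenericVector never mentions PETSc or Trilinos;
only the backend classes do, so switching is just a matter of
instantiating a different subclass.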

I'm not saying that one should always make everything generic, but one
should try to do it whenever there's a reason, for example if you want
to be able to switch between different linear algebra backends.

> Now, suppose that I want to take the new standard in parallel ODE/DAE  
> and have Sundance use it.  This seems rather clunky, as I'm forced to  
> compile and link *two* linear algebra packages into my code --  
> Trilinos for Sundance and PETSc for the time-stepper.  Even if I  
> never see PETSc thanks to a well-designed interface, it can  
> contribute significantly to the size of the executable and forces me  
> to keep up installations of two big codes.

That's not necessary; see above.

> This seems clunky.  Even though it's less modular, it would simplify  
> my life as an end-user if the multiadaptive solver were presented in  
> an integrated fashion as part of Trilinos.  (Conjecture: Part of  
> setting a standard in scientific computing is making end-users who  
> don't necessarily share your software engineering philosophy or other  
> kinds of philosophy happy.)

Packaging is a very important issue, but it's a separate issue. You
can have GNOME on most GNU/Linux distributions and a user does not
have to know that GNOME is composed of a million different components.

> From Anders' standpoint, this is obviously suboptimal -- he would  
> have to develop in Trilinos, release his code in Trilinos, etc and  
> *not* as a stand-alone component.  Further, if he wants all the PETSc  
> universe to have his time-stepping technology, then he must develop  
> two different versions of the code...
> 
> Software should be modularized as much as makes sense -- I don't  
> think it can be absolutized.  We should discuss this both in general  
> as well in the particular case of helping Anders' very nice method  
> become the standard for time-stepping.

I agree. Software should be generic when it makes sense.

Concerning the multi-adaptive solver, this touches on another question
we discussed before. I'm not pushing it very hard simply because it's
not ready to be pushed. Sure, the solver is cool and it's faster than
other solvers on some problems, but the overhead is still substantial.
That's also the reason I'm not jumping into running it in parallel or
extending it to DAEs. There's still a lot of work to do before then.

> This is also an illustration of how we can't just come up with a good  
> idea and declare victory.  Just having a stand-alone module doesn't  
> make it a standard.  You have to get people to use it and swear by  
> it.  (David Keyes' enthusiasm has contributed heavily to PETSc's  
> success.)  One approach to making this happen is to suffer some  
> inconvenience (loss of identity?) and implement the ODE solver in a  
> specific widely-used system like PETSc where people are more likely  
> to use it with a minimum of install/configure/maintain effort on
> their part.

Same thing here. I'm not confident enough in the solver as it is to
push it onto someone else.

> Besides breadth of distribution, it allows more fair comparisons
> with current "standard" methods in terms of accuracy, cost, etc and
> can help justify using the method to people who don't care anything
> about Galerkin methods, just getting the solution.  This can help
> make the technical case and hopefully not tell us things we don't
> want to hear.

In general, no one cares about the size of the global error. At least
not when solving ODEs. People only care about the local error (which
is something very different).
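
To make the distinction concrete (these are just the standard textbook
definitions, nothing specific to the multi-adaptive solver): if u is the
exact solution and U the computed one, the global error at the end point
is

  e(T) = || u(T) - U(T) ||,

while the local error on a step [t_{n-1}, t_n] is the error committed on
that step alone,

  l_n = || u_n(t_n) - U(t_n) ||,

where u_n solves the ODE exactly on the step with initial value
U(t_{n-1}). Standard codes estimate and control l_n (per step or per
unit step), not e(T).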

> Are there other approaches that will maximize Anders' impact on
> scientific computing [...]

This would make an interesting discussion... :-)

> [...] and generally guide the FEniCS project in
> software engineering?

/Anders


