

Re: components, a counter-example?

 


So, it seems that if we want to leverage somebody else's parallelism,
we have to buy into a particular package.  Sure, somebody could
always strip out PETSc and use Trilinos vectors, but that involves
grubbies and is tough to automate without further grubbies.

One could parametrize over the choice of linear algebra backend and
do everything through a wrapper/interface that just provides the
minimal set of functions that is actually used. I think Kevin does
something like this in Sundance with the mesh (class MeshBase).

I'm not saying that one should always make everything generic, but one
should try to do it whenever there's a reason, for example if you want
to be able to switch between different linear algebra backends.


There is a certain amount of work associated with doing this, and it's easy to find yourself writing a one-off wrapper like this for every different situation (perhaps this is a good reason for CCA?). It's also unclear whether you can make an abstraction that will make "proper" use of the underlying packages. But for just using a couple of features (get & set values, for example) it's not so bad; for hooking up nonlinear solvers, with lots of parameters, it gets harder. Granted, you can "default to PETSc"...
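
Just to make concrete what I have in mind (a rough sketch only -- the class names here are made up, and this is not the actual Sundance, DOLFIN or PETSc interface), the level of abstraction would be something like this, with the solver written against a tiny abstract vector class and each backend supplying an adapter:

  // Hypothetical minimal vector interface: just the operations the
  // time-stepper actually needs, so any backend (PETSc, Epetra, ...)
  // can be hidden behind an adapter.
  #include <cstddef>
  #include <cstdio>
  #include <vector>

  class GenericVector
  {
  public:
    virtual ~GenericVector() {}
    virtual std::size_t size() const = 0;
    virtual double get(std::size_t i) const = 0;
    virtual void set(std::size_t i, double value) = 0;
    virtual void axpy(double a, const GenericVector& x) = 0; // this += a*x
  };

  // One adapter, backed by std::vector so the sketch is self-contained;
  // a PETSc or Epetra adapter would look identical from the outside.
  class SimpleVector : public GenericVector
  {
  public:
    explicit SimpleVector(std::size_t n) : values(n, 0.0) {}
    std::size_t size() const { return values.size(); }
    double get(std::size_t i) const { return values[i]; }
    void set(std::size_t i, double value) { values[i] = value; }
    void axpy(double a, const GenericVector& x)
    {
      // assumes x.size() == size()
      for (std::size_t i = 0; i < size(); ++i)
        values[i] += a*x.get(i);
    }
  private:
    std::vector<double> values;
  };

  // Solver code sees only the interface and never knows which backend
  // is underneath.
  void euler_step(GenericVector& u, const GenericVector& dudt, double dt)
  {
    u.axpy(dt, dudt); // u <- u + dt*du/dt
  }

  int main()
  {
    SimpleVector u(3), dudt(3);
    for (std::size_t i = 0; i < u.size(); ++i)
    {
      u.set(i, 1.0);
      dudt.set(i, static_cast<double>(i));
    }
    euler_step(u, dudt, 0.1);
    std::printf("u = (%g, %g, %g)\n", u.get(0), u.get(1), u.get(2));
    return 0;
  }

The solver then depends only on the abstract class, and the choice of backend is made wherever the concrete adapter is constructed -- which is exactly the part that gets tedious once you go beyond get & set values.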


Now, suppose that I want to take the new standard parallel ODE/DAE
solver and have Sundance use it.  This seems rather clunky, as I'm forced to
compile and link *two* linear algebra packages into my code --
Trilinos for Sundance and PETSc for the time-stepper.  Even if I
never see PETSc thanks to a well-designed interface, it can
contribute significantly to the size of the executable and forces me
to keep up installations of two big codes.

Not necessary, see above.


No, but I'm forced to do a certain amount of code development that I don't want to do -- and I am a stupid physicist who hates to develop code, cares nothing for the virtues of Galerkin methods and adjoint estimates, and takes whatever timestepper is inside Trilinos because they tell me it works and I don't have to write any "low-level" code to use it. The claim I put forward is that "setting a standard", insofar as it correlates with maximizing the user base, can be in tension with designing the code the way that may seem best to you as a sophisticated programmer.

This seems clunky.  Even though it's less modular, it would simplify
my life as an end-user if the multi-adaptive solver were presented in
an integrated fashion as part of Trilinos.  (Conjecture: part of
setting a standard in scientific computing is making end-users happy,
even end-users who don't necessarily share your software engineering
philosophy or any other kind of philosophy.)

Packaging is a very important issue, but it's a separate issue. You
can have GNOME on most GNU/Linux distributions and a user does not
have to know that GNOME is composed of a million different components.

Trilinos is also a million different packages, actually (each with its *own* configure script, as I am painfully reminded each time I run its configuration for Sundance...).

From Anders' standpoint, this is obviously suboptimal -- he would
have to develop in Trilinos, release his code in Trilinos, etc and
*not* as a stand-alone component.  Further, if he wants all the PETSc
universe to have his time-stepping technology, then he must develop
two different versions of the code...

Software should be modularized as much as makes sense -- I don't
think it can be absolutized.  We should discuss this both in general
as well as in the particular case of helping Anders' very nice method
become the standard for time-stepping.

I agree. Software should be generic when it makes sense.

Concerning the multi-adaptive solver, this touches on another question
we discussed before. I'm not pushing it very hard simply because it's
not ready to be pushed. Sure, the solver is cool and it's faster than
other solvers on some problems, but the overhead is still substantial.
That's also the reason I'm not jumping into running it in parallel or
on DAEs. There's still a lot of work to do before then.


I claim load balancing will be a big issue in parallel, as it is with all spatially adaptive methods. On the other hand, even the mono-adaptive solver could be a big breakthrough for some of these codes (*arbitrary* order + global error control) and would be easier to make efficient enough for prime time.


In general, no one cares about the size of the global error. At least
not when solving ODEs. People only care about the local error (which
is something very different).

To set a new standard, you have to say this loudly, use small words (you seem to have done this well so far), and show concrete examples of why controlling only local error gets you into trouble (things like collapsing bridges or exploding power plants are especially helpful at convincing engineers; theorems are not). In the absence of catastrophe, people might consider staggering performance results on particular problems they care about. These problems obviously are very special cases and have nothing to do with problems you or anybody else cares about.
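
As a toy illustration of the local-versus-global point (assumptions: forward Euler on y' = y over [0, 10], step size chosen from a local error estimate; nothing here is specific to your solver), the following shows how little a per-step tolerance says about the error at the final time:

  #include <cmath>
  #include <cstdio>

  int main()
  {
    const double T = 10.0;          // final time; exact solution is exp(t)
    const double local_tol = 1e-6;  // per-step (local) error tolerance

    double t = 0.0, y = 1.0;
    double max_local_error = 0.0;

    while (t < T)
    {
      // Local error of one Euler step of size dt starting from y is
      // y*(exp(dt) - 1 - dt) ~ 0.5*y*dt^2; pick dt to meet the tolerance.
      double dt = std::sqrt(2.0*local_tol/y);
      if (t + dt > T)
        dt = T - t;

      const double local_error = y*(std::exp(dt) - 1.0 - dt);
      if (local_error > max_local_error)
        max_local_error = local_error;

      y += dt*y;  // forward Euler step
      t += dt;
    }

    std::printf("max local error per step: %g\n", max_local_error);
    std::printf("global error at t = %g:   %g\n", T, std::fabs(y - std::exp(T)));
    return 0;
  }

Every step keeps its local error below 1e-6, but errors committed early get multiplied by factors up to exp(10) (about 2e4) on their way to the final time, so the global error comes out orders of magnitude above the tolerance. That's the small-words, concrete-example sort of demonstration I mean.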


Rob



