
Re: FEniCS

 

> The decision to use a tool shouldn't be made on whether said tool has undesirable features, but on whether it is easier to live with those features or to redo the work yourself. The only point I heard Rob trying to make in the recent discussion was that you really want to consider whether it is a good idea to spend several months writing your own symbolic system rather than a couple of days figuring out how to work with an already existing one.

If the requirement is to solve really big forward and backward problems on really big machines, then Sundance is something that hangs together and does the whole shebang, thanks in large part to its symbolic system. It is the state of the art in this respect -- nothing else that exists today can solve these problems at this scale on these machines. (Note that I didn't suggest that we all become FreeFEM developers.) This is a statement about *current capabilities*, not about whether some other system such as DOLFIN will eventually do much better.

There are two issues at hand:
1.) The automation of computational mathematical modeling -- there is already a code that does this. I was presenting a thesis (not necessarily one I would push very hard on) that the best way to proceed is to pool resources to make the most complete existing system (in a certain norm) better, despite our differences. Saying "we should all become Sundance developers" was one way of putting it, for the sake of argument. An alternate approach would be to take existing pieces of Sundance and reuse them, which would be like letting Matt & Barry parallelize our linear algebra for us.


2.) Research/new ideas. For me, FEniCS is about new ideas in finite element computation rather than a big development push. It's not just about producing code; it's about figuring out the mathematics of the code (not the mathematics of the numerical method). This is an alternate, contrasting reason to focus on components -- we can try out fundamentally new ideas and develop and test them in a controlled context. We can make contributions at the idea level, separate from any particular software incarnation. For example, I think that FFC and FErari represent fundamentally new and exciting ideas in finite element computation. Were we constrained to work *only* in the context of Sundance (or FreeFEM, or FEMLAB), these might be very difficult to try out. Ultimately, if we are going to improve existing codes such as Sundance, or develop ones that outperform them, it has to come from the injection of new ideas.

This is perhaps a weaker view of components than Anders has -- if something like FFC or FErari matures to the point where it can truly stand alone (I don't think FFC is there yet, but it is certainly much closer than FErari), great -- lots of people can use it. On the other hand, even if these remain pieces that sit within the DOLFIN framework, we've still learned something about algorithms that we didn't know before. There can also be interoperability at the idea level -- for example, extending Sundance's run-time system with similar kinds of precomputation, inspired by and properly citing FFC. This is not a dogmatic commitment to a particular software engineering principle -- I am inclined to agree with Kevin that complete modularity will not be fully practical to achieve -- but a recognition that things can and should be polished in bite-sized chunks and, where possible, made available at that level of granularity.

So I speak in a paradox:
1.) I like having a large system that hangs together and am hesitant to put a lot of effort into redeveloping stuff that's already there.
2.) I like having little components where I can try out my mad scientist ideas. If these components really stand on their own and are useful to other projects, as FIAT seems to be (see the small sketch below), great. If I learn something that can help me make a collaborating code better, great. If I learn something that tells me my ideas or implementations are a bunch of crap, I can scrap them.
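
As a concrete illustration of the kind of stand-alone use I have in mind, here is a minimal sketch of FIAT used as an independent component to tabulate a quadratic Lagrange basis on the reference triangle. It assumes a recent FIAT release, and the particular calls (Lagrange, ufc_simplex, tabulate, space_dimension) are simply how that release spells this -- they are illustrative, not anything prescribed in this discussion.

    # Minimal sketch: FIAT as a stand-alone component.
    # Assumes a recent FIAT install; these calls reflect that API.
    from FIAT import Lagrange
    from FIAT.reference_element import ufc_simplex

    triangle = ufc_simplex(2)          # reference triangle
    element = Lagrange(triangle, 2)    # quadratic Lagrange element (6 basis functions)

    points = [(0.0, 0.0), (1.0 / 3.0, 1.0 / 3.0), (0.5, 0.5)]
    tab = element.tabulate(1, points)  # values and first derivatives at the points

    print(element.space_dimension())   # 6
    print(tab[(0, 0)].shape)           # (6, 3): basis functions x evaluation points
    print(tab[(1, 0)].shape)           # x-derivatives, same shape

Any project that can consume a table of basis function values can use FIAT this way, without buying into the rest of our stack.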

These reflect a couple of things about my present position:
1.) I have funding to work on FIAT in certain contexts and to tackle certain kinds of problems with it, not to reinvent bigger systems.
2.) Tenure-track job: It is more important to come up with cool ideas and to solve problems than it is to be immersed in code development that isn't really cutting-edge from the math/CS standpoint. I can work on FIAT in the context of existing codes, I can work on FErari by getting data from FFC, etc. But unless I have a new mathematical theory of log systems that follows from the Masur Separation Theorem, writing such a system can be a dangerous time-suck for me; I have been advised that "you don't get tenure for writing code, you get tenure for writing papers". I used to think that writing a mesh library fell into this category, but then Sieve came along -- maybe there's something to be said for the axiomatization of log systems :) This makes me very inclined to use as much existing code as possible.
3.) My disposition in research, in any case, seems to be to come up with elegant formulations of "little" problems (e.g. FIAT, FErari) that seem very important. Hopefully these little pieces can make bigger projects better.

So, this expounds upon my previous emails and hopefully explains how I am approaching research.

Rob
