
Re: VariationalProblem interface

 

On 13 June 2011 16:37, Anders Logg <logg@xxxxxxxxx> wrote:
> On Mon, Jun 13, 2011 at 03:30:22PM +0100, Garth N. Wells wrote:
>>
>>
>> On 13/06/11 14:54, Anders Logg wrote:
>> > On Mon, Jun 13, 2011 at 02:42:57PM +0100, Garth N. Wells wrote:
>> >>
>> >>
>> >> On 13/06/11 14:10, Anders Logg wrote:
>> >>> On Mon, Jun 13, 2011 at 01:09:20PM +0100, Garth N. Wells wrote:
>> >>>>
>> >>>>
>> >>>> On 13/06/11 12:51, Marie E. Rognes wrote:
>> >>>>> On 06/13/2011 10:03 AM, Garth N. Wells wrote:
>> >>>>>>
>> >>>>>>
>> >>>>>> On 12/06/11 22:54, Anders Logg wrote:
>> >>>>>>> On Wed, Jun 08, 2011 at 11:47:10PM +0200, Anders Logg wrote:
>> >>>>>>>> I'm on a cell phone and can't engage in any discussions but I just
>> >>>>>>>> want to throw something in. Marie and I discussed at some point
>> >>>>>>>> another option which is to write everything as F = 0 and let UFL
>> >>>>>>>> compute J even for linear problems. J would be an optional
>> >>>>>>>> variable. Then we could have a common interface and also a common
>> >>>>>>>> solver.
>> >>>>>>>
>> >>>>>>
>> >>>>>> I think that this would be a nice option, but I don't think that it can
>> >>>>>> work without a more elaborate interface than just
>> >>>>>>
>> >>>>>>    pde = VariationalProblem(F, bcs)
>> >>>>>>
>> >>>>>> because UFL cannot know which coefficient in a form is unknown, and
>> >>>>>> therefore which function to compute the linearisation with respect to.
>> >>>>>> Moreover,
>> >>>>>>
>> >>>>>>   - UFL cannot compute the linearisation for all forms
>> >>>>>>   - In some cases it's not desirable to let UFL do the linearisation
>> >>>>>>
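(For concreteness: a minimal sketch of the F = 0 style under discussion, using
the DOLFIN/UFL Python interface of the time. The point is that the unknown has
to be an explicitly named Function so that UFL knows what to linearise about;
the mesh, space and forms below are placeholder choices, not a proposed
interface.)

    from dolfin import *

    mesh = UnitSquare(16, 16)
    V = FunctionSpace(mesh, "CG", 1)

    u = Function(V)        # the unknown: a Function, not a TrialFunction
    v = TestFunction(V)
    f = Constant(1.0)

    # Residual form F(u; v) = 0. From F alone, UFL cannot tell that u
    # (rather than f) is the unknown coefficient.
    F = inner(grad(u), grad(v))*dx - f*v*dx

    # Linearisation about u, possible only because u is named explicitly;
    # for some forms this is not computable, or not what the user wants.
    J = derivative(F, u)
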
>> >>>>>
>> >>>>> Some specific responses: First, the moment solve is called (with a
>> >>>>> coefficient), it is known which function to compute the linearization
>> >>>>> with respect to.
>> >>>>
>> >>>> Which means that the simple call
>> >>>>
>> >>>>   u = pde.solve()
>> >>>>
>> >>>> will not be possible.
>> >>>
>> >>> Why do you call it "pde" and not "problem"?
>> >>
>> >> The name has nothing to do with the discussion.
>> >
>> > This discussion is about just that: which names to use.
>> >
>>
>> Yes, the class name, but not what I dream up for an object:
>>
>>   xxx = FooVariationalFoo(....)
>>
>>   u = xxx.solve()
>>
>> >> I could call it xxx.
>> >>> Once upon a time it was
>> >>> called that, so yet another option would be
>> >>>
>> >>>   LinearPDE
>> >>>   NonlinearPDE
>> >>>
>> >>> which is short enough.
>> >>>
>> >>> What speaks against this, and likely one of the reasons we switched to
>> >>> VariationalProblem, is that the above looks weird when used to define a
>> >>> projection which does not involve any derivatives.
>> >>>
>> >>> And here's yet another option:
>> >>>
>> >>>   LinearProblem
>> >>>   NonlinearProblem
>> >>>
>> >>
>> >> This looks ok, but it's a secondary issue that only matters if we agree
>> >> that we should have separate classes for linear and nonlinear equations
>> >> (or have we agreed on this?).
>> >
>> > Not yet. I think we should settle first on whether we want
>> >
>> > 1. a common class for (non)linear problems
>> > 2. two classes
>> >
>>
>> Agree. I strongly support 2.
>>
>> What I oppose vehemently is distinguishing between linear and nonlinear
>> by the order of the lhs/rhs arguments. As pointed out by Martin, the line
>>
>>    VariationalProblem(C, D)
>>
>> says nothing to the reader about the type of problem, and it has led to
>> confusion.
>>
>> The code
>>
>>    LinearVariationalProblem(C, D)
>>
>> or
>>
>>    NonlinearVariationalProblem(C, D)
>>
>> >> (class names subject to change) is clear, and the code can test for the
>> arity of C and D and throw an error if the order is mixed up.
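(A minimal sketch of such an arity check, assuming UFL's form introspection via
extract_arguments; the function name and error message are made up for
illustration, not an existing DOLFIN API.)

    from ufl.algorithms import extract_arguments

    def check_linear_problem(a, L):
        # A linear variational problem expects a bilinear form (two arguments)
        # and a linear form (one argument); anything else is a user error.
        if len(extract_arguments(a)) != 2 or len(extract_arguments(L)) != 1:
            raise TypeError("Expected (bilinear form, linear form); "
                            "the argument order may be mixed up")
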
>
> I agree with this. It's important that we can check input arguments.
>
> What are the votes in favor of either of these two options?
>
> 1. One class VariationalProblem for all (non)linear problems.
>
> 2. Two classes LinearProblem and NonlinearProblem.

I vote for 2, on the readability/debugging grounds that Martin and Garth
have put forward, and like Garth I don't think that one class should try
to do everything.

Kristian

> --
> Anders
>
>
>> >>>>> Second, the Jacobian could definitely be provided as an
>> >>>>> optional argument for those cases where it is not feasible/desirable
>> >>>>> for UFL to do the linearization. See below for a longer, more general
>> >>>>> rant.
>> >>>>>
>> >>>>>>> Any further thoughts on this? It's important we get this right.
>> >>>>>>>
>> >>>>>>> I can live with
>> >>>>>>>
>> >>>>>>>    LinearVariationalProblem
>> >>>>>>>    NonlinearVariationalProblem
>> >>>>>>>
>> >>>>>>> if everyone else thinks that's the best option,
>> >>>>>
>> >>>>> I don't think it is the best option.
>> >>>>>
>> >>>>>>> but don't really like
>> >>>>>>> it since it's very long.
>> >>>>>
>> >>>>> My reasons do not really relate to the length of the name, but rather
>> >>>>> that I do not see why we should make the difference between linear and
>> >>>>> nonlinear variational problems greater in the _highest level
>> >>>>> interface_ of DOLFIN. I think this just makes things more cumbersome
>> >>>>> (a) in terms of debugging and (b) in terms of explaining how to solve
>> >>>>> variational problems to FEniCS beginners.
>> >>>>>
>> >>>>
>> >>>> My experience is the complete opposite of this.
>> >>>
>> >>> I agree with Marie here. It's a bit weird when we do
>> >>>
>> >>>   u = TrialFunction(V)
>> >>>   ...
>> >>>   u = Function(V)
>> >>>   problem.solve(u) # or u = problem.solve()
>> >>>
>> >>
>> >> I agree that this is not elegant, but I think that it's a secondary
>> >> issue (but still something to improve).
>> >
>> > I don't think this is an issue we can fix in small increments. We need
>> > to find out what kind of interface we want, then implement it and then
>> > not touch it ever again.
>> >
>>
>> The above is part of the Python interface. If the C++ interface is
>> resolved, we can turn our attention to prettifying Python issues, like
>> the above.
>>
>> >>> It's almost as confusing as the V=V thing we had for a while.
>> >>>
>> >>>>> My typical pedagogical use case looks something like this (in Python):
>> >>>>> Say you have explained to someone how to solve stationary Stokes with
>> >>>>> Taylor-Hood, and then want to extend to Navier-Stokes. Then ideally
>> >>>>> all that should be necessary is to add the nonlinear term to the
>> >>>>> form(s). But no. In addition, you at least have to replace the
>> >>>>> TrialFunction with a Function (which typically leads to a longer
>> >>>>> discussion on what a TrialFunction represents and why that is used for
>> >>>>> linear problems when the same ansatz is made for the two in most text
>> >>>>> books), and then you have to call solve with the same Function instead
>> >>>>> of just returning it (which is also on my top 3 list of pitfalls). I
>> >>>>> don't see how adding
>> >>>>> LinearVariationalProblem/NonlinearVariationalProblem to the list makes
>> >>>>> it better.  The same (but the other way) goes when wanting to debug
>> >>>>> Navier-Stokes by reducing to Stokes.
>> >>>>>
>> >>>>> In essence, I've started treating all variational problems as
>> >>>>> nonlinear.
>> >>>>>
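(To make Marie's use case concrete: a sketch of the Stokes-to-Navier-Stokes
step with the current interface, showing the extra changes beyond adding the
convective term. The VariationalProblem call signatures are approximate, and
the F/J argument order is exactly the ambiguity debated in this thread.)

    from dolfin import *

    mesh = UnitSquare(16, 16)
    W = VectorFunctionSpace(mesh, "CG", 2) * FunctionSpace(mesh, "CG", 1)  # Taylor-Hood
    (v, q) = TestFunctions(W)
    f = Constant((0.0, 0.0))
    bcs = []  # boundary conditions omitted for brevity

    # Stokes (linear): the unknown is a TrialFunction and solve() returns it
    (u, p) = TrialFunctions(W)
    a = (inner(grad(u), grad(v)) - div(v)*p + q*div(u))*dx
    L = inner(f, v)*dx
    w = VariationalProblem(a, L, bcs).solve()

    # Navier-Stokes (nonlinear): the TrialFunction becomes a Function, the
    # forms become a residual and a Jacobian, and the same Function is passed
    # to solve() instead of being returned
    w = Function(W)
    (u, p) = split(w)
    F = (inner(grad(u)*u, v) + inner(grad(u), grad(v)) - div(v)*p + q*div(u))*dx \
        - inner(f, v)*dx
    J = derivative(F, w)
    VariationalProblem(F, J, bcs).solve(w)
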
>> >>>>
>> >>>> This might be fine for toy problems, but not for real problems. There is
>> >>>> overhead in a nonlinear solve to check for convergence, which is not
>> >>>> required if the eqn is linear.
>> >>>
>> >>> I don't see why this should be restricted to toy problems. Defining
>> >>> the Jacobian is always an option and then it would be just like today.
>> >>>
>> >>
>> >> Because if I'm solving a very large linear problem, I'm going to be
>> >> annoyed if the code insists on unnecessarily assembling the residual.
>> >
>> > Aren't all "real world problems" nonlinear? ;-)
>> >
>>
>> Fortunately no.
>>
>> >>> Perhaps it would be possible for UFL to check that a problem is linear
>> >>> (by differentiating twice and getting zero) and then we could remove
>> >>> the overhead of the extra residual evaluation.
>> >>>
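(For concreteness, one way such a linearity check could look, differentiating
twice with UFL as Anders suggests. It assumes that expand_derivatives drops
integrals whose integrand simplifies to zero; the function name is made up,
and this says nothing about whether the check is cheap or robust in practice.)

    from ufl import derivative
    from ufl.algorithms import expand_derivatives

    def appears_linear(F, u):
        # F is (at most) affine in u if its second derivative with
        # respect to u vanishes identically
        ddF = expand_derivatives(derivative(derivative(F, u), u))
        return len(ddF.integrals()) == 0
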
>> >>
>> >> Too much magic over simplicity for my liking. We also need to bear in
>> >> mind that we're talking about the C++ interface too. I think a better
>> >> starting point is the C++ interface. The Python interface will follow
>> >> naturally, and sugar can be added later.
>> >
>> > I agree with that. Say for a moment that we choose to implement the
>> > common (non)linear interface. Then what should the interface look
>> > like in C++?
>> >
>> > 1. Poisson::BilinearForm a(V, V);
>> >    Poisson::LinearForm L(V);
>> >
>> > 2. Poisson::Residual F(V);
>> >    Poisson::Jacobian J(V, V);
>> >
>> > 3. Poisson::VariationalProblem F(V, V);
>> >
>> >> What if the equation is linear, but I want to solve it as a nonlinear
>> >> equation? I do this all the time as a first step when testing a
>> >> nonlinear solver. I know we can always 'add another option' to handle
>> >> various cases, but it all adds complexity.
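(What Garth describes is already expressible by hand: given a bilinear form a
and a linear form L, the residual can be formed with action() and handed to
the nonlinear machinery. A minimal Poisson sketch; the VariationalProblem call
is approximate and the F/J order is again the contested point.)

    from dolfin import *

    mesh = UnitSquare(16, 16)
    V = FunctionSpace(mesh, "CG", 1)
    u, v = TrialFunction(V), TestFunction(V)
    f = Constant(1.0)
    bc = DirichletBC(V, 0.0, "on_boundary")

    a = inner(grad(u), grad(v))*dx   # bilinear form of a linear equation
    L = f*v*dx

    # Recast the linear equation as a residual and solve it with Newton:
    # one Newton step reproduces the linear solve, at the cost of the extra
    # residual assembly for the convergence check mentioned above
    u = Function(V)
    F = action(a, u) - L
    J = a                            # the Jacobian is just the bilinear form
    VariationalProblem(F, J, bc).solve(u)
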
>> >
>> > Isn't that an argument for having a common nonlinear solver for all
>> > problems?
>> >
>>
>> No. It's an argument that we shouldn't try to make one class do
>> everything, and that the code should be explicit.
>>
>> >> Another factor is that there is more to solving nonlinear equations than
>> >> vanilla versions of Newton's method. It would be nice in the future to
>> >> have a NonlinearFoo that does more than we do now, without increasing
>> >> the code complexity for a straightforward linear eqn solve.
>> >
>> > Perhaps, but one could also argue that this is yet another argument in
>> > favor of a single nonlinear solver. There is some code duplication
>> > already in the two linear/nonlinear solvers, setting options etc.
>> >
>>
>> Ditto what I wrote above. Trying to make one class do too much leads to
>> complications.
>
>

