Message #21083
Re: [Question #143195]: How do I express local condensation in Dolfin?
I did something like this a while ago and attach the code.
Here, I use the Trilinos backend to handle the linear algebra at the
global level. You could instead perform the static condensation at the
element level, and that may be more efficient.
Anyway, the code is attached.
Kent
On 28 January 2011 16:55, David Ham <question143195@xxxxxxxxxxxxxxxxxxxxx> wrote:
> New question #143195 on DOLFIN:
> https://answers.launchpad.net/dolfin/+question/143195
>
>
> Suppose I'm solving the linear shallow water equations with discontinuous
> velocity and continuous height:
>
> V = VectorFunctionSpace(mesh, 'DG', 1, dim=2)  # Velocity space
>
> H = FunctionSpace(mesh, 'CG', 2) # Height space
>
> W=V*H # Mixed space of both.
>
> (v, q)=TestFunctions(W)
> (u, h)=TrialFunctions(W)
>
> M=inner(v,u)*dx+q*h*dx
>
> # Divergence term.
> Ct=-inner(u,grad(q))*dx+inner(avg(u),jump(q,n))*dS
> # Pressure gradient operator
> C=-g*Depth*adjoint(Ct)
>
> # Coriolis term
> F=params["f"]*inner(v,as_vector([-u[1],u[0]]))*dx
>
> If I then apply backward Euler timestepping, I get a system whose block
> structure is:
>
> /  M_u-F   C   \ / u_new \   / rhs_u \
> |              | |       | = |       |
> \   Ct    M_h  / \ h_new /   \ rhs_h /
>
> This system is (a) big and (b) non-symmetric. However, the standard trick
> at this stage is to use static condensation to turn this into a discrete
> wave equation which is much smaller and is SPD:
>
> (-Ct(M_u-F)^(-1)C + M_h) h_new = rhs_h - Ct(M_u-F)^(-1)rhs_u
>
> The reason this is affordable is that M_u-F contains no derivatives and
> therefore no facet integrals; since the velocity space is also
> discontinuous, M_u-F is block diagonal with one block per element. This
> means that inversion commutes with assembly, so one can assemble the
> inverse matrix directly by inverting each local tensor and then
> conducting the usual global assembly. (M_u-F)^(-1) has the same sparsity
> as M_u-F.
>
> So, the question is, how do I get Dolfin to assemble (M_u-F)^(-1) rather
> than M_u-F?
>
> (The next question will be, can I multiply sparse matrices as above in
> Dolfin, but one thing at a time!)
>
> Many thanks,
>
> David
>
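For the element-level variant, something along these lines should work.
This is only an untested sketch, not what my attached script does: it
assumes a newer DOLFIN than the one in this thread (assemble_local was
added later), plus numpy and scipy, and the Coriolis parameter below is
just a placeholder constant.

from dolfin import *
import numpy as np
import scipy.sparse as sp

mesh = UnitSquareMesh(8, 8)
V = VectorFunctionSpace(mesh, "DG", 1, dim=2)
u = TrialFunction(V)
v = TestFunction(V)
f_cor = Constant(1.0)   # placeholder Coriolis parameter

# M_u - F: cell integrals only, and V is discontinuous, so the operator
# is block diagonal with one dense block per cell
a = inner(v, u)*dx - f_cor*inner(v, as_vector([-u[1], u[0]]))*dx

dofmap = V.dofmap()
rows, cols, vals = [], [], []
for cell in cells(mesh):
    dofs = dofmap.cell_dofs(cell.index())
    A_local = assemble_local(a, cell)   # the cell tensor of M_u - F
    A_inv = np.linalg.inv(A_local)      # invert before assembling
    for i, gi in enumerate(dofs):
        for j, gj in enumerate(dofs):
            rows.append(gi)
            cols.append(gj)
            vals.append(A_inv[i, j])

# Because no dof is shared between cells, the assembled blocks do not
# overlap, so this really is (M_u - F)^(-1)
n = V.dim()
Minv = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

The resulting scipy matrix can then be multiplied with the assembled C
and Ct blocks to form the condensed wave-equation operator.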
from dolfin import *
parameters["linear_algebra_backend"] = "Epetra"
import MLPrec, Krylov, MinRes, SaddlePrec
from PyTrilinos import Epetra, EpetraExt, AztecOO, TriUtils, ML
from arguments import get_command_line_arguments
cl_args = get_command_line_arguments()
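# Mesh resolution; may be overridden on the command line with N=<int>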
N = 2
if cl_args.has_key("N"):
    N = int(cl_args["N"])
mesh = UnitCube(N,N,N)
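# Lowest-order Nedelec edge elements for u, continuous Lagrange for p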
V = FunctionSpace(mesh, "N1curl", 1)
Q = FunctionSpace(mesh, "CG", 1)
mixed = V + Q
u,p = TrialFunctions(mixed)
v,q = TestFunctions(mixed)
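# Mixed bilinear form: curl-curl plus mass in the u-u block, grad
# coupling between u and p, and a (negative) mass term in the p-p block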
a = dot(u,v)*dx + dot(curl(v), curl(u))*dx \
    + dot(grad(q),u)*dx \
    + dot(grad(p),v)*dx \
    - p*q*dx
A = assemble(a, form_compiler_options = {"representation": "tensor"})
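# Assemble the blocks separately as well, so that the p-p block FF can
# be lumped and its inverse folded into the u-u block below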
u,p = TrialFunction(V), TrialFunction(Q)
v,q = TestFunction(V), TestFunction(Q)
aa = dot(u, v)*dx + dot(curl(v), curl(u))*dx
bb = dot(grad(p), v)*dx
ff = q*p*dx
AA = assemble(aa, form_compiler_options = {"representation": "tensor"})
BB = assemble(bb, form_compiler_options = {"representation": "tensor"})
BF = assemble(bb, form_compiler_options = {"representation": "tensor"})
FF = assemble(ff)
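# Unwrap the DOLFIN matrices to get at the underlying Epetra objects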
AA_epetra = down_cast(AA).mat()
BB_epetra = down_cast(BB).mat()
BF_epetra = down_cast(BF).mat()
FF_epetra = down_cast(FF).mat()
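# Lump FF: InvRowSums fills ff_vec with the inverse row sums of FF, and
# RightScale scales the columns of BF by ff_vec, i.e. BF <- B F^(-1)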
ff_vec = Epetra.Vector(FF_epetra.RowMap())
FF_epetra.InvRowSums(ff_vec)
BF_epetra.RightScale(ff_vec)
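# Sparse matrix-matrix product: CC = BB * BF^T = B F^(-1) B^T, the
# (lumped) condensed operator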
CC = Epetra.FECrsMatrix(Epetra.Copy, AA_epetra.RowMap(), 100)
err = EpetraExt.Multiply(BB_epetra, False, BF_epetra, True, CC)
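# Wrap CC as a DOLFIN matrix and add the u-u block on top of it, giving
# EE = A_uu + B F^(-1) B^T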
DD = EpetraMatrix(CC)
EE = assemble(aa, DD, reset_sparsity=False, add_values=True)
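# Block preconditioner from the condensed operator and the lumped p-p
# block; tested below with a random initial guess and zero right-hand
# side to estimate the condition number of the preconditioned system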
B = SaddlePrec.SaddlePrec(EE,FF)
x = Vector(A.size(0))
from numpy import random
x[:] = random.random(A.size(0))
b = Vector(A.size(0))
b[:] = 0
x, e, iter = Krylov.CGN_BABA(B, A, x, b, 10e-10, True, 2000)
print "Number of iterations ", iter
print "Sqrt of condition number of BABA ", sqrt(e[len(e)-1]/e[0])
u,p = TrialFunctions(mixed)
v,q = TestFunctions(mixed)
b = dot(u,v)*dx + dot(curl(v), curl(u))*dx \
    + dot(grad(p),grad(q))*dx
B = assemble(b)
P = MLPrec.MLPreconditioner(B)
x[:] = random.random(A.size(0))
b = Vector(A.size(0))
b[:] = 0
x, e, iter = Krylov.CGN_BABA(P, A, x, b, 10e-10, True, 200)
print "Number of iterations ", iter
print "Sqrt of condition number of BABA ", sqrt(e[len(e)-1]/e[0])