
ffc team mailing list archive

Re: Tensors

 

On Mon, Sep 05, 2005 at 12:07:51AM +0200, Johan Jansson wrote:

> I think the tensor notation is converging to a natural choice of
> representation. I've tested the Numeric approach a bit more, and it's
> not actually necessary to have a zero and identity member in the FFC
> algebra, the values 0.0 and 1.0 seem to be fine (so it's possible to
> use the Numeric functions zeros() and identity() to construct zero and
> identity matrices). Some languages have a dynamic type system, but it
> seems Python goes even further than that, which is good for us in this
> case at least.
> 
> I can now write the elasticity form like this (element definitions
> etc. not included):
> 
> def grad(u):
>     return [ [u[i].dx(j) for j in range(d)] for i in range(d) ]
> 
> def epsilon(u):
>     return 0.5 * (grad(u) + transpose(grad(u)))
> 
> def E(e, lmbda, mu):
>     Ee = 2.0 * multiply(mu, e) + \
>          multiply(lmbda, multiply(trace(e), identity(3)))
> 
>     return Ee
> 
> sigma = E(epsilon(u), lmbda, mu)
> 
> a = vdot(sigma, epsilon(v)) * dx

This looks very good, but I'm not completely convinced... The problem
is that when you explicitly construct arrays and compute scalar
products manually, by actually summing up the terms rather than
working through an Index, FFC has no way of knowing that a series of
terms should be treated as a single scalar product.

Consider the following examples with Poisson written in two different
ways:

1. Using the standard index notation:

    a = u.dx(i)*v.dx(i)*dx

Generated code:

    real G0_0_0 = map.det*(map.g00*map.g00 + map.g01*map.g01);
    ...

    block[0] = 4.999999999999998e-01*G0_0_0 + 4.999999999999997e-01*G0_0_1
             + 4.999999999999997e-01*G0_1_0 + 4.999999999999996e-01*G0_1_1;

2. Writing out the scalar product explicitly:

    a = (u.dx(0)*v.dx(0) + u.dx(1)*v.dx(1))*dx

    real G0_0_0 = map.det*map.g00*map.g00;
    ...

    block[0] = 4.999999999999998e-01*G0_0_0 + 4.999999999999997e-01*G0_0_1
             + 4.999999999999997e-01*G0_1_0 + 4.999999999999996e-01*G0_1_1
             + 4.999999999999998e-01*G1_0_0 + 4.999999999999997e-01*G1_0_1
             + 4.999999999999997e-01*G1_1_0 + 4.999999999999996e-01*G1_1_1;

In the first case, FFC generates a tensor representation of the
element matrix as one tensor product. In the second case, FFC
generates two tensor products, which is potentially less efficient.
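The difference between the two representations can be sketched with a
toy numpy example (numpy standing in for Numeric, with made-up 2x2
values for the reference tensor A0 and geometry tensor G; these names
and numbers are illustrative, not FFC's actual data):

```python
import numpy as np

# Hypothetical 2x2 reference tensor and geometry tensor (made-up values).
A0 = np.array([[0.5, -0.5], [-0.5, 0.5]])
G = np.array([[2.0, 0.3], [0.3, 1.0]])

# Case 1: FFC sees one scalar product -> a single contraction A0 : G.
entry_single = np.einsum('ab,ab->', A0, G)

# Case 2: the sum is written out term by term, so FFC builds one
# reference tensor per term and contracts each against its own
# geometry tensor (here the split is simulated by halving A0).
A0_x, A0_y = A0 / 2.0, A0 / 2.0
G_x, G_y = G, G
entry_split = np.einsum('ab,ab->', A0_x, G_x) + np.einsum('ab,ab->', A0_y, G_y)

# Both give the same element matrix entry, but the second form stores
# and contracts twice the data.
print(entry_single, entry_split)
```

The generated code shown above reflects exactly this: one block of
G-terms in the first case, two blocks (G0 and G1) in the second.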

There are two ways to solve this: either force the user to use the
index notation, extending it with operators grad, rot, div etc., or
build functionality into FFC to automatically detect common factors.

> All that's left for the notation to be identical (as identical as
> ASCII can be) to written form is to move to operators instead of
> functions for multiply(), vdot() etc. I'm not convinced that's a good
> idea though, I prefer this notation, since it's unambiguous. A "*"
> operator can mean any kind of product. Since there are at least two
> products which make sense for matrices in this context (and probably
> more in other contexts), I don't think we have enough symbols. It
> always takes me a few minutes to figure out what the dot operator
> means in an article, perhaps I have finally found a remedy in this
> notation convention. It will lead to an excessive amount of
> parentheses though, but functional language programmers seem to be
> able to handle that just fine (i.e. Lisp).

This is one advantage of the index notation, since then everything is
spelled out in detail.

> It also doesn't seem necessary to explicitly construct a Numeric.array
> object using the array() function, the list representation works
> fine. It does seem necessary for the Numeric.array operators to work
> though, so perhaps it would be good to define grad() as:
> 
> def grad(u):
>     return array(( [ [u[i].dx(j) for j in range(d)] for i in range(d) ] ))
> 
> Then we can use the slicing operators (and other operators) from
> Numeric.array. For example, to extract the first 2 columns of grad(u),
> we can write:
> 
> grad(u)[:, 0:2]
> 
> Perhaps this can be generalized even further by simply defining a
> vec() function Anders mentioned before, and extending the concept to
> matrix() and tensor(), which would simply construct a Numeric.array
> from an FFC algebra object, i.e.:
> 
> i = Index()
> j = Index()
> 
> gradu = matrix(u[i].dx(j))
> 
> which would be a much cleaner fit with matrix-valued elements. I.e. a
> matrix-valued function "stress" could be manipulated in the same way:
> 
> i = Index()
> j = Index()
> 
> sigma = matrix(stress[i, j])
> 
> This would mean fewer special cases, which I think is almost always
> good. I still haven't grasped the Index concept fully though, so
> perhaps there are other better ways to accomplish this.

I'm still thinking about the proper way to do this. The approach you
suggest would work very well, allowing maximum flexibility for the
user to define new operators as you have done, but there might be a
penalty in the form of less optimal generated code.
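The slicing point from the quoted message can be illustrated with a
small numpy sketch (numpy being the successor to Numeric; the nested
list here is made-up data, not an actual grad(u)):

```python
import numpy as np

# A nested Python list supports only whole-row indexing; an expression
# like rows[:, 0:2] would raise a TypeError.
rows = [[11, 12, 13], [21, 22, 23], [31, 32, 33]]

# Wrapping it in an array enables the 2-D slicing used for
# grad(u)[:, 0:2] in the quoted message: all rows, first two columns.
g = np.array(rows)
first_two_cols = g[:, 0:2]
print(first_two_cols.shape)   # (3, 2)
```

This is why defining grad() to return array(...) rather than a plain
list of lists makes the slicing operators available.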

> If we need tensors of rank > 2, then we would have to replace
> Numeric.array with something else. Is this relevant? Is there anyone
> who would actually need that functionality?

Numeric.array supports tensors of arbitrary rank.
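For example (again with numpy standing in for Numeric), the zeros()
and identity() constructors from the quoted message generalize
directly to higher rank by passing a longer shape tuple:

```python
import numpy as np  # numpy standing in for the old Numeric module

Z = np.zeros((3, 3))      # rank-2 zero matrix
I = np.identity(3)        # rank-2 identity matrix
T = np.zeros((2, 3, 3))   # rank-3 tensor: two 3x3 matrices stacked
T[0] = I                  # assign a whole rank-2 slice at once
print(T.shape)            # (2, 3, 3) -- rank is not limited to 2
```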

/Anders


