fiat team mailing list archive
Message #00010
Re: entity_ids for DiscontinuousVectorLagrange
On Thu, May 12, 2005 at 09:42:46AM -0500, Robert C. Kirby wrote:
> On May 12, 2005, at 9:36 AM, Anders Logg wrote:
>
> >On Thu, May 12, 2005 at 09:06:40AM -0500, Robert C. Kirby wrote:
> >
> >>Perhaps it would be good to make it consistent between the two, just
> >>in case we need it.
> >
> >ok. Tell me if and when you have pushed this into FIAT and I can check
> >that I get correct results for the dof map.
> >
>
> I am working at home, and I seem to need to go beat bombadil -- I can't
> ssh into it from here.
> Perhaps searching for higher order linear dependencies has sent it to
> an early grave :)
I also have problems reaching bombadil. I can wait.
Maybe we could aim at releasing new versions of FIAT, FFC and DOLFIN
early next week? (Or tomorrow if we have it working by then.)
The missing piece right now is to do the projections/interpolations.
When that works, everything should be in place for general order
continuous Lagrange in 2D, q <= 2 in 3D (we still need to decide on a
convention for the orientation of edges in 3D), and general order
discontinuous Lagrange in 2D and 3D. Crouzeix-Raviart should also work.
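The subject of this thread is the entity_ids layout for
DiscontinuousVectorLagrange, and the earlier exchange was about keeping that
layout consistent so the dof map comes out right. As a rough, minimal sketch
of the data structure in question (not FIAT's actual code; the function name,
argument names and the {dim: {entity: vertices}} topology layout are all
assumptions made for this example), a discontinuous element attaches every
degree of freedom to the cell interior:

    # Illustrative sketch only -- not FIAT's implementation. For a
    # discontinuous element every dof sits on the cell interior, so all
    # lower-dimensional entities get empty dof lists.
    def discontinuous_vector_entity_ids(topology, scalar_dofs, vector_dim):
        """Return {dimension: {entity: [dof indices]}} with all dofs on the cell."""
        cell_dim = max(topology)
        entity_ids = {dim: {entity: [] for entity in topology[dim]}
                      for dim in topology}
        entity_ids[cell_dim][0] = list(range(scalar_dofs * vector_dim))
        return entity_ids

    # Example: vector-valued P1 on a triangle (3 scalar dofs, 2 components).
    triangle = {0: {0: (0,), 1: (1,), 2: (2,)},
                1: {0: (1, 2), 1: (0, 2), 2: (0, 1)},
                2: {0: (0, 1, 2)}}
    print(discontinuous_vector_entity_ids(triangle, 3, 2))
    # -> {0: {0: [], 1: [], 2: []}, 1: {0: [], 1: [], 2: []},
    #     2: {0: [0, 1, 2, 3, 4, 5]}}

Since no dofs are attached to vertices, edges or faces, nothing is shared
between neighbouring cells, which is what makes the element discontinuous and
keeps the dof map (and any consistency check against FFC) simple.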
> >I agree. In addition to FFC being able to generate potentially faster
> >run-time code with such a structure, it would also give faster compile
> >times. When I did this manually before (checking for functions that
> >are zero), the speedup was a factor of 2 or 3, but at that time the
> >evaluation of integrals was much slower, so the speedup might not be
> >as good now.
> >
>
> I'm thinking much more about compile time. First, you might be able
> to zero out entire blocks, not just particular functions (as for the
> vector Laplacian). Second, you only need to look at scalar functions
> on each block, since you know only one component of the vectors will
> be nonzero. Third, if you break the problem into p different blocks,
> these are trivially parallel. You could use Python MPI bindings (e.g.
> pyMPI by Pat Miller; Matt also has some bindings that I think are
> wretched to install) to farm out the tensor computations for each
> block to different processors. You could run this on fledermaus/ficus
> at UC if you want. For 3D elasticity, remember there are nine blocks
> to compute. If all the blocks were equally costly to compute (in fact
> the diagonal is worse for elasticity), you could get a speedup of nine
> (really it will be more like three or four since the diagonal blocks
> dominate).
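The block decomposition described above is easy to prototype. Here is a
minimal sketch, not FFC code: compute_block() is a hypothetical placeholder
for the per-block reference tensor computation, and the standard-library
multiprocessing module stands in for the pyMPI bindings mentioned above. The
point is only that the d x d scalar blocks of a vector-valued form are
independent and can be farmed out to separate processes:

    # Minimal sketch: the nine (i, j) blocks of a 3D vector-valued form are
    # independent, so they can be computed in parallel. compute_block() is a
    # placeholder for the real per-block tensor computation.
    from itertools import product
    from multiprocessing import Pool

    def compute_block(block):
        """Placeholder: compute the reference tensor for scalar block (i, j)."""
        i, j = block
        return (i, j), "tensor for block (%d, %d)" % (i, j)

    if __name__ == "__main__":
        dim = 3                              # 3D elasticity: 3 x 3 = 9 blocks
        blocks = list(product(range(dim), repeat=2))
        with Pool() as pool:                 # blocks are trivially parallel
            results = dict(pool.map(compute_block, blocks))
        print(results[(0, 0)])

Since the diagonal blocks dominate, as noted above, a dynamic work queue would
balance the load better than a static one-block-per-process split, which is
consistent with the estimate of a speedup closer to three or four than nine.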
Compiling the forms in parallel will be interesting to try, but I feel
it's not at the top of my todo list right now.
/Anders