dolfin team mailing list archive: Message #22417
[Bug 747297] Re: Python: Cache tensor types
Aha, not so simple then. I don't quite get the multiple-backend use
case, but if it's supported then it's supported.
I didn't quite understand your suggestion. Do you mean to make
down_cast() a method on Matrix et al. instead of a free function? That
sounds nice...
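Just to check that we mean the same thing, roughly this (the signatures
are from memory, not checked against the current interface):

    A_backend = down_cast(A)    # now: free function, dispatches on backend
    A_backend = A.down_cast()   # suggested: method on Matrix et al.,
                                # the object knows its own backend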
Anyhow: I'm away for six weeks, starting tomorrow, and don't know how
much I'll be able to communicate while away. Feel free to leave this bug
(and others; I don't know if I'll have time to report more now)
unresolved. Unless you want to fix them, of course :)
--
You received this bug notification because you are a member of DOLFIN
Team, which is subscribed to DOLFIN.
https://bugs.launchpad.net/bugs/747297
Title:
Python: Cache tensor types
Status in DOLFIN:
New
Bug description:
In a matrix-multiplication-heavy Python workload, I see something like
5-10% of the time being spent in get_tensor_type(). The attached patch
memoizes the result per type. It seems to work fine, but should be
sanity-checked (is a per-type result OK?).
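The attached patch itself is not inlined here, but the idea is a
straightforward per-type cache. A minimal sketch, assuming an existing
get_tensor_type(tensor) helper (only that name comes from the report;
the rest is illustrative):

    _tensor_type_cache = {}

    def get_tensor_type_cached(tensor):
        # Keyed on the concrete Python type. This assumes every instance
        # of a given type maps to the same tensor type, which is exactly
        # the open question raised above.
        key = type(tensor)
        if key not in _tensor_type_cache:
            _tensor_type_cache[key] = get_tensor_type(tensor)
        return _tensor_type_cache[key]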