dolfin team mailing list archive
Message #22411
[Bug 747297] Re: Python: Cache tensor types
** Patch added: "0005-Make-python-linear-algebra-faster-by-caching-tensor-.patch"
https://bugs.launchpad.net/bugs/747297/+attachment/1963818/+files/0005-Make-python-linear-algebra-faster-by-caching-tensor-.patch
--
You received this bug notification because you are a member of DOLFIN
Team, which is subscribed to DOLFIN.
https://bugs.launchpad.net/bugs/747297
Title:
Python: Cache tensor types
Status in DOLFIN:
New
Bug description:
In a matrix-multiplication-heavy Python workload, I see roughly 5-10% of the
time being spent in get_tensor_type(). The attached patch memoizes the result,
per type. It seems to work fine, but it should be sanity checked (is a
per-type result OK?).
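
For illustration only (this is not the attached patch): a minimal sketch of per-type memoization, assuming the result of DOLFIN's get_tensor_type() depends only on the concrete class of its argument. The names uncached_lookup and get_tensor_type_cached are hypothetical.

# Illustrative sketch: cache an expensive per-object lookup so it runs
# once per concrete Python type rather than once per call.
_tensor_type_cache = {}

def get_tensor_type_cached(tensor, uncached_lookup):
    # 'uncached_lookup' stands in for DOLFIN's get_tensor_type(). The cache
    # key is the concrete class, which is exactly the assumption the report
    # asks to sanity check ("is per-type result ok?").
    key = type(tensor)
    if key not in _tensor_type_cache:
        _tensor_type_cache[key] = uncached_lookup(tensor)
    return _tensor_type_cache[key]

With this scheme the underlying lookup runs once per tensor class (e.g. once for all matrices of a given backend type), which is safe only if two instances of the same class can never map to different tensor types.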