
dolfin team mailing list archive

[Bug 747297] Re: Python: Cache tensor types

 

The dict would then look something like:

  { <class 'dolfin.cpp.Vector'>: <class 'dolfin.cpp.PETScVector'>,
    <class 'dolfin.cpp.Matrix'>: <class 'dolfin.cpp.PETScMatrix'>}

When changing to the Epetra backend during a run, this will break: the cached entries still point at the PETSc types.

Maybe we should try to attach the down-cast type directly to
GenericFoo/Foo. Then we could use this to down_cast each tensor instead
of trying to look it up each time.
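A minimal sketch of that idea, with illustrative class and function names (GenericTensor, register_backend_type, down_cast are stand-ins, not DOLFIN's actual API): the backend records its concrete type on the generic class, so down-casting reads a class attribute rather than consulting a module-level dict that can go stale.

```python
class GenericTensor(object):
    """Stand-in for a generic tensor base class."""
    _downcast_type = None  # set by the active backend


class Vector(GenericTensor):
    pass


class PETScVector(Vector):
    pass


def register_backend_type(generic_cls, concrete_cls):
    # Called when a backend is selected: attach the concrete
    # (down-cast) type directly to the generic class.
    generic_cls._downcast_type = concrete_cls


def down_cast(tensor):
    # Read the type attached to the class; no dict lookup needed.
    cls = type(tensor)._downcast_type
    if cls is None:
        raise TypeError("no down-cast type registered for %s"
                        % type(tensor).__name__)
    # A real implementation would rewrap `tensor` as `cls`;
    # here we just return the resolved type.
    return cls


register_backend_type(Vector, PETScVector)
```

Switching backends then only requires calling register_backend_type again, which overwrites the attribute in place instead of leaving stale cache entries behind.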

Johan

-- 
You received this bug notification because you are a member of DOLFIN
Team, which is subscribed to DOLFIN.
https://bugs.launchpad.net/bugs/747297

Title:
  Python: Cache tensor types

Status in DOLFIN:
  New

Bug description:
  In a matrix-multiplication-heavy Python workload, I see something like
  5-10% of the time being spent in get_tensor_type(). The attached patch
  memoizes the result, per type. It seems to work fine, but should be
  sanity-checked (is a per-type result OK?).


