
dolfin team mailing list archive

Re: [Bug 747297] Re: Python: Cache tensor types

 

I just realized: if it's cached *per object*, and my other idea about vector
re-use goes in as well, then most of the benefit will still be there.
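
Something like this is what I mean by per-object caching (a quick sketch,
untested; the _tensor_type attribute is invented here and is not part of
DOLFIN, only get_tensor_type() is real):

    # Sketch: cache the result of the slow type lookup on the instance
    # itself instead of in a shared per-type table.
    def get_tensor_type_cached(tensor):
        try:
            return tensor._tensor_type          # hypothetical attribute
        except AttributeError:
            tensor._tensor_type = get_tensor_type(tensor)  # existing lookup
            return tensor._tensor_type

Since the cache lives on the instance, it stays correct no matter which
backend the object comes from; the cost is one extra attribute per tensor.
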
On 2 Apr 2011 at 04:25, "Johan Hake" <747297@xxxxxxxxxxxxxxxxxx> wrote:
> On Friday April 1 2011 14:12:00 Joachim Haga wrote:
>> Aha, not so simple then. I don't quite get the multiple-backend use
>> case, but if it's supported, then it's supported.
>
> Well, it is not prohibited, and it is probably not used by anyone. The
> point is that Python is Python: anything that is not prohibited is
> possible, so we need to make sure it is not possible to multiply an
> EpetraMatrix with a PETScVector.
>
>> I didn't quite understand your suggestion. Do you mean to make
>> down_cast() a method on Matrix et al instead of a global method? That
>> sounds nice...
>
> Yes, something like that. But I am not sure we can get around a check
> each time a Vector/Matrix is created, and then we are back to square
> one, I guess.
>
>> Anyhow: I'm away for six weeks starting tomorrow, and I don't know how
>> much I'll be able to communicate while away. Feel free to leave this bug
>> (and others; I don't know if I'll have time to report more now)
>> unresolved. Unless you want to fix them, of course :)
>
> Have a good vacation!
>
> Johan
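
PS (for anyone reading along): the mixing Johan is worried about is e.g.
multiplying an EpetraMatrix with a PETScVector. A guard for that could look
roughly like this; assert_same_backend() is hypothetical, written only to
illustrate the check that any caching must not break:

    # Hypothetical guard, for illustration only (not DOLFIN code):
    # refuse to combine tensors from different linear algebra backends.
    def assert_same_backend(A, x):
        def backend(t):
            # Derive the backend from the class name, e.g.
            # "PETScMatrix" -> "PETSc", "EpetraVector" -> "Epetra".
            name = type(t).__name__
            for kind in ("Matrix", "Vector"):
                if name.endswith(kind):
                    return name[:-len(kind)]
            return name
        if backend(A) != backend(x):
            raise TypeError("cannot combine %s with %s: different backends"
                            % (type(A).__name__, type(x).__name__))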

-- 
You received this bug notification because you are a member of DOLFIN
Team, which is subscribed to DOLFIN.
https://bugs.launchpad.net/bugs/747297

Title:
  Python: Cache tensor types

Status in DOLFIN:
  New

Bug description:
  In a matrix-multiplication-heavy Python workload, I see something like
  5-10% of the time being spent in get_tensor_type(). The attached patch
  memoizes the result per type. It seems to work fine, but it should be
  sanity-checked (is a per-type result OK?).
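
  For reference, "memoizes the result per type" amounts to something like
  the following sketch (an illustration, not the attached patch itself):

      # Sketch of per-type memoization: key the cached result on the
      # concrete tensor class (illustration only, not the attached patch).
      _tensor_type_cache = {}

      def get_tensor_type_memoized(tensor):
          key = type(tensor)
          if key not in _tensor_type_cache:
              _tensor_type_cache[key] = get_tensor_type(tensor)
          return _tensor_type_cache[key]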


