dolfin team mailing list archive, Message #20927
Re: PyTrilinos question

To: Bill Spotz <wfspotz@xxxxxxxxxx>
From: "Garth N. Wells" <gnw20@xxxxxxxxx>
Date: Sun, 23 Jan 2011 17:02:56 +0000
Cc: DOLFIN Mailing List <dolfin@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <4CBA093F.2080908@cam.ac.uk>
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101208 Thunderbird/3.1.7
Hi Bill,
It's been a while since we exchanged emails, but we're running into some
serious issues with PyTrilinos finalising MPI prematurely.
On 16/10/10 21:21, Garth N. Wells wrote:
> Hi Bill,
>
> On 15/10/10 20:12, Bill Spotz wrote:
>> Hi Garth,
>>
>> Your understanding is correct. I import the atexit module and
>> register a call to MPI::Finalize(). The logic is smart enough to
>> check whether MPI::Finalize() has already been called. It sounds like
>> what I need to do is perform this registration only if Epetra is the
>> package that actually calls MPI::Init().
>>
>
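That would be exactly the behaviour we need. For concreteness, the
ownership pattern we have in mind looks something like the following,
sketched in Python with mpi4py standing in purely as a stand-in for the
wrapper's C++ logic (the rc switches and the finalize_if_needed helper
are my own illustration for a minimal, self-contained demo, not
PyTrilinos internals):

    import atexit

    import mpi4py
    mpi4py.rc.initialize = False   # don't let mpi4py initialise MPI on import
    mpi4py.rc.finalize = False     # and don't let it finalise MPI at exit
    from mpi4py import MPI

    def finalize_if_needed():
        # The "has Finalize already been called?" check you describe.
        if not MPI.Is_finalized():
            MPI.Finalize()

    if not MPI.Is_initialized():
        MPI.Init()
        # We called Init, so we own finalisation. If MPI was already up
        # (e.g. initialised by DOLFIN), its lifetime is left alone.
        atexit.register(finalize_if_needed)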
Any chance that you could look at calling MPI::Finalize only if Epetra
is responsible for MPI? Our problem is that the atexit handlers are
called before all our objects are destroyed. If we have any
MPI-dependent objects floating around at that point, very bad things
happen.
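To make the ordering problem concrete, here is a tiny self-contained
illustration (no MPI involved; the print in __del__ stands in for an
MPI call in one of our C++ destructors):

    import atexit

    class MPIDependent(object):
        def __del__(self):
            # In DOLFIN this is a destructor that makes MPI calls; by the
            # time it runs, the atexit handler below has already fired.
            print("destructor of MPI-dependent object")

    obj = MPIDependent()   # module-level object, torn down after atexit

    def fake_finalize():
        print("atexit handler (stand-in for MPI::Finalize)")

    atexit.register(fake_finalize)

    # Typical output on interpreter exit:
    #   atexit handler (stand-in for MPI::Finalize)
    #   destructor of MPI-dependent object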
> That would help us. Then we would just have to make sure that we import
> our module before Epetra.
>
>> As far as a workaround, you might try importing Epetra first. If
>> that doesn't work, you could try importing the atexit module and using
>> inspection to determine what functions will be called upon exiting
>> Python, and possibly delete the call to Epetra::Finalize(). I don't
>> know if that would work, but it might be worth a try.
>>
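On import order: atexit runs handlers in reverse order of registration,
so importing Epetra first does at least change when its handler fires.
A minimal illustration:

    import atexit

    def first():
        print("registered first, runs last")

    def second():
        print("registered second, runs first")

    atexit.register(first)
    atexit.register(second)

    # Output on interpreter exit:
    #   registered second, runs first
    #   registered first, runs last

Unfortunately, as I explain below, no handler ordering helps here: the
destructors of module-level objects run after all the atexit handlers.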
It's possible in Python 3 to deregister atexit functions
(atexit.unregister), but not in earlier versions.
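For the record, something along these lines might work, although the
Python 2 variant pokes at a private CPython detail, and the __module__
filter is only my guess at how the handler could be identified
(untested):

    import atexit

    # Python 3: a public API exists, but it needs a reference to the
    # exact function object that PyTrilinos registered:
    #   atexit.unregister(epetra_finalize_handler)   # hypothetical name

    # Python 2: no public API. CPython keeps handlers in the private list
    # atexit._exithandlers as (func, args, kwargs) tuples, so one can try
    # filtering out anything registered by PyTrilinos:
    atexit._exithandlers = [
        (func, args, kwargs)
        for (func, args, kwargs) in atexit._exithandlers
        if not (getattr(func, "__module__", "") or "").startswith("PyTrilinos")
    ]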
Garth
>
> Unfortunately the import order doesn't help. I've dug around, and the
> problem is that functions registered with atexit are called before the
> destructors of our SWIG-wrapped C++ objects (irrespective of the module
> import order). Epetra::Finalize() is being called, and then the
> destructors of our MPI-dependent objects are called, which leads to an
> error because MPI has been finalised before these objects have been
> cleaned up.
>
> Garth
>
>
>> -Bill
>>
>> On Oct 15, 2010, at 11:45 AM, Garth N. Wells wrote:
>>
>>> Hi Bill,
>>>
>>> Are you the person to contact regarding PyTrilinos? If not, I'd
>>> be happy if you could point me to a mailing list. Just in case, my
>>> question is below.
>>>
>>> I'm having some trouble with, say,
>>>
>>> from PyTrilinos import Epetra
>>>
>>> when I have already initialised MPI (via some object). I suspect
>>> that PyTrilinos is calling MPI::Finalize() while I still have some
>>> MPI-dependent objects that haven't been cleaned up yet. Does
>>> PyTrilinos check whether or not it is responsible for initialising
>>> and finalising MPI?
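>>>
>>> Concretely, the failure mode I see is along these lines (a sketch;
>>> UnitSquare is just an example of an object whose construction
>>> initialises MPI in my setup):
>>>
>>>     import dolfin
>>>     mesh = dolfin.UnitSquare(2, 2)  # MPI initialised via this object
>>>     from PyTrilinos import Epetra   # registers MPI finalisation at exit
>>>     # On interpreter exit the atexit handler finalises MPI first, and
>>>     # then the mesh's C++ destructor runs against a finalised MPI.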
>>>
>>> Regards, Garth
>>>
>>> --
>>> Dr Garth N Wells
>>> Department of Engineering
>>> University of Cambridge
>>> Trumpington Street
>>> Cambridge CB2 1PZ
>>> United Kingdom
>>>
>>> tel. +44 1223 3 32743
>>> e-mail gnw20@xxxxxxxxx
>>> http://www.eng.cam.ac.uk/~gnw20/
>>
>> ** Bill Spotz **
>> ** Sandia National Laboratories    Voice: (505)845-0170 **
>> ** P.O. Box 5800                   Fax:   (505)284-0154 **
>> ** Albuquerque, NM 87185-0370      Email: wfspotz@xxxxxxxxxx **