dolfin team mailing list archive

Re: SubSystemsManager mess

On 22/03/11 19:13, Anders Logg wrote:
> On Tue, Mar 22, 2011 at 06:29:23PM +0000, Garth N. Wells wrote:
>> Our 'automatic' initialisation of MPI and PETSc is a real mess at the
>> moment. It seemed to work before by good luck. The major issue is
>> controlling order in the singleton class SubSystemsManager. For example,
>> I see that the function SubSystemsManager::init_mpi is called, which sets
>> flags to take responsibility for MPI, and *after* this the
>> SubSystemsManager constructor is called, which resets those flags. The
>> end result is that MPI is not finalized.
>>
>> I don't see a robust option other than explicit 'initialise' (and
>> maybe 'finalise') function calls. Any other suggestions?
> 
> I think it should be possible to control the order. 

Unfortunately that's not straightforward; there's a lot written about this
problem (static initialisation and destruction order). One approach is to
use a static pointer and create the object dynamically, but that then
requires an explicit call to destroy the object. I've looked into this a
lot.
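
For concreteness, the static-pointer pattern looks something like this (a
rough sketch with illustrative names, not actual DOLFIN code):

    #include <cstdio>

    // Sketch of the static-pointer approach: the object is created
    // dynamically on first use, so construction order is under our
    // control, but destruction is no longer automatic.
    class Manager
    {
    public:
      static Manager& instance()
      {
        if (!_instance)
          _instance = new Manager();  // created on first use
        return *_instance;
      }

      static void destroy()
      {
        delete _instance;             // explicit call required, or the
        _instance = 0;                // destructor never runs
      }

    private:
      Manager()  { std::printf("Manager constructed\n"); }
      ~Manager() { std::printf("Manager destroyed\n"); }

      static Manager* _instance;
    };

    Manager* Manager::_instance = 0;

Without the destroy() call the destructor never runs, so we'd be no
better off than we are now.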

> I would very much
> like to avoid explicit init functions.
> 

Ultimately, I think this will be unavoidable. Explicit calls are good
because it's crucial to control the order of destruction when 'systems'
(like MPI) are running in the background.
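
Roughly what I have in mind (a sketch only; the names and placement are
illustrative, not a proposal for the actual interface):

    #include <mpi.h>

    // Sketch of an explicit initialise/finalise pair. The point is
    // that finalisation becomes a deterministic, user-visible call
    // rather than something left to static destruction order.
    namespace dolfin
    {
      static bool control_mpi = false;  // did we initialise MPI?

      void init(int& argc, char**& argv)
      {
        int initialized = 0;
        MPI_Initialized(&initialized);
        if (!initialized)
        {
          MPI_Init(&argc, &argv);
          control_mpi = true;           // take responsibility for MPI
        }
      }

      void finalize()
      {
        if (control_mpi)
        {
          MPI_Finalize();               // we know exactly when this runs
          control_mpi = false;
        }
      }
    }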

> Can you create a simple test that goes wrong and maybe I can look at
> it (but not tonight).
>

Just run anything in parallel (-np 1 is enough). Put a write statement
in SubSystemsManager::init_mpi and SubSystemsManager::SubSystemsManager.
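
If it helps, here is a self-contained sketch of the ordering problem
(hypothetical names, boiled down from what I described above; not the
actual DOLFIN source):

    #include <iostream>

    // init_mpi() runs during static initialisation of another object,
    // *before* the singleton's constructor, which then resets the flag.
    class SubSystems
    {
    public:
      static void init_mpi()
      {
        std::cout << "init_mpi: control_mpi = true" << std::endl;
        manager.control_mpi = true;
      }

      SubSystems() : control_mpi(false)
      {
        std::cout << "constructor: control_mpi reset to false" << std::endl;
      }

      ~SubSystems()
      {
        if (control_mpi)
          std::cout << "destructor: would finalize MPI" << std::endl;
        else
          std::cout << "destructor: MPI left unfinalized" << std::endl;
      }

      bool control_mpi;
      static SubSystems manager;
    };

    // A static object (think: another translation unit) that uses the
    // subsystem during its own construction.
    struct EarlyUser
    {
      EarlyUser() { SubSystems::init_mpi(); }
    };
    static EarlyUser early;

    // Constructed *after* 'early', so the constructor clobbers the
    // flag that init_mpi() has already set.
    SubSystems SubSystems::manager;

    int main()
    {
      return 0;
    }

Within one file the initialisation order is just the definition order;
across translation units it's unspecified, which is why this "seemed to
work by good luck".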

Feel free to try sorting it out. It needs to be fixed quickly, since
DOLFIN is currently broken in parallel.

Garth


> --
> Anders
> 
>> Garth


