Anders Logg wrote:
> On Fri, Jan 04, 2008 at 08:12:48PM +0000, Garth N. Wells wrote:
>>
>> Anders Logg wrote:
>>> On Fri, Jan 04, 2008 at 06:39:39PM +0000, Garth N. Wells wrote:
>>>> Anders Logg wrote:
>>>>> On Fri, Jan 04, 2008 at 06:25:11PM +0000, Garth N. Wells wrote:
>>>>>> Anders Logg wrote:
>>>>>>> On Fri, Jan 04, 2008 at 06:12:32PM +0000, Garth N. Wells wrote:
>>>>>>>> Anders Logg wrote:
>>>>>>>>> On Fri, Jan 04, 2008 at 05:50:53PM +0000, Garth N. Wells wrote:
>>>>>>>>>> Anders Logg wrote:
>>>>>>>>>>> On Fri, Jan 04, 2008 at 10:25:35AM -0600, Matthew Knepley wrote:
>>>>>>>>>>>> On Jan 4, 2008 10:22 AM, Garth N. Wells <gnw20@xxxxxxxxx> wrote:
>>>>>>>>>>>>> We have a problem at the moment when using PETSc related to
>>>>>>>>>>>>> conflicts between dolfin::MPIManager and dolfin::PETScManager
>>>>>>>>>>>>> as to who should initialise and finalise MPI. The difficulty
>>>>>>>>>>>>> is that we can't control the order in which MPIManager and
>>>>>>>>>>>>> PETScManager are destroyed.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Given that we will probably have more 'manager' classes in
>>>>>>>>>>>>> the future (e.g. for Trilinos), what is the best approach?
>>>>>>>>>>>>> Some possibilities are one 'Manager' class that does the lot,
>>>>>>>>>>>>> or a SingletonManager which can control the order in which
>>>>>>>>>>>>> singleton manager classes are destroyed. Ideas?
>>>>>>>>>>>> When using multiple packages with MPI, you should always have
>>>>>>>>>>>> a single MPI manager class. If MPI is already initialized when
>>>>>>>>>>>> PETSc is initialized, it won't mess with MPI (and won't
>>>>>>>>>>>> finalize it either).
>>>>>>>>>>>>
>>>>>>>>>>>> Matt
>>>>>>>>>>> Good. If it's always the case that PETSc does not finalize MPI
>>>>>>>>>>> when it has not initialized it, then maybe we just need to do
>>>>>>>>>>> the same?
>>>>>>>>>>>
>>>>>>>>>>> As we have implemented it, MPIManager checks if someone else
>>>>>>>>>>> (maybe PETSc, maybe someone else) has already initialized MPI
>>>>>>>>>>> and in that case does not initialize it. Then it should also
>>>>>>>>>>> assume that someone else will finalize it.
>>>>>>>>>>>
>>>>>>>>>>> We can just add a bool member initialized_here and then do
>>>>>>>>>>>
>>>>>>>>>>>   if (initialized_here)
>>>>>>>>>>>     MPIManager::finalize();
>>>>>>>>>>>
>>>>>>>>>>> in the destructor of MPIManager.
>>>>>>>>>>>
>>>>>>>>>>> Would that help?
>>>>>>>>>>>
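
For concreteness, a minimal sketch of the guard proposed above. It
assumes MPIManager is a singleton destroyed at program exit; apart from
the initialized_here member named in the thread, the details are
hypothetical rather than DOLFIN's actual code:

#include <mpi.h>

class MPIManager
{
public:

  static void init()
  {
    int initialized = 0;
    MPI_Initialized(&initialized);
    if (!initialized)
    {
      // No one else has initialized MPI, so we do it and remember
      // that finalization is our responsibility
      MPI_Init(0, 0);
      instance().initialized_here = true;
    }
  }

  ~MPIManager()
  {
    // Only finalize MPI if this class was the one that initialized it
    if (initialized_here)
      MPI_Finalize();
  }

private:

  MPIManager() : initialized_here(false) {}

  static MPIManager& instance()
  {
    static MPIManager manager;
    return manager;
  }

  bool initialized_here;
};

As the reply below points out, the guard alone does not solve the
ordering problem: this singleton's destructor may still run before
PETSc has been finalized.
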
>>>>>>>>>> Unfortunately not.
>>>>>>>>>>
>>>>>>>>>> The problem is that MPIManager might finalize MPI before PETSc
>>>>>>>>>> has been finalised. If MPI is initialised before PETSc, then
>>>>>>>>>> PETSc should be finalised before MPI is finalised. The
>>>>>>>>>> difficulty is that we have no control over the order in which
>>>>>>>>>> MPIManager and PETScManager are destroyed.
>>>>>>>>> ok.
>>>>>>>>>
>>>>>>>>>> I'm thinking of a singleton class DolfinManager which is
>>>>>>>>>> responsible for the creation and destruction of various managers
>>>>>>>>>> in the appropriate order.
>>>>>>>>>>
>>>>>>>>>> Garth
>>>>>>>>> Perhaps it would be better to have a single class that takes care
>>>>>>>>> of all initializations (rather than having a class that takes
>>>>>>>>> care of manager classes) to keep things simple?
>>>>>>>>>
>>>>>>>> ok.
>>>>>>>>
>>>>>>>>> We could put a class named Init (for example) in src/kernel/main/
>>>>>>>>> with some static functions:
>>>>>>>>>
>>>>>>>>> static void init(int argc, char* argv[]);
>>>>>>>>>
>>>>>>>>> static void initPETSc();
>>>>>>>>> static void initPETSc(int argc, char* argv[]);
>>>>>>>>>
>>>>>>>>> static void initMPI();
>>>>>>>>>
>>>>>>>>> We can then remove init.cpp, init.h and also PETScManager.
>>>>>>>>>
>>>>>>>> ok.
>>>>>>>>
>>>>>>>>> MPIManager can be renamed to MPI and just contain MPI utility
>>>>>>>>> functions (like everything in MPIManager now except
>>>>>>>>> init/finalize).
>>>>>>>>>
>>>>>>>> What about calling it DolfinManager as it won't be strictly for
>>>>>>>> MPI? Without MPI, we still need to initialise PETSc.
>>>>>>>>
>>>>>>>> Garth
>>>>>>> The thing I suggest calling MPI is strictly for MPI (the things
>>>>>>> currently in MPIManager except init/finalize).
>>>>>>>
>>>>>> Do you still propose having a class MPI?
>>>>> Yes, and it contains the following things:
>>>>>
>>>>> /// Return process number
>>>>> static uint processNumber();
>>>>>
>>>>> /// Return number of processes
>>>>> static uint numProcesses();
>>>>>
>>>>> /// Determine whether we should broadcast (based on current
>>>>> /// parallel policy)
>>>>> static bool broadcast();
>>>>>
>>>>> /// Determine whether we should receive (based on current
>>>>> /// parallel policy)
>>>>> static bool receive();
>>>>>
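
As a sketch, the first two of these could be thin wrappers over the MPI
C API. This assumes MPI has already been initialized (e.g. via the init
logic discussed above); the broadcast/receive policy logic is left out,
and the free-function form here is illustrative, not DOLFIN's actual
code:

#include <mpi.h>

typedef unsigned int uint;

/// Return process number (rank in MPI_COMM_WORLD)
uint processNumber()
{
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  return static_cast<uint>(rank);
}

/// Return number of processes (size of MPI_COMM_WORLD)
uint numProcesses()
{
  int size = 1;
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  return static_cast<uint>(size);
}
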
>>>>>>> I agree the class that manages initialization should be called
>>>>>>> something neutral. I'm not very fond of "DolfinManager" since (1)
>>>>>>> maybe it should then be DOLFINManager (which is maybe not very
>>>>>>> nice) and (2) it is not something that manages DOLFIN.
>>>>>>>
>>>>>> Suggestions? SubSystemManager?
>>>>> Sounds good.
>>>>>
>>>>> For simplicity, we could also have a class named PETSc with a static
>>>>> init() function that would just call SubSystemManager. Since we need
>>>>> to call this in all PETSc data structures, it's convenient if we can
>>>>> write
>>>>>
>>>>> PETSc::init();
>>>>>
>>>>> rather than
>>>>>
>>>>> SubSystemManager::initPETSc();
>>>>>
>>>>> Similarly, we can have an init() function in the class MPI that
>>>>> calls SubSystemManager::initMPI().
>>>>>
>>>>> So three classes:
>>>>>
>>>>> SubSystemManager: singleton manager of all subsystems with some
>>>>> logic for the order of initialization/finalization
>>>>>
>>>>> MPI: MPI convenience class
>>>>>
>>>>> PETSc: PETSc convenience class
>>>>>
>>>>>
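
A sketch of how the two convenience classes might delegate to the
manager. The SubSystemManager interface is the one assumed in the
discussion above; the class bodies are hypothetical:

// Central manager owning the initialization/finalization ordering logic
class SubSystemManager
{
public:
  static void initMPI();
  static void initPETSc();
};

// Convenience class: PETSc data structures can then simply call
// PETSc::init() instead of SubSystemManager::initPETSc()
class PETSc
{
public:
  static void init() { SubSystemManager::initPETSc(); }
};

// Convenience class for MPI, delegating in the same way
class MPI
{
public:
  static void init() { SubSystemManager::initMPI(); }
};
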
>>>> OK. I'll add something. We could just let
>>>>
>>>> SubSystemManager::init();
>>>>
>>>> take care of all the initialisation.
>>>>
>>>> Garth
>>> ok, but shouldn't we be able to initialize one but not the other (of
>>> MPI and PETSc)?
>>>
>> Yes. We have compiler flags that tell us what is needed.
>>
>> Garth
>
> I mean that even if one has both MPI and PETSc installed, one might
> want to use MPI (for the mesh) without initializing PETSc, and one
> might want to use PETSc without initializing MPI (if possible).
>
Yes, it is desirable to be able to do this. In practice, at the moment
this is pretty much determined by the compiler flags. Once we have a
SubSystemManager class, we will be pretty flexible in what we can do.
Garth
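
For reference, a minimal sketch of how a SubSystemManager singleton
might encode the ordering constraint discussed in this thread (PETSc
finalized before MPI). The member names and the use of
PetscInitializeNoArguments() are assumptions, not DOLFIN's actual
implementation:

#include <mpi.h>
#include <petsc.h>

class SubSystemManager
{
public:

  static void initMPI()
  {
    int initialized = 0;
    MPI_Initialized(&initialized);
    if (!initialized)
    {
      MPI_Init(0, 0);
      instance().mpi_initialized_here = true;
    }
  }

  static void initPETSc()
  {
    if (instance().petsc_initialized_here)
      return;
    // If MPI is not yet initialized, PETSc will initialize it and
    // take responsibility for finalizing it
    PetscInitializeNoArguments();
    instance().petsc_initialized_here = true;
  }

private:

  SubSystemManager()
    : mpi_initialized_here(false), petsc_initialized_here(false) {}

  ~SubSystemManager()
  {
    // Finalize in reverse order of initialization: PETSc first, since
    // it may still need MPI while shutting down, then MPI (and only if
    // we initialized it ourselves)
    if (petsc_initialized_here)
      PetscFinalize();
    if (mpi_initialized_here)
    {
      int finalized = 0;
      MPI_Finalized(&finalized);
      if (!finalized)
        MPI_Finalize();
    }
  }

  static SubSystemManager& instance()
  {
    static SubSystemManager manager;
    return manager;
  }

  bool mpi_initialized_here;
  bool petsc_initialized_here;
};

Because both flags live in a single singleton, the relative order of the
two finalizations is fixed inside one destructor, which is exactly what
the two separate manager classes could not guarantee.
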