Re: PETSC-MPI

 

On Tue, Jun 03, 2008 at 02:35:23PM +0100, Nuno David Lopes wrote:
> Thank you very much.
> This last e-mail made the subject much clearer to me.
> In fact I was just calling ./app
> I thought PETSc would do all the parallel work from inside ./app.
> 
> Still, with
> $mpirun -np 2 ./app
> (on a 2-core PC)
> I get the same results in top: Cpu0=100%, Cpu1=0%.

What app are you running? Are you running a simple sequential DOLFIN
program and expecting DOLFIN/PETSc to make it parallel?

At this point, parallel assembly is still experimental (but I hope we
can make it default for v0.9). There is a demo in 

 demo/fem/assembly/

which does parallel assembly.
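
For example, after building that demo you should be able to start it with
something like the following (the exact executable name may differ in your
version):

  cd demo/fem/assembly
  mpirun -np 2 ./demo

and then both cores should show up as busy in top during assembly.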

-- 
Anders


> By the way, I get the following message from DOLFIN:
> _____________________________________________
> Initializing PETSc (ignoring command-line arguments)
> _____________________________________________
> Could this be the reason PETSc isn't running in parallel?
> 
> Or could it still be a problem with the DOLFIN compilation?
> 
> P.S.:
> Installing MPICH with apt-get, I get the following programs:
> mpicc
> mpiCC
> mpicxx
> mpif77
> mpif90
> and 
> mpirun 
> 
>  
> On Monday 02 June 2008, Johannes Ring wrote:
> > On Mon, June 2, 2008 Nuno David Lopes wrote:
> > > Finally I could install it:
> > > _________________________________________________________________________
> > > petsc:
> > > $./config/configure.py --with-cc=gcc --with-fc=gfortran\\
> > > --download-umfpack=1 --download-hypre=1 --with-mpi-dir=/usr/lib/mpich
> > > --with-shared=1
> > > dolfin:
> > > $scons enablePetsc=1 --withPetscDir=$PETSC_DIR --enableMpi=1
> >
> > Note that we don't use dashes when specifying options to scons. The above
> > line should therefore be like this:
> >
> > $scons enablePetsc=1 withPetscDir=$PETSC_DIR enableMpi=1
> >
> > > The --download-mpich=1 option in PETSc didn't work, because the DOLFIN
> > > scons build still didn't find the MPICH stuff.
> >
> > Then the problem is probably that we are unable to locate an MPI C++
> > compiler on your system. As I said earlier, we only look for mpicxx,
> > mpic++, and mpiCC. Using --download-mpich=1 when configuring PETSc, you
> > will only get mpicc, which is normally just an MPI C compiler. As far as I
> > can remember, the mpicc compiler you get should also work as a C++
> > compiler. You can try this by setting the CXX environment variable to the
> > complete path of the mpicc compiler that was downloaded and installed by
> > PETSc. An alternative is to configure PETSc with the
> > --with-clanguage=C++ option. This should produce an MPI C++ compiler when
> > using --download-mpich=1.
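> > 
> > For example (just a sketch, with a placeholder for the actual path on your
> > system), something like
> > 
> >   export CXX=/path/to/petsc-installed/mpicc
> >   scons enablePetsc=1 withPetscDir=$PETSC_DIR enableMpi=1
> > 
> > or reconfiguring PETSc with
> > 
> >   ./config/configure.py --with-clanguage=C++ --download-mpich=1
> > 
> > and rebuilding should do it.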
> >
> > > And I really did change the PATHs to point to the downloaded MPICH dir.
> > > It would probably be good to have a scons option "withMpiDir=" like for
> > > the other packages.
> > >
> > > I've also tried with LAM and OpenMPI...
> > >
> > > But I still can't see PETSc solving the system in parallel in my codes.
> >
> > Did you start your codes with mpirun or mpiexec? If you just do, e.g.,
> >
> >   ./app
> >
> > when starting your application, you won't be running in parallel. Try with
> >
> >   mpirun -np 2 ./app
> >
> > or
> >
> >   mpiexec -np 2 ./app
> >
> > This should run two copies of your program in parallel.
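> > 
> > As a quick sanity check that MPI itself works, you could also try
> > something like
> > 
> >   mpirun -np 2 hostname
> > 
> > which should print your machine's hostname twice.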
> >
> > Johannes
> >
> > > (I really don't know much about parallel algorithms... so sorry if I'm
> > > saying something that doesn't make sense)
> > > but what I get when I run top while the program is solving a system is
> > > still
> > > --------------------
> > > Cpu0 :   0.0%
> > > Cpu1 : 100.0%
> > > --------------------
> > > It seems that PETSc isn't using the two cores at the same time... so it
> > > isn't really parallel, and there isn't really any advantage to it.
> > > Is this a known issue? Like an MPICH bug or something?
> > > Should I try again with OpenMPI, as was suggested?
> > >
> > > Thanks
> > >
> > > On Monday 02 June 2008, Jed Brown wrote:
> > >> On Mon 2008-06-02 08:29, Johannes Ring wrote:
> > >> > On Fri, May 30, 2008 Nuno David Lopes wrote:
> > >> > > Ok, I have MPICH on my system, and PETSc (without hypre) compiled
> > >> > > with --download-mpich=1; it all works, my code compiles and runs,
> > >> > > etc. Everything is running with the exception that MPI isn't
> > >> > > working...
> > >> > >
> > >> > > ..................scons: Reading SConscript files ...
> > >> > > Using options from scons/options.cache
> > >> > > MPI not found (might not work if PETSc uses MPI).
> > >> > > ........................................................................
> > >> >
> > >> > The reason you get this warning is that we are unable to locate mpirun
> > >> > or an MPI C++ compiler (we look for mpic++, mpicxx, and mpiCC) on your
> > >> > system. This might be a bit limiting, so I added mpiexec and orterun as
> > >> > alternatives for mpirun. Please try again with the latest from the hg
> > >> > repository.
> > >>
> > >> When PETSc installs MPICH, it doesn't automatically put it in your path,
> > >> so you will have to specify the path explicitly when you configure
> > >> Dolfin.
> > >>
> > >> In terms of Dolfin configuration, it's not really a good idea to just use
> > >> any random MPI implementation that may be in your path.  You can get the
> > >> correct implementation from PETSc (CC, CXX, MPIEXEC in
> > >> $PETSC_DIR/bmake/$PETSC_ARCH/petscconf).  This would be especially nice
> > >> if you have multiple PETSc builds with different MPI implementations
> > >> (e.g. OpenMPI with a debugging build for development/testing and vendor
> > >> MPI with an optimized build for production runs).
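> > >>
> > >> For example, something along these lines should show which compilers and
> > >> launcher a given PETSc build was configured with:
> > >>
> > >>   grep -E '^(CC|CXX|MPIEXEC) *=' $PETSC_DIR/bmake/$PETSC_ARCH/petscconf
> > >>
> > >> and you can then point the DOLFIN build at exactly those.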
> > >>
> > >> Jed
> > >
> > >
> > > e-mail: ndl@xxxxxxxxxxxxxx     (FCUL/CMAF)
> > >         nlopes@xxxxxxxxxxxxxxx (ISEL)
> > > http://ptmat.ptmat.fc.ul.pt/%7Endl/
