
dolfin team mailing list archive

Re: PETSC-MPI

 

Finally I could install it:
_________________________________________________________________________
petsc:
$ ./config/configure.py --with-cc=gcc --with-fc=gfortran \
  --download-umfpack=1 --download-hypre=1 --with-mpi-dir=/usr/lib/mpich --with-shared=1
dolfin:
$ scons enablePetsc=1 --withPetscDir=$PETSC_DIR --enableMpi=1
________________________________________________________________________
 
The --download-mpich=1 option in PETSc didn't work because DOLFIN's scons
still didn't find the MPICH stuff, even though I really did change the PATHs to
point at the downloaded mpich dir.
It would probably be good to have a scons option like "withMpiDir=", as for the
other packages.
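
(For the record, what I tried was roughly the following; <path-to-petsc-built-mpich>
is just a placeholder for wherever configure.py actually unpacked mpich, I'm not
sure of the exact directory:)
_________________________________________________________________________
$ export PATH=<path-to-petsc-built-mpich>/bin:$PATH
$ which mpic++ mpicxx mpirun    # the compilers/launcher DOLFIN's scons looks for
$ scons enablePetsc=1 --withPetscDir=$PETSC_DIR --enableMpi=1
_________________________________________________________________________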

I've also tried with LAM or OpenMPI...

But I still can't see PETSc solving the system in parallel in my codes.
(I really don't know much about parallel algorithms... so sorry if I'm saying
something that doesn't make sense.)
What I get when I run top while the program is solving a system is still
--------------------
Cpu0 :   0.0%
Cpu1 : 100.0%
--------------------
It seems that PETSc isn't using the two cores at the same time, so it isn't
really parallel and there isn't really any advantage to it.
Is this a known issue, like an mpich bug or something?
Should I try again with OpenMPI as was suggested?
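
(Or is it just that I have to launch the program through mpirun for it to use
both cores? Something like the following, where ./demo is only a placeholder for
my executable:)
_________________________________________________________________________
# started directly, an MPI program runs as a single process;
# mpirun/mpiexec is what starts the extra processes
$ mpirun -np 2 ./demo
_________________________________________________________________________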

Thanks
 


On Monday 02 June 2008, Jed Brown wrote:
> On Mon 2008-06-02 08:29, Johannes Ring wrote:
> > On Fri, May 30, 2008 Nuno David Lopes wrote:
> > > Ok, I have mpich on my system, and PETSc (without hypre) compiled
> > > with --download-mpich=1. It all works: my code compiles and runs,
> > > etc. Everything is running, with the exception that MPI isn't
> > > working...
> > >
> > > ...
> > > scons: Reading SConscript files ...
> > > Using options from scons/options.cache
> > > MPI not found (might not work if PETSc uses MPI).
> > > ...
> >
> > The reason you get this warning is that we are unable to locate mpirun or
> > an MPI C++ compiler (we look for mpic++, mpicxx, and mpiCC) on your
> > system. This might be a bit limiting, so I added mpiexec and orterun as
> > alternatives for mpirun. Please try again with the latest from the hg
> > repository.
>
> When PETSc installs mpich, it doesn't automatically put it in your path so
> you will have to specify the path explicitly when you configure Dolfin.
>
> In terms of Dolfin configuration, it's not really a good idea to just use
> any random MPI implementation that may be in your path.  You can get the
> correct implementation from PETSc (CC,CXX,MPIEXEC in
> $PETSC_DIR/bmake/$PETSC_ARCH/petscconf).  This would be especially nice if
> you have multiple PETSc builds with different MPI implementations (e.g.
> OpenMPI with a debugging build for development/testing and vendor MPI with an
> optimized build for production runs).
>
> Jed
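
(If I understand this correctly, I could pull those values out of petscconf with
something like the line below; I'm assuming the file uses makefile-style
"NAME = value" lines:)
_________________________________________________________________________
$ grep -E '^(CC|CXX|MPIEXEC) *=' $PETSC_DIR/bmake/$PETSC_ARCH/petscconf
_________________________________________________________________________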



-- 
Nuno David Lopes

e-mail: ndl@xxxxxxxxxxxxxx       (FCUL/CMAF)
        nlopes@xxxxxxxxxxxxxxx   (ISEL)
http://ptmat.ptmat.fc.ul.pt/%7Endl/

