Re: PETSC-MPI
On Wed, Jun 04, 2008 at 11:18:12AM +0100, Nuno David Lopes wrote:
> On Tuesday 03 June 2008, Anders Logg wrote:
> > On Tue, Jun 03, 2008 at 02:35:23PM +0100, Nuno David Lopes wrote:
> > > Thank you very much.
> > > This last e-mail made the subject clearer to me.
> > > In fact I was calling ./app directly;
> > > I thought PETSc would do all the parallel work from inside
> > > ./app.
> > >
> > > Still, with
> > > $ mpirun -np 2 ./app
> > > (on a 2-core PC)
> > > I get the same results in top: Cpu0=100%, Cpu1=0%
> >
> > What app are you running? Are you running a simple sequential DOLFIN
> > program and expecting DOLFIN/PETSc to make it parallel?
>
> I was only expecting that when the KrylovSolver is called inside the DOLFIN
> application, that part would run in parallel, not the rest of the code.
> Aren't the PETSc linear system solving algorithms parallel?
> Isn't that the essential point of PETSc?
Yes, but don't blame PETSc. For this to work, PETSc needs to get the
correct instructions (from DOLFIN), which it currently does not.
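To make this concrete, here is a minimal stand-alone PETSc sketch (plain
PETSc, not DOLFIN code, and written against today's PETSc call signatures,
which differ slightly from the 2008-era API). The solve runs in parallel
only because the matrix and vectors are created on PETSC_COMM_WORLD with
their rows distributed over the processes; a sequential assembly hands
PETSc a serial matrix, and then there is nothing to split:

    #include <petscksp.h>

    int main(int argc, char** argv)
    {
      PetscInitialize(&argc, &argv, NULL, NULL);

      // Create a matrix on PETSC_COMM_WORLD; PETSC_DECIDE lets PETSc
      // distribute the 1000 rows over the MPI processes
      Mat A;
      MatCreate(PETSC_COMM_WORLD, &A);
      MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 1000, 1000);
      MatSetFromOptions(A);
      MatSetUp(A);

      // Each process fills only the rows it owns (a 1D Laplacian here)
      PetscInt start, end;
      MatGetOwnershipRange(A, &start, &end);
      for (PetscInt i = start; i < end; i++)
      {
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
        if (i > 0)   MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < 999) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
      }
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

      // Distributed right-hand side and solution vectors
      Vec b, x;
      VecCreate(PETSC_COMM_WORLD, &b);
      VecSetSizes(b, PETSC_DECIDE, 1000);
      VecSetFromOptions(b);
      VecSet(b, 1.0);
      VecDuplicate(b, &x);

      // The Krylov solver works over the same communicator, so each
      // process handles its own block of rows during the iterations
      KSP ksp;
      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A);
      KSPSetFromOptions(ksp);
      KSPSolve(ksp, b, x);

      KSPDestroy(&ksp);
      MatDestroy(&A);
      VecDestroy(&b);
      VecDestroy(&x);
      PetscFinalize();
      return 0;
    }

With that structure, mpirun -np 2 ./app really does give each core half
of the rows; with a serial matrix, rank 0 does all the work and top shows
exactly what you saw.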
> As I said before, I really don't know much about parallel computation
> algorithms, but I was expecting some speed improvement if we
> use a parallel solver.
> (I'm at the point where parallel assembly makes sense to me, but parallel
> linear system solving isn't yet clear to me.)
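It may help to spell out why a Krylov solve can run in parallel at all:
each iteration needs only matrix-vector products and inner products, and
both decompose over row-distributed data. A toy MPI sketch of the
inner-product half (purely illustrative, not DOLFIN code):

    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv)
    {
      MPI_Init(&argc, &argv);

      // Each process owns one block of the global vectors x and y
      const int n_local = 1000;
      std::vector<double> x(n_local, 1.0), y(n_local, 2.0);

      // Local part of the inner product...
      double local = 0.0;
      for (int i = 0; i < n_local; i++)
        local += x[i]*y[i];

      // ...combined into the global value with a single reduction;
      // this is the only communication an inner product needs
      double global = 0.0;
      MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                    MPI_COMM_WORLD);

      MPI_Finalize();
      return 0;
    }

The matrix-vector product works the same way, except that each process
must also exchange the vector entries its off-diagonal columns touch.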
>
> OK, I know that PETSc/Hypre provides a good set of preconditioners, and that
> is an advantage over the uBLAS backend.
> But if we set the preconditioners aside, I get PETSc::(gmres,ilu) slower than
> uBLAS::(gmres,ilu).
> I haven't tested other preconditioners yet. I tried amg, but on my 2 GB PC
> there wasn't enough memory for the 566000x566000 system I'm testing.
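A side note on such comparisons: if the application reaches PETSc's
KSPSetFromOptions() (an assumption on my part; plain DOLFIN may not
expose this), the method and preconditioner can be switched at run time
instead of recompiling, e.g.

    ./app -ksp_type gmres -pc_type ilu
    mpirun -np 2 ./app -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu
    mpirun -np 2 ./app -ksp_type gmres -pc_type hypre -pc_hypre_type boomeramg

Note that PETSc's own ILU is serial-only; in parallel the closest
equivalent (and the parallel default) is block Jacobi with ILU on each
block, so serial and parallel gmres/ilu timings measure slightly
different preconditioners.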
>
>
> >
> > At this point, parallel assembly is still experimental (but I hope we
> > can make it default for v0.9). There is a demo in
> >
> > demo/fem/assembly/
> >
> > which does parallel assembly.
>
> And for this we need SCOTCH, right?
>
>
> By the way, I've modified/simplified
> VTKFile.cpp/h and created a RAWFile.cpp/h for a raw format that is
> readable, for instance, in the XD3D software (it reads one file for the mesh
> and another for the solution).
> I think it's a good option if we are working with one or a few meshes,
> compared with the number of solution files.
> It is much lighter than the standard VTK and XML formats, but it also carries
> less information: we only save the solutions.
> Is it of any interest? I'm testing it.
Submit it as an hg bundle (see the manual).
New output formats are always welcome, as long as they are maintained
and in active use (or they will be removed at some point).
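For reference, presumably the new format would plug into the usual File
interface, with the ".raw" extension dispatched to the new RAWFile class
the same way ".pvd" is dispatched to VTKFile. A minimal sketch under that
assumption (the extension and file names are guesses):

    #include <dolfin.h>
    using namespace dolfin;

    int main()
    {
      UnitSquare mesh(16, 16);

      // Hypothetical: File recognizes ".raw" and hands the output
      // to RAWFile; one file for the mesh, one for each solution
      File mesh_file("mesh.raw");
      mesh_file << mesh;

      return 0;
    }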
--
Anders