
dolfin team mailing list archive

Re: Slow PetscInitialize?


On Sun, Oct 14, 2012 at 4:08 PM, Anders Logg <logg@xxxxxxxxx> wrote:
> On Sun, Oct 14, 2012 at 02:44:09PM +0100, Garth N. Wells wrote:
>> You cannot
>>
>>   (a) rely on machines that dynamically manage the CPU clock speed
>> and spin down the hard disk, especially laptops, to generate
>> reliable (short) timings.
>
> It's usually quite OK for that. I use cpufreq-set to set the highest
> clock rate and cpufreq-selector to set the CPU 'governor' to
> 'performance'.
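>
> Concretely, something like this (a rough sketch; the core index and
> the set of available governors are machine-specific, so check
> cpufreq-info first):
>
>     # list supported frequencies and governors for core 0
>     cpufreq-info -c 0
>     # pin core 0 to the 'performance' governor (repeat per core)
>     sudo cpufreq-set -c 0 -g performance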
>
>>   (b) get a tuned installation with Dorsal
>>
>> My times (not using Dorsal) are:
>>
>>     Init MPI                     |    0.018783    0.018783      1
>>     Init PETSc                   |  0.00043392  0.00043392      1
>>     Init dof vector              |  0.00064111  0.00064111      1
>
> Somewhat better now after rebuilding against a newly built petsc-3.3-p3:
>
>   Init PETSc                  |      0.026619    0.026619     1
>
> But still 2 orders of magnitude worse than for you and Johannes.
>
> I don't see why my build needs to be worse than a Dorsal build. The
> only difference is that the Dorsal script types out the build
> commands instead of me typing them manually, so it should just be a
> matter of which options Dorsal is using.
>
> Dorsal is using these options:
>
> CONFOPTS="COPTFLAGS=-O2 --with-debugging=0 --with-shared-libraries=1
>           --with-clanguage=cxx --with-c-support=1"
>
> for external_pkg in umfpack hypre mumps scalapack blacs ptscotch \
>                     scotch metis parmetis; do
>     CONFOPTS="${CONFOPTS} --download-${external_pkg}=1"
> done
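>
> For context, these end up in a PETSc configure invocation roughly
> like the following (a sketch; PETSC_ARCH and the directories are
> placeholder names):
>
>     cd $PETSC_DIR
>     ./configure PETSC_ARCH=linux-cxx-opt ${CONFOPTS}
>     make PETSC_DIR=$PETSC_DIR PETSC_ARCH=linux-cxx-opt all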
>
> Any of these that should be avoided? Any that should be added for a
> better PETSc build?
>

There is also MPI to consider, since PETSc makes calls into it. I
haven't used the Ubuntu Open MPI package for a long time because I
kept finding bugs in it (and it doesn't support threads), so now I
just build my own MPI rather than using the distribution package.
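
For what it's worth, a from-source Open MPI build is roughly along
these lines (the prefix and the thread option are just examples; check
./configure --help for your version):

    ./configure --prefix=$HOME/local/openmpi --enable-mpi-thread-multiple
    make -j4 && make install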

Garth

> --
> Anders

