dolfin team mailing list archive
Message #15618
Re: [HG DOLFIN] A try to suppress some more annoying mpi valgrind complaints
On Tuesday 22 September 2009 18:18:59 Anders Logg wrote:
> On Tue, Sep 22, 2009 at 03:11:23PM +0200, Johan Hake wrote:
> > On Tuesday 22 September 2009 15:06:22 Garth N. Wells wrote:
> > > Johan Hake wrote:
> > > > On Tuesday 22 September 2009 14:57:16 DOLFIN wrote:
> > > >> One or more new changesets pushed to the primary dolfin repository.
> > > >> A short summary of the last three changesets is included below.
> > > >>
> > > >> changeset: 7134:18c7e560897b
> > > >> tag: tip
> > > >> user: "Johan Hake <hake@xxxxxxxxx>"
> > > >> date: Tue Sep 22 14:57:13 2009 +0200
> > > >> files: test/memory/dolfin_valgrind.supp
> > > >> description:
> > > >> A try to suppress some more annoying mpi valgrind complaints
> > > >
> > > > In the memtest on the jaunty-amd64 buildbot there are some complaints
> > > > related to GTS and PETSc:
> > > >
> > > > GTS:
> > > >
> > > > 72 bytes in 1 blocks are still reachable in loss record 40 of 67
> > > >    at 0x4C278AE: malloc (vg_replace_malloc.c:207)
> > > >    by 0x9077A12: g_malloc (in /usr/lib/libglib-2.0.so.0.2000.1)
> > > >    by 0x908DB07: g_slice_alloc (in /usr/lib/libglib-2.0.so.0.2000.1)
> > > >    by 0x9061812: g_hash_table_new_full (in /usr/lib/libglib-2.0.so.0.2000.1)
> > > >    by 0x87D0E82: gts_object_class_new (in /usr/lib/libgts-0.7.so.5.0.1)
> > > >    by 0x87D0F3E: gts_object_class (in /usr/lib/libgts-0.7.so.5.0.1)
> > > >    by 0x87DCD0D: gts_bbox_class (in /usr/lib/libgts-0.7.so.5.0.1)
> > > >    by 0x5043E1E: dolfin::GTSInterface::create_box(dolfin::Cell const&) (GTSInterface.cpp:186)
> > > >    by 0x5043FFD: dolfin::GTSInterface::buildCellTree() (GTSInterface.cpp:198)
> > > >    by 0x5044134: dolfin::GTSInterface::GTSInterface(dolfin::Mesh const&) (GTSInterface.cpp:34)
> > > >    by 0x502DDA0: dolfin::IntersectionDetector::IntersectionDetector(dolfin::Mesh const&) (IntersectionDetector.cpp:30)
> > > >    by 0x4F67F67: dolfin::FunctionSpace::eval(double*, double const*, dolfin::Function const&) const (FunctionSpace.cpp:122)
> > > >
> > > > +++
> > > >
> > > > PETSc:
> > > >
> > > > 96 bytes in 2 blocks are indirectly lost in loss record 55 of 67
> > > >    at 0x4C278AE: malloc (vg_replace_malloc.c:207)
> > > >    by 0xC6A5059: (within /usr/lib/openmpi/lib/libmpi.so.0.0.0)
> > > >    by 0xC6D1503: PMPI_Attr_put (in /usr/lib/openmpi/lib/libmpi.so.0.0.0)
> > > >    by 0x76C6F1D: PetscCommDuplicate(ompi_communicator_t*, ompi_communicator_t**, int*) (tagm.c:233)
> > > >    by 0x764612F: PetscHeaderCreate_Private(_p_PetscObject*, int, int, char const*, ompi_communicator_t*, int (*)(_p_PetscObject*), int (*)(_p_PetscObject*, _p_PetscViewer*)) (inherit.c:44)
> > > >    by 0x7318595: VecCreate(ompi_communicator_t*, _p_Vec**) (veccreate.c:39)
> > > >    by 0x7379278: VecCreateSeq(ompi_communicator_t*, int, _p_Vec**) (vseqcr.c:38)
> > > >    by 0x4FE7B82: dolfin::PETScVector::init(unsigned int, unsigned int, std::string) (PETScVector.cpp:520)
> > > >    by 0x4FE884B: dolfin::PETScVector::PETScVector(std::string) (PETScVector.cpp:50)
> > > >    by 0x5016EEE: dolfin::PETScFactory::create_vector() const (PETScFactory.cpp:27)
> > > >    by 0x4F4C2E8: dolfin::VariationalProblem::solve_linear(dolfin::Function&) (Vector.h:32)
> > > >    by 0x438CB7: Eval::testArbitraryEval() (test.cpp:73)
> > > >
> > > > +++
> > > >
> > > > Are these real leaks or false positives? Can someone with more
> > > > knowledge of PETSc and/or GTS take a look at them?
> > >
> > > I believe that the first is a libgts issue. The second I don't know;
> > > I had a look but I don't see any problems.
> > >
> > > The suppression files are of limited use because they depend on the
> > > library versions - the valgrind messages I get on my desktop are
> > > slightly different from the valgrind messages on the buildbots. This
> > > makes sorting out leaks clumsy.
> >
> > For sure!
> >
> > However, there are ways of expressing more general suppressions with
> > the * wildcard. We have tried to do that, but as you say, it is
> > tedious and certainly not foolproof.
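> >
> > A sketch of what such a generalized entry can look like (the
> > suppression name and the exact frames here are illustrative, not
> > copied from our actual file):
> >
> >   {
> >      openmpi_attr_put_leak
> >      Memcheck:Leak
> >      fun:malloc
> >      obj:*/libmpi.so*
> >      fun:PMPI_Attr_put
> >      ...
> >   }
> >
> > The obj: and fun: patterns accept * wildcards, and the trailing ...
> > matches any number of remaining frames, so one entry can cover
> > different library versions and install paths.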
> >
> > Should we run the memtest on only one buildbot? The problem is then
> > all the different configurations we have, which we would no longer
> > test.
>
> I think that would be enough. On the other hand, the buildbot seems to
> be green now on 3 of 5 platforms.
>
> What's up with the other two?
The way OpenMPI works produces a lot of false positives when running
valgrind on MPI programs. These need to be suppressed by calling valgrind
with a suppression file. Such files are platform dependent and tedious to
compile.
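
As an illustration, the memtest then boils down to something like this
(the test binary name here is made up):

  valgrind --leak-check=full \
           --suppressions=test/memory/dolfin_valgrind.supp \
           ./memory_test

Candidate entries can be generated with --gen-suppressions=all, but that
output is version specific and still has to be edited by hand to insert
wildcards.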
We have managed to make a suppression file for our hardy buildbot, which is
why that buildbot is green: the memory test passes. The other Linux
buildbots do not have a correct suppression file, so they will fail.
Johan
> --
> Anders
>