
dolfin team mailing list archive

Re: [Question #143963]: fluid flow - memory problems

 

Question #143963 on DOLFIN changed:
https://answers.launchpad.net/dolfin/+question/143963

    Status: Answered => Open

Till B is still having a problem:
I made the mesh with Gmsh; it contains 227792 tetrahedra.

In the program I do nothing more than apply the boundary conditions, define
the variational problem, and solve it. I took the iterative Stokes demo and
changed only the mesh definition and my boundary conditions. As in the other
Stokes demos, I prepared the subdomains in a separate XML file.

I accept that 1 GB of RAM is not much. But is there an amount of RAM that is
sufficient regardless of how large the mesh is? When I first asked about my
memory problems, I understood that the iterative solver has an upper bound
on its RAM consumption.
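For a rough sense of scale: the system matrix alone for Taylor-Hood (P2-P1) elements on a mesh of this size plausibly approaches the ~1.1 GB allocation PETSc reports below. A back-of-envelope sketch (the entity-count ratios and the nonzeros-per-row figure are rough assumptions for illustration, not measured DOLFIN/PETSc values):

```python
# Back-of-envelope memory estimate for a Stokes (Taylor-Hood P2-P1)
# system matrix on a tetrahedral mesh. All heuristics below are rough
# assumptions, not exact DOLFIN/PETSc numbers.

n_cells = 227792                  # tetrahedra, from the Gmsh mesh

# Rough entity counts for an unstructured tet mesh (heuristic ratios):
n_vertices = n_cells // 6         # ~38k vertices
n_edges = 7 * n_cells // 6        # ~266k edges

# Taylor-Hood: vector P2 velocity (dofs at vertices and edge midpoints),
# scalar P1 pressure (dofs at vertices).
velocity_dofs = 3 * (n_vertices + n_edges)
pressure_dofs = n_vertices
total_dofs = velocity_dofs + pressure_dofs

# A PETSc SeqAIJ matrix stores roughly 12 bytes per nonzero (8-byte
# double value + 4-byte column index) plus a 4-byte row pointer per row.
# Assume ~60 nonzeros per row for a 3D mixed element (a guess).
nnz_per_row = 60
matrix_bytes = total_dofs * nnz_per_row * 12 + total_dofs * 4

print(f"estimated dofs:   {total_dofs:,}")
print(f"estimated matrix: {matrix_bytes / 2**30:.2f} GiB")
```

Under these assumptions the matrix alone, before Krylov work vectors and the preconditioner are counted, is already comparable to the 1.2 GB the virtual machine has, so the failure during assembly is unsurprising. An iterative solver bounds the memory needed *beyond* the assembled matrix (no fill-in, unlike a direct LU factorization), but it cannot shrink the matrix itself, so there is no mesh-independent RAM ceiling.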


2011/2/14 Anders Logg <question143963@xxxxxxxxxxxxxxxxxxxxx>

> Your question #143963 on DOLFIN changed:
> https://answers.launchpad.net/dolfin/+question/143963
>
>     Status: Open => Answered
>
> Anders Logg proposed the following answer:
> How large is your mesh? What kind of mesh is it? Triangles or
> tetrahedra?
>
> The storage of the mesh itself is very efficient in DOLFIN but
> depending on what other things you do in your program, you may well
> run out of memory. 1GB is not that much.
>
> --
> Anders
>
>
> On Mon, Feb 14, 2011 at 11:49:36AM -0000, Till B wrote:
> > Question #143963 on DOLFIN changed:
> > https://answers.launchpad.net/dolfin/+question/143963
> >
> > Till B posted a new comment:
> > Hi again!
> >
> > So I finally got Xubuntu running on a virtual machine. I installed FEniCS
> > from the FEniCS PPA repositories. Now the iterative Stokes demo works,
> > but when I use my own mesh, I get memory problems again. In addition,
> > PETSc produces errors. My virtual machine has around 1.2 GB of RAM; more
> > is not possible. Does this mean that FEniCS is in general not suitable
> > for larger meshes?
> > Here is the output:
> >
> > till@virtual-python:~/Desktop/sf_winshared/test$ python demo.py
> > Assembling linear system and applying boundary conditions...
> > [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> > [0]PETSC ERROR: Out of memory. This could be due to allocating
> > [0]PETSC ERROR: too large an object or bleeding by not properly
> > [0]PETSC ERROR: destroying unneeded objects.
> > [0]PETSC ERROR: Memory allocated 0 Memory used by process 878977024
> > [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info.
> > [0]PETSC ERROR: Memory requested 1131238140!
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 10, Tue Nov 24
> 16:38:09 CST 2009
> > [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> > [0]PETSC ERROR: See docs/index.html for manual pages.
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: Unknown Name on a linux-gnu named virtual-python by till
> Mon Feb 14 10:56:04 2011
> > [0]PETSC ERROR: Libraries linked from
> /build/buildd/petsc-3.0.0.dfsg/linux-gnu-c-opt/lib
> > [0]PETSC ERROR: Configure run at Thu Dec 31 09:53:25 2009
> > [0]PETSC ERROR: Configure options --with-shared --with-debugging=0
> --useThreads 0 --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi
> --with-mpi-shared=1 --with-blas-lib=-lblas-3gf
> --with-lapack-lib=-llapackgf-3 --with-umfpack=1
> --with-umfpack-include=/usr/include/suitesparse
> --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]"
> --with-superlu=1 --with-superlu-include=/usr/include/superlu
> --with-superlu-lib=/usr/lib/libsuperlu.so --with-spooles=1
> --with-spooles-include=/usr/include/spooles
> --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1
> --with-hypre-dir=/usr --with-scotch=1
> --with-scotch-include=/usr/include/scotch
> --with-scotch-lib=/usr/lib/libscotch.so
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: PetscMallocAlign() line 61 in src/sys/memory/mal.c
> > [0]PETSC ERROR: MatSeqAIJSetPreallocation_SeqAIJ() line 2986 in
> src/mat/impls/aij/seq/aij.c
> > [0]PETSC ERROR: MatCreateSeqAIJ() line 2863 in
> src/mat/impls/aij/seq/aij.c
> > [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> > [0]PETSC ERROR: Null argument, when expecting valid pointer!
> > [0]PETSC ERROR: Trying to zero at a null pointer!
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 10, Tue Nov 24
> 16:38:09 CST 2009
> > [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> > [0]PETSC ERROR: See docs/index.html for manual pages.
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: Unknown Name on a linux-gnu named virtual-python by till
> Mon Feb 14 10:56:04 2011
> > [0]PETSC ERROR: Libraries linked from
> /build/buildd/petsc-3.0.0.dfsg/linux-gnu-c-opt/lib
> > [0]PETSC ERROR: Configure run at Thu Dec 31 09:53:25 2009
> > [0]PETSC ERROR: Configure options --with-shared --with-debugging=0
> --useThreads 0 --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi
> --with-mpi-shared=1 --with-blas-lib=-lblas-3gf
> --with-lapack-lib=-llapackgf-3 --with-umfpack=1
> --with-umfpack-include=/usr/include/suitesparse
> --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]"
> --with-superlu=1 --with-superlu-include=/usr/include/superlu
> --with-superlu-lib=/usr/lib/libsuperlu.so --with-spooles=1
> --with-spooles-include=/usr/include/spooles
> --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1
> --with-hypre-dir=/usr --with-scotch=1
> --with-scotch-include=/usr/include/scotch
> --with-scotch-lib=/usr/lib/libscotch.so
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: PetscMemzero() line 189 in src/sys/utils/memc.c
> > [0]PETSC ERROR: MatZeroEntries_SeqAIJ() line 727 in
> src/mat/impls/aij/seq/aij.c
> > [0]PETSC ERROR: MatZeroEntries() line 4796 in src/mat/interface/matrix.c
> > [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> > [0]PETSC ERROR: Null argument, when expecting valid pointer!
> > [0]PETSC ERROR: Trying to zero at a null pointer!
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 10, Tue Nov 24
> 16:38:09 CST 2009
> > [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> > [0]PETSC ERROR: See docs/index.html for manual pages.
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: Unknown Name on a linux-gnu named virtual-python by till
> Mon Feb 14 10:56:04 2011
> > [0]PETSC ERROR: Libraries linked from
> /build/buildd/petsc-3.0.0.dfsg/linux-gnu-c-opt/lib
> > [0]PETSC ERROR: Configure run at Thu Dec 31 09:53:25 2009
> > [0]PETSC ERROR: Configure options --with-shared --with-debugging=0
> --useThreads 0 --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi
> --with-mpi-shared=1 --with-blas-lib=-lblas-3gf
> --with-lapack-lib=-llapackgf-3 --with-umfpack=1
> --with-umfpack-include=/usr/include/suitesparse
> --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]"
> --with-superlu=1 --with-superlu-include=/usr/include/superlu
> --with-superlu-lib=/usr/lib/libsuperlu.so --with-spooles=1
> --with-spooles-include=/usr/include/spooles
> --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1
> --with-hypre-dir=/usr --with-scotch=1
> --with-scotch-include=/usr/include/scotch
> --with-scotch-lib=/usr/lib/libscotch.so
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: PetscMemzero() line 189 in src/sys/utils/memc.c
> > [0]PETSC ERROR: MatZeroEntries_SeqAIJ() line 727 in
> src/mat/impls/aij/seq/aij.c
> > [0]PETSC ERROR: MatZeroEntries() line 4796 in src/mat/interface/matrix.c
> > Computing Dirichlet boundary values, topological search [=>           ]
> 8.2%
> > Computing Dirichlet boundary values, topological search [==>          ]
> 16.5%
> > Computing Dirichlet boundary values, topological search [===>         ]
> 28.9%
> > Computing Dirichlet boundary values, topological search [=====>       ]
> 41.2%
> > Computing Dirichlet boundary values, topological search [======>      ]
> 53.6%
> > Computing Dirichlet boundary values, topological search [========>    ]
> 65.9%
> > Computing Dirichlet boundary values, topological search [==========>  ]
> 78.3%
> > Computing Dirichlet boundary values, topological search [===========> ]
> 90.7%
> > Computing Dirichlet boundary values, topological search [=============]
> 100.0%
> > Computing Dirichlet boundary values, topological search [=============]
> 100.0%
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
> > [0]PETSC ERROR: Try option -start_in_debugger or
> -on_error_attach_debugger
> > [0]PETSC ERROR: or see
> http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal
> > [0]PETSC ERROR: or try http://valgrind.org on linux or man libgmalloc on
> Apple to find memory corruption errors
> > [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link,
> and run
> > [0]PETSC ERROR: to get more information on the crash.
> > [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> > [0]PETSC ERROR: Signal received!
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 10, Tue Nov 24
> 16:38:09 CST 2009
> > [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> > [0]PETSC ERROR: See docs/index.html for manual pages.
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: Unknown Name on a linux-gnu named virtual-python by till
> Mon Feb 14 10:56:04 2011
> > [0]PETSC ERROR: Libraries linked from
> /build/buildd/petsc-3.0.0.dfsg/linux-gnu-c-opt/lib
> > [0]PETSC ERROR: Configure run at Thu Dec 31 09:53:25 2009
> > [0]PETSC ERROR: Configure options --with-shared --with-debugging=0
> --useThreads 0 --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi
> --with-mpi-shared=1 --with-blas-lib=-lblas-3gf
> --with-lapack-lib=-llapackgf-3 --with-umfpack=1
> --with-umfpack-include=/usr/include/suitesparse
> --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]"
> --with-superlu=1 --with-superlu-include=/usr/include/superlu
> --with-superlu-lib=/usr/lib/libsuperlu.so --with-spooles=1
> --with-spooles-include=/usr/include/spooles
> --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1
> --with-hypre-dir=/usr --with-scotch=1
> --with-scotch-include=/usr/include/scotch
> --with-scotch-lib=/usr/lib/libscotch.so
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: User provided function() line 0 in unknown directory
> unknown file
> >
> --------------------------------------------------------------------------
> > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> > with errorcode 59.
> >
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > You may or may not see output from other processes, depending on
> > exactly when Open MPI kills them.
> >
> --------------------------------------------------------------------------
> >
> > You received this question notification because you are a member of
> > DOLFIN Team, which is an answer contact for DOLFIN.
> >
> > _______________________________________________
> > Mailing list: https://launchpad.net/~dolfin
> > Post to     : dolfin@xxxxxxxxxxxxxxxxxxx
> > Unsubscribe : https://launchpad.net/~dolfin
> > More help   : https://help.launchpad.net/ListHelp
>
> --
> If this answers your question, please go to the following page to let us
> know that it is solved:
> https://answers.launchpad.net/dolfin/+question/143963/+confirm?answer_id=18
>
> If you still need help, you can reply to this email or go to the
> following page to enter your feedback:
> https://answers.launchpad.net/dolfin/+question/143963
>
> You received this question notification because you are a direct
> subscriber of the question.
>



