dolfin team mailing list archive

[Question #139504]: RuntimeError: maximum recursion depth exceeded when plotting

Question #139504 on DOLFIN changed:
https://answers.launchpad.net/dolfin/+question/139504

Description changed to:
Hi,

I have the following two functions to plot the carrier concentration
from the eigenstates of the Schrödinger equation:

    def _carrier_distribution(self, r, c, rx, cx, Q, T=300):
        E = sqrt(r**2 + c**2)
        ru = Function(Q, rx)
        cu = Function(Q, cx)

        # probability distribution in space
        pE = dot(ru, ru) + dot(cu, cu)

        # occupation of state E
        m = Constant(1.0)  # self.problem.effective_mass(Q)
        EF = Constant(0.0)
        nE = (m/(pi*hbar**2))*k*T*ln(1 + exp((EF - E)/(k*T)))

        # plot(pE*nE)
        return pE*nE

    def carrier_distribution(self, state, Q, T=300):
        n = None
        for k in range(state.get_number_converged()):
            r, c, rx, cx = state.get_eigenpair(k)

            nk = self._carrier_distribution(r, c, rx, cx, Q, T)

            # plot(nk)
            if n is None:
                n = nk
            else:
                n = n + nk
                plot(n)  # this does not work

        return n
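
(For completeness: the snippet relies on a few names it does not define.
I assume roughly the following setup, with hbar and k taken from
scipy.constants as a guess:)

from dolfin import *                  # Function, Constant, dot, ln, exp, plot, ...
from math import sqrt, pi             # plain scalars for the eigenvalue magnitude
from scipy.constants import hbar, k   # reduced Planck constant, Boltzmann constant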

I can plot the individual nk in the carrier_distribution function, but
when I try to plot n (the superposition of the nk's) I get the error
below. If I wait and only try to plot the final n, I get "RuntimeError:
maximum recursion depth exceeded...". Any help would be appreciated. I
am using the PPA version of DOLFIN. A workaround I am considering is
sketched after the error output.


Eigenvalue solver (krylovschur) converged in 5 iterations.
Object cannot be plotted directly, projecting to piecewise linears.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal
[0]PETSC ERROR: or try http://valgrind.org on linux or man libgmalloc on Apple to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run 
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Signal received!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 10, Tue Nov 24 16:38:09 CST 2009
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Unknown Name on a linux-gnu named affouda by chaffra Wed Dec 29 10:23:18 2010
[0]PETSC ERROR: Libraries linked from /build/buildd/petsc-3.0.0.dfsg/linux-gnu-c-opt/lib
[0]PETSC ERROR: Configure run at Thu Dec 31 09:53:16 2009
[0]PETSC ERROR: Configure options --with-shared --with-debugging=0 --useThreads 0 --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi --with-mpi-shared=1 --with-blas-lib=-lblas-3gf --with-lapack-lib=-llapackgf-3 --with-umfpack=1 --with-umfpack-include=/usr/include/suitesparse --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]" --with-superlu=1 --with-superlu-include=/usr/include/superlu --with-superlu-lib=/usr/lib/libsuperlu.so --with-spooles=1 --with-spooles-include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1 --with-hypre-dir=/usr --with-scotch=1 --with-scotch-include=/usr/include/scotch --with-scotch-lib=/usr/lib/libscotch.so
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD 
with errorcode 59.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
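
For reference, the workaround I am considering is sketched below. It is
untested and assumes that project() accepts the summed expression and
that Q is an appropriate space to hold the total concentration; the idea
is to project each partial sum into a Function so that plot() never has
to process a deeply nested sum of expressions:

    def carrier_distribution(self, state, Q, T=300):
        n = Function(Q)  # running total stored as a plain Function
        for i in range(state.get_number_converged()):
            r, c, rx, cx = state.get_eigenpair(i)
            nk = self._carrier_distribution(r, c, rx, cx, Q, T)
            # project the shallow expression n + nk back onto Q each pass,
            # so the expression tree does not grow with the number of states
            n = project(n + nk, Q)
            plot(n)
        return n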
