
touch-packages team mailing list archive

[Bug 1352718] Re: Unknown memory utilization in Ubuntu 14.04 Trusty

 

[Expired for procps (Ubuntu) because there has been no activity for 60
days.]

** Changed in: procps (Ubuntu)
       Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to procps in Ubuntu.
https://bugs.launchpad.net/bugs/1352718

Title:
  Unknown memory utilization in Ubuntu 14.04 Trusty

Status in procps package in Ubuntu:
  Expired

Bug description:
  I'm running Ubuntu Trusty 14.04 on a new machine with 8 GB of RAM,
  and it seems to be locking up periodically, with nothing in the
  syslog. I've installed Nagios and have been watching the graphs:
  memory usage climbs from 7% to 72% in a span of just 10 minutes.
  Only Node.js processes are running on the server, and in top they
  all show normal memory consumption. Even after stopping the Node.js
  processes, memory utilization stays the same.


  free agrees, claiming I'm using more than 5.7G of memory:

     free -h
               total       used       free     shared    buffers     cached
  Mem:          7.8G       6.5G       1.3G       2.2M       233M       612M
  -/+ buffers/cache:       5.7G       2.1G
  Swap:         2.0G         0B       2.0G
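
  As a sanity check on where that 5.7G figure comes from: free(1) just
  reads /proc/meminfo. Here's a minimal sketch (my own arithmetic, not
  free's actual source) that roughly reproduces the "-/+ buffers/cache"
  line from the raw counters:

      # All /proc/meminfo values below are in KiB; 1048576 KiB = 1 GiB.
      awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2}
           /^Buffers:/  {b=$2} /^Cached:/  {c=$2}
           END {
             printf "used:              %.1fG\n", (t-f)/1048576
             printf "-/+ buffers/cache: %.1fG\n", (t-f-b-c)/1048576
           }' /proc/meminfo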

  
  However, I'm having trouble determining what exactly is eating all
  of that memory. Running top or htop doesn't seem to single anything
  out, and ps_mem.py
  (https://raw.github.com/pixelb/ps_mem/master/ps_mem.py) claims that
  I'm using much less than the system thinks...

      ...
   Private  +   Shared  =  RAM used	Program

  184.0 KiB +  29.5 KiB = 213.5 KiB	atd
  176.0 KiB +  48.5 KiB = 224.5 KiB	acpid
  164.0 KiB +  99.5 KiB = 263.5 KiB	anvil
  272.0 KiB +  52.0 KiB = 324.0 KiB	upstart-file-bridge
  288.0 KiB +  76.0 KiB = 364.0 KiB	cron
  312.0 KiB +  60.0 KiB = 372.0 KiB	irqbalance
  208.0 KiB + 188.0 KiB = 396.0 KiB	sh (2)
  328.0 KiB +  87.5 KiB = 415.5 KiB	upstart-udev-bridge
  312.0 KiB + 104.5 KiB = 416.5 KiB	log
  424.0 KiB +  53.5 KiB = 477.5 KiB	upstart-socket-bridge
  304.0 KiB + 213.5 KiB = 517.5 KiB	pickup
  336.0 KiB + 213.5 KiB = 549.5 KiB	qmgr
  396.0 KiB + 165.5 KiB = 561.5 KiB	dovecot
  360.0 KiB + 205.5 KiB = 565.5 KiB	master
  528.0 KiB +  52.5 KiB = 580.5 KiB	nrpe
  608.0 KiB + 148.5 KiB = 756.5 KiB	systemd-logind
  764.0 KiB +  61.5 KiB = 825.5 KiB	dbus-daemon
  772.0 KiB + 107.0 KiB = 879.0 KiB	top
  808.0 KiB +  87.5 KiB = 895.5 KiB	systemd-udevd
  940.0 KiB + 147.5 KiB =   1.1 MiB	ntpd
  956.0 KiB + 285.0 KiB =   1.2 MiB	getty (6)
    1.1 MiB + 134.0 KiB =   1.2 MiB	config
    1.6 MiB + 121.5 KiB =   1.7 MiB	init
    2.5 MiB +  22.0 KiB =   2.6 MiB	dhclient
    2.8 MiB + 476.5 KiB =   3.3 MiB	vmtoolsd
    4.2 MiB + 452.5 KiB =   4.6 MiB	whoopsie
    5.1 MiB +  96.5 KiB =   5.2 MiB	rsyslogd
    3.6 MiB +   2.3 MiB =   5.9 MiB	sshd (4)
    6.7 MiB +   1.0 MiB =   7.7 MiB	bash (3)
    8.3 MiB + 277.5 KiB =   8.6 MiB	redis-server (3)
   13.0 MiB +  26.5 KiB =  13.0 MiB	docker
  342.0 MiB +   6.9 MiB = 348.9 MiB	nodejs (8)
  ---------------------------------
                          414.3 MiB
  =================================
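
  ps_mem's total can be cross-checked against the kernel's Pss
  accounting directly. A rough sketch, assuming a kernel that exposes
  Pss in /proc/<pid>/smaps, and run as root so every process is
  readable:

      # Sum Pss over all processes: shared pages are split between the
      # processes using them, so this avoids double-counting. (If a
      # process exits mid-scan, awk may bail on the vanished file.)
      sudo awk '/^Pss:/ {t += $2} END {printf "total Pss: %.1f MiB\n", t/1024}' \
          /proc/[0-9]*/smaps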

  
  This other formula for totaling the memory (a straight RSS sum,
  which if anything over-counts shared pages) roughly agrees:

      # ps -e -orss=,args= | sort -b -k1,1n |   awk '{total = total + $1}END{print total}'
      503612

  If the processes only total 500 MiB, where's the rest of the memory
  going?
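
  Memory that no process owns usually shows up in the kernel-side
  counters of /proc/meminfo, so those are worth dumping before
  reaching for slabtop:

      # Kernel-side consumers that never appear in any process's RSS.
      grep -E '^(Slab|SReclaimable|SUnreclaim|PageTables|KernelStack|Shmem|Mapped|VmallocUsed):' \
          /proc/meminfo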

  slabtop doesn't suggest I have a huge kernel cache or anything like
  that, either...

   Active / Total Objects (% used)    : 672886 / 681837 (98.7%)
   Active / Total Slabs (% used)      : 15441 / 15441 (100.0%)
   Active / Total Caches (% used)     : 70 / 101 (69.3%)
   Active / Total Size (% used)       : 179811.23K / 184282.05K (97.6%)
   Minimum / Average / Maximum Object : 0.01K / 0.27K / 8.00K

    OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
  171318 171318 100%    0.19K   4079       42     32632K dentry                 
  127257 127257 100%    0.10K   3263       39     13052K buffer_head            
   75669  75669 100%    0.96K   2293       33     73376K ext4_inode_cache       
   35328  34959  98%    0.06K    552       64      2208K kmalloc-64             
   33354  33354 100%    0.04K    327      102      1308K ext4_extent_status     
   25560  25560 100%    0.11K    710       36      2840K sysfs_dir_cache        
   18944  18944 100%    0.01K     37      512       148K kmalloc-8              
   18848  18848 100%    0.50K    589       32      9424K kmalloc-512            
   17680  17680 100%    0.05K    208       85       832K shared_policy_node     
   17248  17248 100%    0.12K    539       32      2156K au_dinfo               
   15390   9116  59%    0.55K    270       57      8640K radix_tree_node        
   15372  15372 100%    0.09K    366       42      1464K kmalloc-96             
   13398  13398 100%    0.75K    319       42     10208K au_icntnr              
   11424  11424 100%    0.57K    204       56      6528K inode_cache            
   11312  11312 100%    0.07K    202       56       808K Acpi-ParseExt          
   11072  11072 100%    0.06K    173       64       692K ext4_free_data         
    7650   7650 100%    0.04K     75      102       300K Acpi-Namespace         
    7168   7168 100%    0.02K     28      256       112K kmalloc-16             
    7014   6598  94%    0.19K    167       42      1336K kmalloc-192            
    5984   5878  98%    0.12K    187       32       748K kmalloc-128            
    5504   5196  94%    0.03K     43      128       172K kmalloc-32             
    3328   3328 100%    0.03K     26      128       104K jbd2_revoke_record_s   
    3008   3008 100%    0.06K     47       64       188K anon_vma               
    2912   2494  85%    0.25K     91       32       728K kmalloc-256            
    2850   2727  95%    0.63K     57       50      1824K proc_inode_cache       
    1792   1792 100%    0.07K     32       56       128K ext4_io_end            
    1248   1176  94%    1.00K     39       32      1248K kmalloc-1024           
    1152   1152 100%    0.66K     24       48       768K shmem_inode_cache      
    1044   1044 100%    0.11K     29       36       116K jbd2_journal_head      
    1040    780  75%    0.30K     20       52       320K nf_conntrack_ffffffff81cda040
     969    969 100%    0.62K     19       51       608K sock_inode_cache       
     884    624  70%    0.30K     17       52       272K nf_conntrack_ffff880036b7b000
     864    672  77%    0.25K     27       32       216K tw_sock_TCP            
     756    756 100%    0.88K     21       36       672K mm_struct              
     630    630 100%    1.06K     21       30       672K signal_cache           
     624    540  86%    2.00K     39       16      1248K kmalloc-2048           
     507    507 100%    0.81K     13       39       416K task_xstate            
     462    462 100%    0.38K     11       42       176K blkdev_requests        
     378    378 100%    0.19K      9       42        72K au_finfo               
     365    349  95%    5.98K     73        5      2336K task_struct            
     364    364 100%    0.30K      7       52       112K nf_conntrack_ffff880036503000
     360    360 100%    0.13K      6       60        48K ext4_allocation_context
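
  The header lines above already put the slab total at about 180 MiB
  (184282.05K), which can be cross-checked straight from
  /proc/slabinfo. A rough sketch, assuming the slabinfo 2.1 column
  layout of a 3.x kernel:

      # Columns are: name, active_objs, num_objs, objsize, ...
      # Skip the two header lines, then total num_objs * objsize.
      sudo awk 'NR > 2 {t += $3 * $4} END {printf "slab total: %.1f MiB\n", t/1048576}' \
          /proc/slabinfo

  So ~414 MiB of process memory plus ~180 MiB of slab still leaves
  several GiB of the 5.7G "used" figure unaccounted for.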

  
  What other tests can I do to understand my memory usage? Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/procps/+bug/1352718/+subscriptions