[Bug 1412961] Re: storing multiple heat snapshots causing significant memory consumption
** Project changed: heat => glance
** Summary changed:
- storing multiple heat snapshots causing significant memory consumption
+ storing multiple snapshots causing significant memory consumption
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1412961
Title:
storing multiple snapshots causing significant memory consumption
Status in OpenStack Image Registry and Delivery Service (Glance):
New
Bug description:
Running a randomized snapshot + restore workload against a single heat
stack (single VM) with 2 concurrent users causes significant memory
consumption that persists even after all tests have completed; reported
usage grows by roughly 27.5 GiB over the run:
KiB Mem: 37066812 total, 6883732 used, 30183080 free, 244800 buffers <- pre workload
KiB Mem: 37066812 total, 35717836 used, 1348976 free, 29860 buffers <- post workload
Tests:
git clone https://github.com/pcrews/rannsaka
cd rannsaka/rannsaka
python rannsaka.py --host=http://192.168.0.5 --requests 500 -w 2 --test-file=locust_files/heat_basic_stress.py
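A minimal helper along these lines (hypothetical; not part of rannsaka) can record the same /proc/meminfo figures before and after the workload, which makes the pre/post comparison below reproducible without scraping top output:

# check_meminfo.py -- hypothetical helper, not part of rannsaka.
# Reads the same KiB figures that top reports, from /proc/meminfo (Linux only).
def read_meminfo(fields=("MemTotal", "MemFree", "Buffers", "Cached")):
    values = {}
    with open("/proc/meminfo") as fh:
        for line in fh:
            name, rest = line.split(":", 1)
            if name in fields:
                values[name] = int(rest.split()[0])  # values are reported in kB
    return values

if __name__ == "__main__":
    snapshot = read_meminfo()
    print(" ".join("%s=%dkB" % (k, snapshot[k]) for k in sorted(snapshot)))

Running it immediately before and immediately after the rannsaka run gives the two KiB Mem snapshots quoted above.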
# PRE STRESS TEST
top - 12:01:42 up 1:04, 2 users, load average: 0.87, 0.87, 0.58
Tasks: 368 total, 2 running, 366 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.3 us, 0.3 sy, 0.3 ni, 96.5 id, 1.6 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 37066812 total, 6883732 used, 30183080 free, 244800 buffers
KiB Swap: 37738492 total, 0 used, 37738492 free. 2516732 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14687 stack 20 0 221808 8092 4596 R 99.3 0.0 0:05.49 qemu-img
2208 rabbitmq 20 0 2349296 66804 2516 S 6.2 0.2 0:28.53 beam.smp
12106 stack 20 0 173852 73228 5084 S 6.2 0.2 0:03.91 glance-api
14705 stack 20 0 25212 1728 1088 R 6.2 0.0 0:00.01 top
# POST STRESS TEST
top - 12:46:17 up 1:49, 3 users, load average: 0.34, 0.53, 1.39
Tasks: 376 total, 2 running, 374 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.8 us, 0.2 sy, 0.0 ni, 98.5 id, 0.5 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 37066812 total, 35717836 used, 1348976 free, 29860 buffers
KiB Swap: 37738492 total, 251000 used, 37487492 free. 30435868 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21654 libvirt+ 20 0 4812112 491872 9416 S 0.3 1.3 0:15.80 qemu-system-x86
21402 libvirt+ 20 0 4743248 481652 9392 S 0.3 1.3 0:36.78 qemu-system-x86
12281 stack 20 0 340540 145080 3732 S 0.0 0.4 0:12.59 nova-api
12282 stack 20 0 340524 144968 3760 S 0.0 0.4 0:12.53 nova-api
12280 stack 20 0 339660 143952 3736 S 0.0 0.4 0:12.08 nova-api
12279 stack 20 0 337568 141988 3732 S 0.0 0.4 0:13.15 nova-api
9784 mysql 20 0 4099628 121912 7984 S 0.3 0.3 0:34.01 mysqld
12423 stack 20 0 276484 89216 3608 S 0.0 0.2 0:12.66 nova-conductor
12428 stack 20 0 276584 89184 3604 S 0.3 0.2 0:12.86 nova-conductor
12424 stack 20 0 276372 89148 3608 S 0.0 0.2 0:12.27 nova-conductor
12422 stack 20 0 276112 88764 3604 S 0.7 0.2 0:12.51 nova-conductor
12425 stack 20 0 275296 87968 3604 S 0.3 0.2 0:12.79 nova-conductor
12429 stack 20 0 275232 87776 3600 S 0.3 0.2 0:12.76 nova-conductor
12262 stack 20 0 190240 87700 5472 S 0.7 0.2 0:29.42 nova-api
12426 stack 20 0 273992 86596 3608 S 0.0 0.2 0:11.64 nova-conductor
12427 stack 20 0 272976 85724 3592 S 0.3 0.2 0:12.00 nova-conductor
12289 stack 20 0 190240 83728 1568 S 0.0 0.2 0:00.01 nova-api
12290 stack 20 0 190240 83728 1568 S 0.0 0.2 0:00.00 nova-api
12291 stack 20 0 190240 83728 1568 S 0.0 0.2 0:00.01 nova-api
12292 stack 20 0 190240 83728 1568 S 0.0 0.2 0:00.01 nova-api
To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1412961/+subscriptions