yahoo-eng-team team mailing list archive
Message #19607
[Bug 1362347] [NEW] neutron-ns-meta invokes oom-killer during gate runs
Public bug reported:
Occasionally a neutron gate job fails because the node runs out of
memory. oom-killer is invoked and starts killing processes to save the
node (which just causes cascading issues). The kernel logs show that
oom-killer is being invoked by neutron-ns-meta.
An example of one such failure is:
http://logs.openstack.org/75/116775/2/check/check-tempest-dsvm-neutron-full/ab17a70/
With the kernel log:
http://logs.openstack.org/75/116775/2/check/check-tempest-dsvm-neutron-full/ab17a70/logs/syslog.txt.gz#_Aug_26_04_59_03
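For anyone triaging similar failures, a quick way to confirm which process invoked the OOM killer is to grep the kernel log for the "invoked oom-killer" marker. The snippet below is a minimal sketch; the log excerpt and the /tmp path are illustrative stand-ins, not taken from the actual gate run:

```shell
# Simulate a kernel log excerpt (illustrative content; real gate syslogs differ)
cat > /tmp/syslog.sample <<'EOF'
Aug 26 04:59:03 host kernel: neutron-ns-meta invoked oom-killer: gfp_mask=0x201da, order=0
Aug 26 04:59:03 host kernel: Out of memory: Kill process 1234 (neutron-ns-meta) score 500 or sacrifice child
EOF

# Extract the name of the process that invoked the OOM killer
grep -o '[a-z-]* invoked oom-killer' /tmp/syslog.sample | awk '{print $1}'
```

On the real node you would point the grep at the downloaded syslog.txt instead of the sample file.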
Using logstash, this failure can be isolated to neutron gate jobs only,
so something is probably triggering neutron to occasionally make the
job consume in excess of 8 GB of RAM.
I also noted in the neutron svc log that the first out-of-memory error
came from keystone-middleware:
http://logs.openstack.org/75/116775/2/check/check-tempest-dsvm-neutron-full/ab17a70/logs/screen-q-svc.txt.gz#_2014-08-26_04_56_39_602
but that may just be a red herring.
** Affects: neutron
Importance: Undecided
Status: New
** Tags: gate-failure
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362347
Title:
neutron-ns-meta invokes oom-killer during gate runs
Status in OpenStack Neutron (virtual network service):
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362347/+subscriptions