
yahoo-eng-team team mailing list archive

[Bug 1681973] [NEW] neutron-ns-metadata-proxy wsgi settings not tunable

 

Public bug reported:


With commit https://github.com/openstack/neutron/commit/9d573387f1e33ce85269d3ed9be501717eed4807, the default WSGI connection pool was lowered to 100 threads. This is now a problem for the neutron-ns-metadata-proxy, which is spawned by the neutron-metadata-agent without any WSGI tuning parameters being passed on the command line.
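
For context, the proxy's WSGI server runs on eventlet and limits concurrent connections with a green-thread pool. Below is a minimal sketch of that pattern (illustrative only, not the actual neutron-ns-metadata-proxy code), with 100 standing in for the lowered default and 9697 for the usual metadata_port:

    # Minimal sketch of an eventlet WSGI server with a bounded green-thread
    # pool -- illustrative only, not the actual neutron-ns-metadata-proxy code.
    import eventlet
    import eventlet.wsgi

    POOL_SIZE = 100  # the lowered default: one green thread per connection

    def app(environ, start_response):
        # Placeholder for the proxy logic that would forward the request to
        # the metadata agent over its Unix socket.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'metadata response\n']

    sock = eventlet.listen(('0.0.0.0', 9697))
    pool = eventlet.GreenPool(POOL_SIZE)
    # Once the pool is exhausted, additional TCP connections sit in the
    # listen backlog until a green thread becomes free.
    eventlet.wsgi.server(sock, app, custom_pool=pool)

Once the pool is exhausted, new connections are only accepted up to the socket backlog and then have to wait, which is exactly the bottleneck described below.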

I originally ran into this issue with a customer who was running Chef on their guest instances and querying the metadata service quite heavily (via the ohai plugin), which led to a socket bottleneck. The ns-metadata-proxy can only open 100 sockets per namespace to the metadata agent; any further TCP connections (all the way up to the configured backlog limit) get delayed and backlogged. That in turn causes timeouts in the clients consuming metadata, which makes the problem even worse.
Once I started the ns-metadata-proxy manually with an increased WSGI thread count, all application issues disappeared. The more networks are attached to a neutron router, the more pronounced this problem becomes.
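
To make the symptom concrete, here is a hypothetical reproduction sketch (not part of the original report): fire well over 100 concurrent metadata requests from a guest and compare latencies; once the proxy's pool is exhausted, the excess requests queue in the backlog and their latency jumps.

    # Hypothetical reproduction sketch: issue more concurrent metadata
    # requests than the proxy's pool size and compare per-request latency.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = 'http://169.254.169.254/latest/meta-data/instance-id'
    CONCURRENCY = 200  # comfortably above the 100-thread pool

    def fetch(_):
        start = time.time()
        try:
            urllib.request.urlopen(URL, timeout=30).read()
        except Exception:
            pass  # a timeout still yields a (large) latency sample
        return time.time() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(fetch, range(CONCURRENCY)))

    print('p50=%.2fs p99=%.2fs' % (latencies[len(latencies) // 2],
                                   latencies[int(len(latencies) * 0.99)]))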

Knowing that master and Ocata now use a new nginx-based implementation,
can this issue still be solved (although I assume the actual fix would
be quite small)?

** Affects: neutron
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1681973

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1681973/+subscriptions

