[Bug 1404268] [NEW] Missing nova context during spawn
Public bug reported:
The nova request context tracks a security context and other request
information, including a request id that is added to log entries
associated with the request. The request context is passed around
explicitly in much of the OpenStack code, but nova/context.py also
stores the RequestContext in the thread's local store (when the
RequestContext is created, or when it is explicitly stored through a
call to update_store). The nova logger uses an explicitly passed
context if one is given; otherwise it looks for one in the thread's
local store.
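For illustration, here is a minimal, self-contained sketch of that
pattern, using plain Python threading.local as a stand-in for nova's
local store (names and structure are simplified, not nova's exact code):

    import threading

    # Stand-in for the per-thread store that nova keeps the context in.
    _store = threading.local()

    class RequestContext(object):
        def __init__(self, request_id):
            self.request_id = request_id
            # Creating the context also records it in the current thread's store.
            self.update_store()

        def update_store(self):
            _store.context = self

    def log(msg, context=None):
        # Like the nova logger: use an explicitly passed context if given,
        # otherwise fall back to whatever is in the thread's local store.
        ctx = context or getattr(_store, 'context', None)
        print('[%s] %s' % (ctx.request_id if ctx else '-', msg))

    RequestContext('req-bd959d69-86de-4eea-ae1d-a066843ca317')
    log('Starting instance...')  # prints the request id in front of the message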
A recent change in the community OpenStack code has resulted in the context
not being set for many nova log messages during spawn:
https://bugs.launchpad.net/neutron/+bug/1372049
This change spawns a new thread in nova/compute/manager.py
build_and_run_instance, and the instance spawn runs in that new thread.
When the original RPC thread created the nova RequestContext, the context
was set in that thread's local store, but the context does not get set in
the newly spawned thread.
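This is a generic property of thread-local storage, illustrated below
with a plain Python thread standing in for the spawned worker (nova
actually uses an eventlet greenthread, whose greenthread-local store
behaves the same way for this purpose): data put into the local store by
the RPC thread is not visible from the new thread.

    import threading

    store = threading.local()

    def spawned_worker():
        # The new thread starts with its own empty local store, so the
        # context (and therefore the request id) is missing here.
        print(getattr(store, 'context', None))  # prints None

    store.context = 'req-bd959d69-86de-4eea-ae1d-a066843ca317'  # set by the "RPC thread"
    t = threading.Thread(target=spawned_worker)
    t.start()
    t.join()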
Example of log messages with missing req id during spawn:
2014-12-13 22:20:30.987 18219 DEBUG nova.openstack.common.lockutils [-] Acquired semaphore "87c7fc32-042e-40b7-af46-44bff50fa1b4" lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:229
2014-12-13 22:20:30.987 18219 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock "_locked_do_build_and_run_instance" inner /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:271
2014-12-13 22:20:31.012 18219 AUDIT nova.compute.manager [req-bd959d69-86de-4eea-ae1d-a066843ca317 None] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Starting instance...
...
2014-12-13 22:20:31.280 18219 DEBUG nova.openstack.common.lockutils [-] Created new semaphore "compute_resources" internal_lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:206
2014-12-13 22:20:31.281 18219 DEBUG nova.openstack.common.lockutils [-] Acquired semaphore "compute_resources" lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:229
2014-12-13 22:20:31.282 18219 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock "instance_claim" inner /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:271
2014-12-13 22:20:31.284 18219 DEBUG nova.compute.resource_tracker [-] Memory overhead for 512 MB instance; 0 MB instance_claim /usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py:127
2014-12-13 22:20:31.290 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Attempting claim: memory 512 MB, disk 10 GB
2014-12-13 22:20:31.292 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Total memory: 131072 MB, used: 12288.00 MB
2014-12-13 22:20:31.296 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] memory limit not specified, defaulting to unlimited
2014-12-13 22:20:31.300 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Total disk: 2097152 GB, used: 60.00 GB
2014-12-13 22:20:31.304 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] disk limit not specified, defaulting to unlimited
...
2014-12-13 22:20:32.850 18219 DEBUG nova.network.neutronv2.api [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py:611
Proposed patch:
Add one new line of code at the beginning of _do_build_and_run_instance
in nova/compute/manager.py:
context.update_store()
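As a sketch, the patched method would start roughly like this (the real
signature of _do_build_and_run_instance takes many more parameters,
abbreviated here with *args/**kwargs):

    # nova/compute/manager.py (abbreviated sketch, not the full method)
    def _do_build_and_run_instance(self, context, instance, *args, **kwargs):
        # Re-register the request context in this newly spawned thread's
        # local store so subsequent log messages pick up the request id.
        context.update_store()
        # ... existing build/run logic continues unchanged ...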
** Affects: nova
Importance: Undecided
Status: New