yahoo-eng-team team mailing list archive - Message #78462
[Bug 1829062] [NEW] nova placement api non-responsive due to eventlet error
Public bug reported:
In a StarlingX setup, we're running a nova Docker image based on nova stable/stein as of May 6.
We're seeing nova-compute processes stall and fail to create resource providers in placement.
openstack hypervisor list
+----+---------------------+-----------------+-----------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
+----+---------------------+-----------------+-----------------+-------+
| 5  | worker-1            | QEMU            | 192.168.206.247 | down  |
| 8  | worker-2            | QEMU            | 192.168.206.211 | down  |
+----+---------------------+-----------------+-----------------+-------+
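As a cross-check (not part of the original report, and assuming admin credentials are available for the deployment), the placement API can also be queried directly to confirm that no resource providers exist for these compute nodes. A rough sketch using keystoneauth1, with placeholder credentials and endpoint values:

import keystoneauth1.session as ks_session
from keystoneauth1.identity import v3

# Placeholder credentials; substitute real values for the deployment under test.
auth = v3.Password(auth_url='http://keystone.example:5000/v3',
                   username='admin', password='secret',
                   project_name='admin',
                   user_domain_name='Default', project_domain_name='Default')
sess = ks_session.Session(auth=auth)

# GET /resource_providers against the placement endpoint from the service catalog.
resp = sess.get('/resource_providers',
                endpoint_filter={'service_type': 'placement'},
                headers={'OpenStack-API-Version': 'placement 1.17'})
for rp in resp.json().get('resource_providers', []):
    print(rp['uuid'], rp['name'])
# For this bug we would expect worker-1 and worker-2 to be absent from the output.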
We observe this eventlet-related error in the nova-placement-api logs at the same time:
2019-05-14 00:44:03.636229 Traceback (most recent call last):
2019-05-14 00:44:03.636276 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 460, in fire_timers
2019-05-14 00:44:03.636536 timer()
2019-05-14 00:44:03.636560 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 59, in __call__
2019-05-14 00:44:03.636647 cb(*args, **kw)
2019-05-14 00:44:03.636661 File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/semaphore.py", line 147, in _do_acquire
2019-05-14 00:44:03.636774 waiter.switch()
2019-05-14 00:44:03.636792 error: cannot switch to a different thread
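For context (this is our illustration, not taken from the report): the greenlet library raises exactly this message when code tries to switch to a greenlet that belongs to a different OS thread. A minimal standalone reproduction looks roughly like this:

import threading
import greenlet

# A greenlet created here is bound to the main OS thread.
g = greenlet.greenlet(lambda: None)

def switch_from_foreign_thread():
    try:
        g.switch()  # switching to it from another OS thread is not allowed
    except greenlet.error as exc:
        print(exc)  # prints: cannot switch to a different thread

t = threading.Thread(target=switch_from_foreign_thread)
t.start()
t.join()

This would be consistent with the eventlet hub firing the semaphore timer in one OS thread of the placement WSGI process while the waiter greenlet it tries to wake was created in another.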
This is new behaviour for us on stable/stein, and we suspect it is due to the merge of this eventlet-related change on May 4:
https://github.com/openstack/nova/commit/6755034e109079fb5e8bbafcd611a919f0884d14
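As a diagnostic (our assumption, not something described in the linked change), one can check at runtime whether the affected process has actually been eventlet monkey patched, for example from a Python shell inside the container running the placement API:

import socket
import threading

import eventlet.patcher

# True means the module was replaced with eventlet's green version.
print(eventlet.patcher.is_monkey_patched(socket))
print(eventlet.patcher.is_monkey_patched(threading))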
** Affects: nova
Importance: Undecided
Status: New