yahoo-eng-team team mailing list archive
Message #72927
[Bug 1771517] [NEW] Quota update unexpected behavior with no access to keystone
Public bug reported:
Distro: OpenStack Queens running on Ubuntu 16.04
Since this commit [1], nova needs access to keystone to perform quota
operations (this bug is mostly about an issue we hit with quota
update). When keystone is not reachable, nova-api (running under
eventlet) tries the endpoints in the order given by
[keystone]/valid_interfaces; we did not have access to the internal
endpoint, which caused this issue:
2018-05-14 15:54:46.134 1241 INFO nova.api.openstack.identity [req-8b383cf0-7f99-41e6-9de3-5e694fb24449 f13940ac09924d8582fe6612e838c7a7 9387d3a7be2a487784a90660b6e182cb - default default] Unable to contact keystone to verify project_id
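For reference, the endpoint ordering mentioned above comes from
nova.conf; a typical [keystone] section might look like this (example
values, not necessarily our deployment):

    [keystone]
    # nova-api tries these keystone endpoint interfaces in order.
    valid_interfaces = internal,public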
You'll also see:
2018-05-14 15:54:46.419 1241 INFO nova.osapi_compute.wsgi.server [req-e9da4d33-05be-42fe-891d-0d201d2e8311 83e8a17bf7874682a86f9aa58f4c9507 e83ea76e472f48679f6fa6070a8a16e1 - default default] Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 512, in handle_one_response
    write(b''.join(towrite))
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 453, in write
    wfile.flush()
  File "/usr/lib/python2.7/socket.py", line 307, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
  File "/usr/lib/python2.7/dist-packages/eventlet/greenio/base.py", line 385, in sendall
    tail = self.send(data, flags)
  File "/usr/lib/python2.7/dist-packages/eventlet/greenio/base.py", line 379, in send
    return self._send_loop(self.fd.send, data, flags)
  File "/usr/lib/python2.7/dist-packages/eventlet/greenio/base.py", line 366, in _send_loop
    return send_method(data, *args)
error: [Errno 104] Connection reset by peer
That traceback itself is expected; what happens next, however, is in
my opinion not correct: nova generates a 200 OK response even though
it actually failed to perform the requested action.
2018-05-14 15:54:46.420 1241 INFO nova.osapi_compute.wsgi.server [req-e9da4d33-05be-42fe-891d-0d201d2e8311 83e8a17bf7874682a86f9aa58f4c9507 e83ea76e472f48679f6fa6070a8a16e1 - default default] ::ffff:195.74.38.54,172.20.104.11 "PUT /v2/e83ea76e472f48679f6fa6070a8a16e1/os-quota-sets/e83ea76e472f48679f6fa6070a8a16e1 HTTP/1.1" status: 200 len: 0 time: 128.0125880
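For anyone trying to reproduce this, a client-side check along these
lines (the endpoint, token and project variables are placeholders, and
this assumes you hit nova-api directly rather than going through a
load balancer) should surface the same symptom nova logs, a 200 with
an empty body:

    import os
    import requests

    # Placeholder values; substitute a real compute endpoint, token
    # and project id for your deployment.
    endpoint = os.environ['NOVA_ENDPOINT']  # e.g. http://controller:8774
    token = os.environ['OS_TOKEN']
    project = os.environ['OS_PROJECT_ID']

    # PUT /v2/{project_id}/os-quota-sets/{project_id} with a small change.
    resp = requests.put(
        '%s/v2/%s/os-quota-sets/%s' % (endpoint, project, project),
        headers={'X-Auth-Token': token},
        json={'quota_set': {'cores': 40}},
        timeout=300)

    # With keystone unreachable we saw nova record: status 200, len 0
    # (and only after the ~128 seconds of keystone retries).
    print(resp.status_code, len(resp.content))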
In our case we only noticed this because of a 504 gateway error: the
request took 128 seconds, which was longer than our load balancer
allows.
I think that, at the very least, catching the exception and returning
a 500 status code would be appropriate, and the failure should be
logged as an ERROR rather than an INFO message.
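To illustrate the shape of what I mean, here is a minimal sketch (not
nova's actual implementation; the adapter argument is assumed to be a
keystoneauth1 Adapter for the identity service):

    import logging

    import webob.exc
    from keystoneauth1 import exceptions as ks_exc

    LOG = logging.getLogger(__name__)

    def verify_project_id(project_id, adapter):
        # If keystone cannot be reached at all, log at ERROR and turn
        # the failure into a 500, instead of letting the request fall
        # through to a bogus 200.
        try:
            resp = adapter.get('/projects/%s' % project_id,
                               raise_exc=False)
        except (ks_exc.ConnectFailure, ks_exc.ConnectTimeout):
            LOG.error('Unable to contact keystone to verify '
                      'project_id %s', project_id)
            raise webob.exc.HTTPInternalServerError(
                explanation='Keystone is unavailable, cannot verify '
                            'project.')
        if resp.status_code == 404:
            raise webob.exc.HTTPBadRequest(
                explanation='Project %s could not be found.' % project_id)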
[1]
https://github.com/openstack/nova/commit/1f120b5649ba03aa5b2490a82c08b77c580f12d7
** Affects: nova
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1771517