yahoo-eng-team mailing list archive
Message #87017
[Bug 1936959] Re: [OVN Octavia Provider] Unable to delete Load Balancer with PENDING_DELETE
Reviewed: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/801517
Committed: https://opendev.org/openstack/ovn-octavia-provider/commit/fab03e7c6d3f61afab2fb71ef70efa603f393de2
Submitter: "Zuul (22348)"
Branch: master
commit fab03e7c6d3f61afab2fb71ef70efa603f393de2
Author: Brian Haley <bhaley@xxxxxxxxxx>
Date: Tue Jul 20 12:56:50 2021 -0400
Fix race condition retrieving logical router rows
Using rows.values() via the ovsdbapp API is inherently
racy as if there are multiple threads an add/delete can
interfere, triggering a RuntimeError (dictionary changed
size during iteration).
In one case we now directly use lookup() as we were
looking for a specific logical router.
In the other two cases we create a new class to do the
operation in a transaction, making it idempotent, since
both need to iterate the returned list.
Also changed the IDL notify code to use a frozen row to
similarly avoid any possible race condition there.
Change-Id: Id4e15867b61925fa157c9e81750d8c5f63ad48a5
Closes-bug: #1936959
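The race the commit describes can be sketched in plain Python (the `table_rows` dict and helper names below are hypothetical stand-ins for illustration, not the actual ovsdbapp API): iterating a live `dict.values()` view that another thread may mutate can raise RuntimeError, whereas snapshotting the values first filters a private copy.

```python
# Hypothetical sketch of the racy pattern and its fix. `table_rows`
# stands in for something like self.ovn_nbdb_api.tables[table].rows.
table_rows = {i: f"lr-{i}" for i in range(1000)}


def find_rows_racy(predicate):
    # Racy: iterates the live values() view, so a concurrent add/delete
    # in another thread can raise
    # "RuntimeError: dictionary changed size during iteration".
    return [row for row in table_rows.values() if predicate(row)]


def find_rows_safe(predicate):
    # Safer: snapshot the values first, then filter the copy; the
    # iteration no longer touches the shared, mutable view.
    return [row for row in list(table_rows.values()) if predicate(row)]
```

In CPython the `list(...)` copy is effectively atomic under the GIL, which is why a snapshot (or, as in the actual fix, a lookup or a transaction) sidesteps the error.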
** Changed in: neutron
Status: In Progress => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1936959
Title:
[OVN Octavia Provider] Unable to delete Load Balancer with
PENDING_DELETE
Status in neutron:
Fix Released
Bug description:
While attempting to delete a Load Balancer, its provisioning status
moves to PENDING_DELETE and remains that way, preventing the deletion
process from finalizing.
The following tracebacks were found in the logs for that specific lb:
2021-07-17 13:49:26.131 19 INFO octavia.api.v2.controllers.load_balancer [req-b8b3cbd8-3014-4c45-9680-d4c67346ed1c - 1e38d4dfbfb7427787725df69fabc22b - default default] Sending delete Load Balancer 19d8e465-c704-40a9-b1fd-5b0824408e5d to provider ovn
2021-07-17 13:49:26.139 19 DEBUG ovn_octavia_provider.helper [-] Handling request lb_delete with info {'id': '19d8e465-c704-40a9-b1fd-5b0824408e5d', 'cascade': True} request_handler /usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py:303
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper [-] Exception occurred during deletion of loadbalancer: RuntimeError: dictionary changed size during iteration
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper Traceback (most recent call last):
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper File "/usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py", line 907, in lb_delete
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper status = self._lb_delete(loadbalancer, ovn_lb, status)
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper File "/usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py", line 960, in _lb_delete
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper for ls in self._find_lb_in_table(ovn_lb, 'Logical_Switch'):
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper File "/usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py", line 289, in _find_lb_in_table
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper return [item for item in self.ovn_nbdb_api.tables[table].rows.values()
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper File "/usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py", line 289, in <listcomp>
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper return [item for item in self.ovn_nbdb_api.tables[table].rows.values()
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper File "/usr/lib64/python3.6/_collections_abc.py", line 761, in __iter__
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper for key in self._mapping:
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper RuntimeError: dictionary changed size during iteration
2021-07-17 13:49:26.196 19 ERROR ovn_octavia_provider.helper
2021-07-17 13:49:26.446 13 DEBUG octavia.common.keystone [req-267feb7e-2235-43d9-bec8-88ff532b9019 - 1e38d4dfbfb7427787725df69fabc22b - default default] Request path is / and it does not require keystone authentication process_request /usr/lib/python3.6/site-packages/octavia/common/keystone.py:77
2021-07-17 13:49:26.554 19 DEBUG ovn_octavia_provider.helper [-] Updating status to octavia: {'loadbalancers': [{'id': '19d8e465-c704-40a9-b1fd-5b0824408e5d', 'provisioning_status': 'ERROR', 'operating_status': 'ERROR'}], 'listeners': [{'id': '0806594a-4ed7-4889-81fa-6fd8d02b0d80', 'provisioning_status': 'DELETED', 'operating_status': 'OFFLINE'}], 'pools': [{'id': 'b8a98db0-6d2e-4745-b533-d2eb3548d1b9', 'provisioning_status': 'DELETED'}], 'members': [{'id': '08464181-728b-425a-b690-d3eb656f7e0a', 'provisioning_status': 'DELETED'}]} _update_status_to_octavia /usr/lib/python3.6/site-packages/ovn_octavia_provider/helper.py:32
The problem here is that using rows.values() is inherently racy: if
multiple threads are running, this error can eventually occur.
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1936959/+subscriptions