[Bug 1771577] [NEW] InventoryInUse exceptions with ironic virt driver
Public bug reported:
The error can be seen here -- http://logs.openstack.org/39/554439/20/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/e3c2ae3/logs/screen-n-cpu.txt.gz#_May_15_14_51_20_013490 :
May 15 14:51:20.013490 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager [None req-16f0955f-21d4-40b5-9f4f-14f6e2347ad5 service nova] Error updating resources for node 6b051502-a72e-4d72-a48a-23eef70f708f.: InventoryInUse: Inventory for ''CUSTOM_BAREMETAL'' on resource provider '2ec955b6-7aa2-4838-9dd6-fd9b279bff1e' in use.
May 15 14:51:20.013788 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager Traceback (most recent call last):
May 15 14:51:20.014078 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/compute/manager.py", line 7343, in update_available_resource_for_node
May 15 14:51:20.014359 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager rt.update_available_resource(context, nodename)
May 15 14:51:20.014679 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 680, in update_available_resource
May 15 14:51:20.014966 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager self._update_available_resource(context, resources)
May 15 14:51:20.015242 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in inner
May 15 14:51:20.015547 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager return f(*args, **kwargs)
May 15 14:51:20.015846 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 704, in _update_available_resource
May 15 14:51:20.016135 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager self._init_compute_node(context, resources)
May 15 14:51:20.016413 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 561, in _init_compute_node
May 15 14:51:20.016692 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager self._update(context, cn)
May 15 14:51:20.016878 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 896, in _update
May 15 14:51:20.017048 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager inv_data,
May 15 14:51:20.017226 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/scheduler/client/__init__.py", line 68, in set_inventory_for_provider
May 15 14:51:20.017425 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid,
May 15 14:51:20.017651 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
May 15 14:51:20.017924 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
May 15 14:51:20.018196 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/scheduler/client/report.py", line 1056, in set_inventory_for_provider
May 15 14:51:20.018475 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager self._update_inventory(context, rp_uuid, inv_data)
May 15 14:51:20.018701 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/scheduler/client/report.py", line 66, in wrapper
May 15 14:51:20.018875 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager return f(self, *a, **k)
May 15 14:51:20.019092 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/scheduler/client/report.py", line 986, in _update_inventory
May 15 14:51:20.019270 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager if self._update_inventory_attempt(context, rp_uuid, inv_data):
May 15 14:51:20.019460 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager File "/opt/stack/new/nova/nova/scheduler/client/report.py", line 928, in _update_inventory_attempt
May 15 14:51:20.019621 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager resource_provider=rp_uuid,
May 15 14:51:20.019805 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager InventoryInUse: Inventory for ''CUSTOM_BAREMETAL'' on resource provider '2ec955b6-7aa2-4838-9dd6-fd9b279bff1e' in use.
May 15 14:51:20.020018 ubuntu-xenial-inap-mtl01-0004032002 nova-compute[23854]: ERROR nova.compute.manager
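For context on what placement is objecting to: once the driver stops reporting CUSTOM_BAREMETAL for a node, nova PUTs the emptied inventory list to placement, and placement refuses with 409 Conflict while an allocation still consumes that resource class; the report client surfaces that as InventoryInUse. A minimal standalone sketch of that interaction (not nova code; the endpoint, token and generation value are placeholders):

    import requests

    PLACEMENT = 'http://placement.example/placement'  # hypothetical endpoint
    RP = '2ec955b6-7aa2-4838-9dd6-fd9b279bff1e'  # provider from the log above
    HEADERS = {'X-Auth-Token': '<token>',
               'OpenStack-API-Version': 'placement 1.17'}

    # After ironic tears the node down, the virt driver reports no
    # CUSTOM_BAREMETAL inventory, so nova replaces the provider's
    # inventory with an empty set.
    resp = requests.put(
        '%s/resource_providers/%s/inventories' % (PLACEMENT, RP),
        json={'resource_provider_generation': 5,  # placeholder generation
              'inventories': {}},
        headers=HEADERS)

    # While the instance's allocation against CUSTOM_BAREMETAL still
    # exists, placement rejects the update with 409 Conflict, which the
    # report client raises as InventoryInUse.
    print(resp.status_code)  # 409 until the allocation is deleted first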
I think this is happening because we make the compute.manager._update_resource_tracker call before deleting the instance allocation in compute.manager._complete_deletion, which introduces two problems:
1. If _update_resource_tracker has an out-of-date view of the ironic virt driver node cache (because the update_available_resource periodic task ran before instance teardown started in ironic), we delete the allocation without deleting the inventory, leaving the node resources 'free' until the next update_available_resource periodic run.
2. If _update_resource_tracker has an up-to-date view of the node cache, we try to delete the inventory before deleting the allocation, which is what seems to be happening here (see the reordering sketch below).
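A minimal sketch of the reordering this suggests (assumed method names and signatures, not an actual patch): delete the instance's allocation first, then refresh the resource tracker so the inventory update no longer races with the allocation:

    def _complete_deletion(self, context, instance):
        # 1. Drop the consumer's allocation first, so nothing in
        #    placement references the CUSTOM_BAREMETAL inventory any more.
        self.reportclient.delete_allocation_for_instance(instance.uuid)

        # 2. Only then refresh the resource tracker. With an up-to-date
        #    node cache the inventory removal now succeeds instead of
        #    hitting InventoryInUse (problem 2 above); the stale-cache
        #    window of problem 1 still exists, but no 409 is raised.
        self._update_resource_tracker(context, instance)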
** Affects: nova
Importance: Undecided
Assignee: Vladyslav Drok (vdrok)
Status: New
** Changed in: nova
Assignee: (unassigned) => Vladyslav Drok (vdrok)
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771577
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1771577/+subscriptions