yahoo-eng-team team mailing list archive
Message #87915
[Bug 1953359] Re: update_available_resource periodic fails with exception.CPUPinningInvalid if there is incoming post-migrating migration with cpu pinning
*** This bug is a duplicate of bug 1952915 ***
https://bugs.launchpad.net/bugs/1952915
Reviewed: https://review.opendev.org/c/openstack/nova/+/820549
Committed: https://opendev.org/openstack/nova/commit/32c1044d86a8d02712c8e3abdf8b3e4cff234a9c
Submitter: "Zuul (22348)"
Branch: master
commit 32c1044d86a8d02712c8e3abdf8b3e4cff234a9c
Author: Balazs Gibizer <balazs.gibizer@xxxxxxxx>
Date: Mon Dec 6 17:06:51 2021 +0100
[rt] Apply migration context for incoming migrations
There is a race condition between an incoming resize and an
update_available_resource periodic in the resource tracker. The race
window starts when the resize_instance RPC finishes and ends when the
finish_resize compute RPC finally applies the migration context on the
instance.
In the race window, if the update_available_resource periodic is run on
the destination node, then it will see the instance as being tracked on
this host as the instance.node is already pointing to the dest. But the
instance.numa_topology still points to the source host topology as the
migration context is not applied yet. This leads to a CPU pinning error if
the source topology does not fit the dest topology. It also stops the
periodic task and leaves the tracker in an inconsistent state. The
inconsistent state is only cleaned up after the periodic is run outside of
the race window.
This patch applies the migration context temporarily to the specific
instances during the periodic to keep resource accounting correct.
Change-Id: Icaad155e22c9e2d86e464a0deb741c73f0dfb28a
Closes-Bug: #1953359
Closes-Bug: #1952915
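As a rough, self-contained sketch of the idea described in the commit message above (all class, function and variable names here are illustrative assumptions, not Nova's actual objects), the periodic can apply an incoming instance's migration context only for the duration of the accounting step, so the destination topology is used instead of the stale source one:

    # Illustrative sketch only; not the Nova patch and not Nova's objects.
    from contextlib import contextmanager

    class MigrationContext:
        def __init__(self, new_numa_topology):
            self.new_numa_topology = new_numa_topology

    class Instance:
        def __init__(self, numa_topology, migration_context=None):
            self.numa_topology = numa_topology          # still the source topology
            self.migration_context = migration_context  # set by the resize, not applied yet

    @contextmanager
    def temporarily_applied_migration_context(instance):
        """Swap in the destination topology only for the accounting step."""
        if instance.migration_context is None:
            yield instance
            return
        original = instance.numa_topology
        instance.numa_topology = instance.migration_context.new_numa_topology
        try:
            yield instance
        finally:
            instance.numa_topology = original

    def account_numa_usage(free_cpus, pinned):
        """Pin the requested pcpus out of the host's free set (simplified)."""
        requested = set(pinned)
        if not requested <= free_cpus:
            raise RuntimeError(
                "CPU set to pin %s must be a subset of free CPU set %s"
                % (sorted(requested), sorted(free_cpus)))
        free_cpus -= requested

    free = {0, 1}
    inst2 = Instance(numa_topology={0})          # already running on this node, pinned to pcpu 0
    inst1 = Instance(numa_topology={0},          # stale source topology also says pcpu 0
                     migration_context=MigrationContext({1}))  # dest claim pinned it to pcpu 1
    for inst in (inst2, inst1):
        with temporarily_applied_migration_context(inst):
            account_numa_usage(free, inst.numa_topology)
    print("accounting done, remaining free pcpus:", sorted(free))

In this toy run, inst2 pins pcpu 0 and inst1's temporarily applied context pins pcpu 1, so the accounting completes without a conflict; without the context manager, inst1 would ask for pcpu 0 again and fail.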
** Changed in: nova
Status: In Progress => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1953359
Title:
update_available_resource periodic fails with
exception.CPUPinningInvalid if there is incoming post-migrating
migration with cpu pinning
Status in OpenStack Compute (nova):
Fix Released
Status in OpenStack Compute (nova) victoria series:
In Progress
Status in OpenStack Compute (nova) wallaby series:
In Progress
Status in OpenStack Compute (nova) xena series:
New
Bug description:
The update_available_resource() periodic task in the compute service fails
with an exception.CPUPinningInvalid exception (and stops processing the
rest of the instances) if there is an incoming migration (or resize or
evacuation) that is in the post-migrating state (finish_resize has not
executed yet) and the instance has CPU pinning.
Reproduce:
* build a multinode env with dedicated cpus and cpu pinning configured (a sample configuration sketch follows this list)
* configure the update_available_resource periodic to run frequently (just to ease the reproduction of the race), e.g. set [DEFAULT]update_resources_interval = 10
* create inst1 on the first node and inst2 on the second node, both requesting one pinned cpu
* check that inst1 is pinned to the same pcpu id on node1 as inst2 is on node2
* slow down the processing of finish_resize messages in the system to ease the reproduction of the race (e.g. inject a sleep, load rabbit, etc.)
* migrate inst1 to node2
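A hedged sample of the setup above (the values are illustrative; only update_resources_interval comes from this report): a short periodic interval and dedicated host CPUs on both compute nodes, plus a flavor requesting one pinned CPU:

    # nova.conf on both compute nodes (illustrative values)
    [DEFAULT]
    update_resources_interval = 10    # run the update_available_resource periodic every 10s
    [compute]
    cpu_dedicated_set = 0-1           # host CPUs reserved for pinned guests

    # flavor used for inst1 and inst2: one dedicated (pinned) vCPU
    openstack flavor create pinned.tiny --vcpus 1 --ram 512 --disk 1
    openstack flavor set pinned.tiny --property hw:cpu_policy=dedicated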
If you manage to hit the case where the periodic runs on node2
just after the resize_claim of inst1 has finished but the finish_resize
RPC call of inst1 is not yet processed (the migration context is not
applied to the instance and the migration is not in the finished state but
in post-migration), then you will see a CPU pinning conflict. This is
because the resource tracker already tracks the incoming instance [1]
(the host and node are already set in resize_instance [2]) but the
instance does not yet have the migration context applied (as that is only
done in finish_resize [3]), so the instance.numa_topology still points
to the source topology.
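A toy model of the failing check (plain Python, not Nova's code; the numbers mirror the traceback below): node2's only free dedicated pcpu is 1 because inst2 already holds pcpu 0, while inst1's stale source topology still requests pcpu 0:

    # Toy illustration of the pin check that fails; not Nova code.
    free_cpus = {1}    # pcpu 0 on node2 is already pinned by inst2
    requested = {0}    # inst1's source topology; migration context not applied yet
    if not requested <= free_cpus:
        raise RuntimeError(
            "CPU set to pin %s must be a subset of free CPU set %s"
            % (sorted(requested), sorted(free_cpus)))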
Reproduced both in stable/victoria downstream and in latest master in
an upstream devstack.
2021-12-06 15:07:18,013 ERROR [nova.compute.manager] Error updating resources for node compute2.
Traceback (most recent call last):
File "/root/rtox/nova/functional-py38/nova/compute/manager.py", line 10011, in _update_available_resource_for_node
self.rt.update_available_resource(context, nodename,
File "/root/rtox/nova/functional-py38/nova/compute/resource_tracker.py", line 895, in update_available_resource
self._update_available_resource(context, resources, startup=startup)
File "/root/rtox/nova/functional-py38/.tox/functional-py38/lib/python3.8/site-packages/oslo_concurrency/lockutils.py", line 391, in inner
return f(*args, **kwargs)
File "/root/rtox/nova/functional-py38/nova/compute/resource_tracker.py", line 936, in _update_available_resource
instance_by_uuid = self._update_usage_from_instances(
File "/root/rtox/nova/functional-py38/nova/compute/resource_tracker.py", line 1500, in _update_usage_from_instances
self._update_usage_from_instance(context, instance, nodename)
File "/root/rtox/nova/functional-py38/nova/compute/resource_tracker.py", line 1463, in _update_usage_from_instance
self._update_usage(self._get_usage_dict(instance, instance),
File "/root/rtox/nova/functional-py38/nova/compute/resource_tracker.py", line 1268, in _update_usage
cn.numa_topology = hardware.numa_usage_from_instance_numa(
File "/root/rtox/nova/functional-py38/nova/virt/hardware.py", line 2382, in numa_usage_from_instance_numa
new_cell.pin_cpus(pinned_cpus)
File "/root/rtox/nova/functional-py38/nova/objects/numa.py", line 95, in pin_cpus
raise exception.CPUPinningInvalid(requested=list(cpus),
nova.exception.CPUPinningInvalid: CPU set to pin [0] must be a subset of free CPU set [1]
[1] https://github.com/openstack/nova/blob/7670303aabe16d1d7c25e411d7bd413aee7fdcf3/nova/compute/resource_tracker.py#L928-L929
[2] https://github.com/openstack/nova/blob/7670303aabe16d1d7c25e411d7bd413aee7fdcf3/nova/compute/manager.py#L5639-L5653
[3] https://github.com/openstack/nova/blob/7670303aabe16d1d7c25e411d7bd413aee7fdcf3/nova/compute/manager.py#L5780
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1953359/+subscriptions