
yahoo-eng-team team mailing list archive

[Bug 1953359] [NEW] update_available_resource periodic fails with exception.CPUPinningInvalid if there is an incoming post-migrating migration with CPU pinning

 

Public bug reported:

The update_available_resource() periodic task in the compute service fails
with an exception.CPUPinningInvalid exception (and stops processing the rest
of the instances) if there is an incoming migration (or resize or
evacuation) that is in the post-migrating state (finish_resize has not yet
been executed) and the instance has CPU pinning.

Reproduce:
* build a multinode env with dedicated CPUs and CPU pinning configured
* configure update_available_resource to run frequently, just to ease reproduction of the race (e.g. set [DEFAULT]update_resources_interval = 10; an example config excerpt follows this list)
* create inst1 on the first node and inst2 on the second node, both requesting one pinned CPU
* check that inst1 is pinned to the same pCPU id on node1 as inst2 is on node2
* slow down the processing of the finish_resize messages in the system to ease reproduction of the race (e.g. inject a sleep, load rabbit, etc.)
* migrate inst1 to node2
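
For illustration only, a minimal config along these lines gives such an
environment; the cpu_dedicated_set value is just an example, and the
instances additionally need a flavor with the hw:cpu_policy=dedicated extra
spec so that they request a pinned CPU:

  [DEFAULT]
  # run the update_available_resource periodic every 10 seconds to widen the race window
  update_resources_interval = 10

  [compute]
  # dedicated (pinnable) host CPUs; the exact set is an example, adjust to the host
  cpu_dedicated_set = 0-3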

If you manage to hit the case where the periodic runs on node2 just after
the resize_claim of inst1 has finished, but the finish_resize RPC call for
inst1 has not yet been processed (the migration context is not applied to
the instance and the migration is not in the finished state but in post-
migrating), then you will see a CPU pinning conflict. This is because the
resource tracker already tracks the incoming instance [1] (the host and
node are already set in resize_instance [2]) but the instance does not yet
have the migration context applied (that only happens in finish_resize [3]),
so instance.numa_topology still points to the source topology.
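
The conflict surfaces in the pin_cpus() subset check shown in the traceback
below. A minimal, standalone sketch of that check (not the actual nova
code), with the pCPU ids taken from the error message:

  class CPUPinningInvalid(Exception):
      pass

  def pin_cpus(requested, free):
      # simplified version of the check nova does when accounting pinned CPUs
      if not requested.issubset(free):
          raise CPUPinningInvalid(
              'CPU set to pin %s must be a subset of free CPU set %s'
              % (sorted(requested), sorted(free)))
      return free - requested

  free_pcpus = {0, 1}                      # node2's dedicated pCPUs (example values)
  free_pcpus = pin_cpus({0}, free_pcpus)   # inst2 already pinned to pCPU 0 on node2
  pin_cpus({0}, free_pcpus)                # inst1's stale source pinning -> CPUPinningInvalid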

Reproduced both in stable/victoria downstream and in latest master in an
upstream devstack.


2021-12-06 15:07:18,013 ERROR [nova.compute.manager] Error updating resources for node compute2.
Traceback (most recent call last):
  File "/root/rtox/nova/functional-py38/nova/compute/manager.py", line 10011, in _update_available_resource_for_node
    self.rt.update_available_resource(context, nodename,
  File "/root/rtox/nova/functional-py38/nova/compute/resource_tracker.py", line 895, in update_available_resource
    self._update_available_resource(context, resources, startup=startup)
  File "/root/rtox/nova/functional-py38/.tox/functional-py38/lib/python3.8/site-packages/oslo_concurrency/lockutils.py", line 391, in inner
    return f(*args, **kwargs)
  File "/root/rtox/nova/functional-py38/nova/compute/resource_tracker.py", line 936, in _update_available_resource
    instance_by_uuid = self._update_usage_from_instances(
  File "/root/rtox/nova/functional-py38/nova/compute/resource_tracker.py", line 1500, in _update_usage_from_instances
    self._update_usage_from_instance(context, instance, nodename)
  File "/root/rtox/nova/functional-py38/nova/compute/resource_tracker.py", line 1463, in _update_usage_from_instance
    self._update_usage(self._get_usage_dict(instance, instance),
  File "/root/rtox/nova/functional-py38/nova/compute/resource_tracker.py", line 1268, in _update_usage
    cn.numa_topology = hardware.numa_usage_from_instance_numa(
  File "/root/rtox/nova/functional-py38/nova/virt/hardware.py", line 2382, in numa_usage_from_instance_numa
    new_cell.pin_cpus(pinned_cpus)
  File "/root/rtox/nova/functional-py38/nova/objects/numa.py", line 95, in pin_cpus
    raise exception.CPUPinningInvalid(requested=list(cpus),
nova.exception.CPUPinningInvalid: CPU set to pin [0] must be a subset of free CPU set [1]

[1] https://github.com/openstack/nova/blob/7670303aabe16d1d7c25e411d7bd413aee7fdcf3/nova/compute/resource_tracker.py#L928-L929
[2] https://github.com/openstack/nova/blob/7670303aabe16d1d7c25e411d7bd413aee7fdcf3/nova/compute/manager.py#L5639-L5653
[3] https://github.com/openstack/nova/blob/7670303aabe16d1d7c25e411d7bd413aee7fdcf3/nova/compute/manager.py#L5780

** Affects: nova
     Importance: Undecided
         Status: New


** Tags: compute numa resize resource-tracker

** Tags added: numa

** Tags added: compute resource-tracker

** Tags added: resize

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1953359

Title:
  update_available_resource periodic fails with
  exception.CPUPinningInvalid if there is an incoming post-migrating
  migration with CPU pinning

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1953359/+subscriptions


