yahoo-eng-team team mailing list archive

[Bug 1836945] [NEW] Deleting a CPU-pinned instance after changing vcpu_pin_set causes it to go to ERROR

Public bug reported:

Description
===========

If you boot an instance with pinned CPUs (for example by using the
'dedicated' CPU policy), change the vcpu_pin_set option on its compute
host, then attempt to delete the instance, it will ERROR out instead of
deleting successfully. Subsequent delete attempts work.

Steps to reproduce
==================

1. Configure vcpu_pin_set in nova-cpu.conf:
   [DEFAULT]
   vcpu_pin_set = 0,1

2. Create a flavor with a 'dedicated' CPU policy:
   openstack flavor create --ram 256 --disk 1 --vcpus 2 dedicated
   openstack flavor set --property hw:cpu_policy=dedicated dedicated

3. Boot a VM with that flavor:
   nova boot --nic none \
      --flavor <dedicated UUID> \
      --image 8288bd81-eb26-419a-8d4e-4481da137fd6 test

4. Change vcpu_pin_set:
   [DEFAULT]
   vcpu_pin_set = 3,4

5. Delete the instance:
   nova delete test

Expected result
===============

The instance deletes successfully.

Actual result
=============

The instance goes into the ERROR state.

Environment
===========

Nova master branch, deployed with DevStack.

Logs & Configs
==============

Traceback from nova-compute:

Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager Traceback (most recent call last):
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/manager.py", line 8304, in _update_available_resource_for_node
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager     startup=startup)
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 747, in update_available_resource
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager     self._update_available_resource(context, resources, startup=startup)
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 328, in inner
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager     return f(*args, **kwargs)
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 788, in _update_available_resource
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager     context, instances, nodename)
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 1327, in _update_usage_from_instances
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager     self._update_usage_from_instance(context, instance, nodename)
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 1291, in _update_usage_from_instance
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager     nodename, sign=sign)
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 1107, in _update_usage
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager     cn, usage, free)
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/virt/hardware.py", line 2073, in get_host_numa_usage_from_instance
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager     host_numa_topology, instance_numa_topology, free=free))
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/virt/hardware.py", line 1929, in numa_usage_from_instances
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager     newcell.pin_cpus(pinned_cpus)
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/objects/numa.py", line 98, in pin_cpus
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager     cpuset=list(self.cpuset))
Jul 17 14:28:49 devstack-numa-allinone nova-compute[30309]: ERROR nova.compute.manager CPUPinningUnknown: CPU set to pin [0, 1] must be a subset of known CPU set []
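
The failure mode can be illustrated with a minimal sketch (this is not
nova's actual code; the class and message are simplified stand-ins for
the check in nova/objects/numa.py). After vcpu_pin_set is changed, the
host NUMA cell is rebuilt from the new config, so the instance's
previously pinned CPUs are no longer in the cell's known cpuset and the
resource tracker's usage update raises CPUPinningUnknown:

```python
# Illustrative sketch of the subset check that fails (simplified from
# nova/objects/numa.py; names and message format are approximate).

class CPUPinningUnknown(Exception):
    pass


class NUMACell(object):
    def __init__(self, cpuset):
        # CPUs this host cell knows about, derived from vcpu_pin_set.
        self.cpuset = set(cpuset)
        self.pinned_cpus = set()

    def pin_cpus(self, cpus):
        cpus = set(cpus)
        # An instance's pinned CPUs must be a subset of the known CPUs.
        if not cpus <= self.cpuset:
            raise CPUPinningUnknown(
                'CPU set to pin %s must be a subset of known CPU set %s'
                % (sorted(cpus), sorted(self.cpuset)))
        self.pinned_cpus |= cpus


# Before the config change (vcpu_pin_set = 0,1): pinning succeeds.
cell = NUMACell(cpuset=[0, 1])
cell.pin_cpus([0, 1])

# After vcpu_pin_set = 3,4 the cell no longer knows about CPUs 0 and 1
# (in the reported case the cell's cpuset even ends up empty), so
# re-accounting the instance's old pins during delete raises.
cell = NUMACell(cpuset=[3, 4])
try:
    cell.pin_cpus([0, 1])
except CPUPinningUnknown as exc:
    print(exc)
```

This matches the traceback above: the delete path re-runs the usage
accounting against the freshly rebuilt host topology, which no longer
contains the instance's pinned CPUs, so the first delete attempt errors
out while later attempts (after the usage has been dropped) succeed.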

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1836945

Title:
  Deleting a CPU-pinned instance after changing vcpu_pin_set causes it
  to go to ERROR

Status in OpenStack Compute (nova):
  New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1836945/+subscriptions