yahoo-eng-team mailing list archive: Message #46283
[Bug 1545675] [NEW] Shelve/unshelve fails for pinned instance
Public bug reported:
The shelve/unshelve operation appears not to work for an instance with pinned CPUs: a CPUPinningInvalid exception is raised when one attempts it.
CPUPinningInvalid: Cannot pin/unpin cpus [1] from the following pinned set [0, 25]
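To make the exception concrete, here is a minimal, hypothetical model of the pinned-set bookkeeping involved (a simplified sketch, not the actual code in nova/objects/numa.py): a NUMA cell tracks a set of pinned CPUs, and unpinning any CPU that is not currently in that set is rejected.

```python
class CPUPinningInvalid(Exception):
    pass


class Cell:
    """Toy model of a NUMA cell's pinned-CPU accounting (assumed shape)."""

    def __init__(self):
        self.pinned_cpus = set()

    def pin_cpus(self, cpus):
        # Pinning a CPU that is already pinned is invalid.
        if self.pinned_cpus & cpus:
            raise CPUPinningInvalid(
                "Cannot pin/unpin cpus %s from the following pinned set %s"
                % (sorted(cpus), sorted(self.pinned_cpus)))
        self.pinned_cpus |= cpus

    def unpin_cpus(self, cpus):
        # Unpinning a CPU that is not currently pinned is also invalid;
        # this is the branch the shelve/unshelve flow trips over.
        if cpus - self.pinned_cpus:
            raise CPUPinningInvalid(
                "Cannot pin/unpin cpus %s from the following pinned set %s"
                % (sorted(cpus), sorted(self.pinned_cpus)))
        self.pinned_cpus -= cpus


cell = Cell()
cell.pin_cpus({0, 25})
try:
    cell.unpin_cpus({1})  # CPU 1 was never pinned on this cell
except CPUPinningInvalid as exc:
    print(exc)  # mirrors the message in the tracebacks below
```

Under this model, the exception means the tracker asked a cell to release CPUs it does not believe are pinned there.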
---
# Steps
Testing was conducted on a host containing a single-node, Fedora 23-based
(kernel 4.3.5-300.fc23.x86_64) OpenStack deployment (built with DevStack).
Nova commit '12d224e' was used. The Tempest tests (commit 'e913b82') were
run using modified flavors, as seen below:
nova flavor-create m1.small_nfv 420 2048 0 2
nova flavor-create m1.medium_nfv 840 4096 0 4
nova flavor-key 420 set "hw:numa_nodes=2"
nova flavor-key 840 set "hw:numa_nodes=2"
nova flavor-key 420 set "hw:cpu_policy=dedicated"
nova flavor-key 840 set "hw:cpu_policy=dedicated"
cd $TEMPEST_DIR
cp etc/tempest.conf etc/tempest.conf.orig
sed -i "s/flavor_ref = .*/flavor_ref = 420/" etc/tempest.conf
sed -i "s/flavor_ref_alt = .*/flavor_ref_alt = 840/" etc/tempest.conf
The following tests were run:
* tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
* tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_shelve_unshelve_server
Like so:
./run_tempest.sh -- tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
# Expected Result
The tests should pass.
# Actual Result
The tests fail, and both produce similar error messages; they are given below.
{0}
tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
[36.713046s] ... FAILED
Setting instance vm_state to ERROR
Traceback (most recent call last):
File "/opt/stack/nova/nova/compute/manager.py", line 2474, in do_terminate_instance
self._delete_instance(context, instance, bdms, quotas)
File "/opt/stack/nova/nova/hooks.py", line 149, in inner
rv = f(*args, **kwargs)
File "/opt/stack/nova/nova/compute/manager.py", line 2437, in _delete_instance
quotas.rollback()
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
self.force_reraise()
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/compute/manager.py", line 2432, in _delete_instance
self._update_resource_tracker(context, instance)
File "/opt/stack/nova/nova/compute/manager.py", line 751, in _update_resource_tracker
rt.update_usage(context, instance)
File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in inner
return f(*args, **kwargs)
File "/opt/stack/nova/nova/compute/resource_tracker.py", line 376, in update_usage
self._update_usage_from_instance(context, instance)
File "/opt/stack/nova/nova/compute/resource_tracker.py", line 863, in _update_usage_from_instance
self._update_usage(instance, sign=sign)
File "/opt/stack/nova/nova/compute/resource_tracker.py", line 705, in _update_usage
self.compute_node, usage, free)
File "/opt/stack/nova/nova/virt/hardware.py", line 1441, in get_host_numa_usage_from_instance
host_numa_topology, instance_numa_topology, free=free))
File "/opt/stack/nova/nova/virt/hardware.py", line 1307, in numa_usage_from_instances
newcell.unpin_cpus(pinned_cpus)
File "/opt/stack/nova/nova/objects/numa.py", line 93, in unpin_cpus
pinned=list(self.pinned_cpus))
CPUPinningInvalid: Cannot pin/unpin cpus [0] from the following pinned set [1]
{0}
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_shelve_unshelve_server
[29.131132s] ... ok
Traceback (most recent call last):
File "/opt/stack/nova/nova/compute/manager.py", line 2474, in do_terminate_instance
self._delete_instance(context, instance, bdms, quotas)
File "/opt/stack/nova/nova/hooks.py", line 149, in inner
rv = f(*args, **kwargs)
File "/opt/stack/nova/nova/compute/manager.py", line 2437, in _delete_instance
quotas.rollback()
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
self.force_reraise()
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/compute/manager.py", line 2432, in _delete_instance
self._update_resource_tracker(context, instance)
File "/opt/stack/nova/nova/compute/manager.py", line 751, in _update_resource_tracker
rt.update_usage(context, instance)
File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in inner
return f(*args, **kwargs)
File "/opt/stack/nova/nova/compute/resource_tracker.py", line 376, in update_usage
self._update_usage_from_instance(context, instance)
File "/opt/stack/nova/nova/compute/resource_tracker.py", line 863, in _update_usage_from_instance
self._update_usage(instance, sign=sign)
File "/opt/stack/nova/nova/compute/resource_tracker.py", line 705, in _update_usage
self.compute_node, usage, free)
File "/opt/stack/nova/nova/virt/hardware.py", line 1441, in get_host_numa_usage_from_instance
host_numa_topology, instance_numa_topology, free=free))
File "/opt/stack/nova/nova/virt/hardware.py", line 1307, in numa_usage_from_instances
newcell.unpin_cpus(pinned_cpus)
File "/opt/stack/nova/nova/objects/numa.py", line 93, in unpin_cpus
pinned=list(self.pinned_cpus))
CPUPinningInvalid: Cannot pin/unpin cpus [1] from the following pinned set [0, 25]
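In both tracebacks, the set of CPUs the instance claims as pinned is not a subset of what the host cell currently tracks, which is exactly the condition unpin_cpus rejects. The arithmetic, using the values from the two error messages (the values only; this is not Nova code):

```python
# First failure: the cell tracks CPU 1 as pinned, but the resource
# tracker asks it to release CPU 0.
host_cell_pinned = {1}
instance_pinned = {0}
print(instance_pinned.issubset(host_cell_pinned))  # False -> CPUPinningInvalid

# Second failure: the cell tracks {0, 25}, the instance claims {1}.
host_cell_pinned = {0, 25}
instance_pinned = {1}
print(instance_pinned.issubset(host_cell_pinned))  # False -> CPUPinningInvalid
```

This suggests the instance's pinning information and the host cell's accounting have diverged somewhere across the shelve/unshelve cycle, though the tracebacks alone do not show where.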
** Affects: nova
Importance: Undecided
Assignee: Stephen Finucane (sfinucan)
Status: In Progress
** Tags: libvirt numa
** Changed in: nova
Status: New => In Progress
** Changed in: nova
Assignee: (unassigned) => Stephen Finucane (sfinucan)
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1545675
Title:
Shelve/unshelve fails for pinned instance
Status in OpenStack Compute (nova):
In Progress
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1545675/+subscriptions