yahoo-eng-team mailing list archive
Message #58121
[Bug 1636338] [NEW] NUMA topology not calculated for instance with numa_topology after upgrading to Mitaka
Public bug reported:
This is related to bug https://bugs.launchpad.net/nova/+bug/1596119.
After upgrading to Mitaka with the above patch applied, a new bug surfaced: InstanceNUMACell objects belonging to instances created before the upgrade have cpu_policy set to None, which causes the cpu_pinning_requested property to always return False:
https://github.com/openstack/nova/blob/master/nova/objects/instance_numa_topology.py#L112
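For illustration, here is a minimal, self-contained sketch of the behaviour at that line (a hypothetical simplification, not the actual nova code; only the relevant field and property are shown):

    class CPUAllocationPolicy(object):
        DEDICATED = 'dedicated'
        SHARED = 'shared'

    class InstanceNUMACell(object):
        def __init__(self, cpu_policy=None):
            self.cpu_policy = cpu_policy

        @property
        def cpu_pinning_requested(self):
            # With cpu_policy still None (as on instances created
            # before the upgrade), this comparison is always False.
            return self.cpu_policy == CPUAllocationPolicy.DEDICATED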
This then tricks compute hosts carrying old NUMA instances into
thinking that nothing is pinned, so new instances with cpu_policy set
to CPUAllocationPolicy.DEDICATED can potentially be scheduled onto the
same NUMA cell.
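Using the sketch above, the failure mode is easy to demonstrate: a cell loaded for a pre-upgrade instance reports that no pinning was requested, so its pinned CPUs are invisible to the host's accounting, while a cell for a newly booted dedicated instance reports True:

    # Cell as loaded for a pre-Mitaka instance: cpu_policy was never set.
    legacy_cell = InstanceNUMACell()
    print(legacy_cell.cpu_pinning_requested)   # False -- pinning ignored

    # Cell for a new instance booted with hw:cpu_policy=dedicated.
    new_cell = InstanceNUMACell(cpu_policy=CPUAllocationPolicy.DEDICATED)
    print(new_cell.cpu_pinning_requested)      # True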
** Affects: nova
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1636338