yahoo-eng-team team mailing list archive
[Bug 1549984] [NEW] PCI devices claimed on compute node during _claim_test()
Public bug reported:
The nova.compute.claims.Claim object is used to test whether a set of
requested resources can be satisfied by the compute node. In the
constructor of the Claim object, the Claim._claim_test() method is
called:
    def __init__(self, context, instance, tracker, resources, overhead=None,
                 limits=None):
        super(Claim, self).__init__()
        <snip>
        # Check claim at constructor to avoid mess code
        # Raise exception ComputeResourcesUnavailable if claim failed
        self._claim_test(resources, limits)
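Because the check runs in the constructor, a failed claim surfaces as
an exception before the caller ever holds a Claim object. A minimal
sketch of the caller's perspective (the try/except scaffolding is
illustrative, not the exact compute-manager code; resource_tracker and
reschedule are stand-in names):

    from nova import exception

    try:
        # ResourceTracker.instance_claim() constructs a Claim, so
        # _claim_test() runs -- and may raise -- inside this call.
        claim = resource_tracker.instance_claim(context, instance, limits)
    except exception.ComputeResourcesUnavailable:
        # The build is aborted and retried on another host -- but, as
        # shown below, _test_pci() has already consumed PCI devices.
        reschedule(instance)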
If we take a look at _claim_test(), we see pretty clearly that resources
are NOT supposed to be actually claimed -- instead, the method should
only *check* to see if the request can be fulfilled:
    def _claim_test(self, resources, limits=None):
        """Test if this claim can be satisfied given available resources and
        optional oversubscription limits

        This should be called before the compute node actually consumes the
        resources required to execute the claim.

        :param resources: available local compute node resources
        :returns: Return true if resources are available to claim.
        """
        <snip>
        reasons = [self._test_memory(resources, memory_mb_limit),
                   self._test_disk(resources, disk_gb_limit),
                   self._test_vcpus(resources, vcpus_limit),
                   self._test_numa_topology(resources, numa_topology_limit),
                   self._test_pci()]
        reasons = reasons + self._test_ext_resources(limits)
        reasons = [r for r in reasons if r is not None]
        if len(reasons) > 0:
            raise exception.ComputeResourcesUnavailable(reason=
                    "; ".join(reasons))
Unfortunately, the PCI devices are *actually* claimed in the _test_pci()
method:
    def _test_pci(self):
        pci_requests = objects.InstancePCIRequests.get_by_instance_uuid(
            self.context, self.instance.uuid)
        if pci_requests.requests:
            devs = self.tracker.pci_tracker.claim_instance(self.context,
                                                           self.instance)
            if not devs:
                return _('Claim pci failed.')
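To see why a consuming "test" is dangerous, consider a self-contained
toy model (runnable standalone; ToyPciPool is purely illustrative and
not a nova class):

    class ToyPciPool(object):
        """Stand-in for the tracker's pool of free PCI devices."""
        def __init__(self, count):
            self.free = count

        def claim(self):
            # Mirrors claim_instance(): "testing" by mutating state.
            if self.free > 0:
                self.free -= 1
                return True
            return False

    pool = ToyPciPool(2)
    pool.claim()              # _test_pci() passes by consuming a device...
    memory_test_failed = True # ...then another _test_*() check fails, so
                              # the claim raises and the boot is retried --
    assert pool.free == 1     # -- yet the device is still gone.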
What this means is that if an instance is launched on a compute node
whose PCI devices can satisfy the instance's PCI requests, but some
other resource check fails -- say, there is not enough available RAM on
the node -- the Claim will raise ComputeResourcesUnavailable, which
triggers a Retry operation back to the scheduler. By that point,
however, the PCI devices will already have been marked as claimed by
that instance in the PCI device tracker:
    devs = self.tracker.pci_tracker.claim_instance(self.context,
                                                   self.instance)
The above code marks one or more PCI devices on the compute host as
claimed for the instance, which introduces inconsistent state into the
system. Making things worse, the nova.pci.manager.PciDevTracker object
uses the nova.pci.stats.PciDeviceStats object to track consumed
quantities in "pools" of the PCI device types, so both the stats
aggregation AND the PciDevTracker.pci_devs PciDeviceList object have
their state changed improperly.
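One way to make _test_pci() side-effect free would be to ask the stats
object whether the requests *could* be satisfied, without consuming
anything. A sketch of such a fix, assuming the tracker's
PciDeviceStats object exposes its read-only support_requests() check
(it tests the requests against the device pools without claiming):

    def _test_pci(self):
        pci_requests = objects.InstancePCIRequests.get_by_instance_uuid(
            self.context, self.instance.uuid)
        if pci_requests.requests:
            # Check-only: support_requests() inspects the pools and
            # returns a boolean; no device is marked as claimed here.
            stats = self.tracker.pci_tracker.stats
            if not stats.support_requests(pci_requests.requests):
                return _('Claim pci failed.')

The actual consumption would then happen exactly once, in the claim
step that runs after all of the _test_*() checks have passed.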
** Affects: nova
Importance: Undecided
Status: New
** Tags: pci