[Bug 2088831] [NEW] Server create with hw:mem_encryption fails with unified limits quotas
Public bug reported:
I discovered this issue while working on adding flavor scanning to
'nova-manage limits migrate_to_unified_limits' [1].
When unified limits is enabled ([quota]driver =
nova.quota.UnifiedLimitsDriver), the quota checking code calls
nova.scheduler.utils.resources_for_limits(flavor, is_bfv) to obtain a
list of resource classes for which quota limits need to be enforced.
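For context, the driver referenced above is enabled with a nova.conf stanza like the one below; a working deployment typically also needs the [oslo_limit]/Keystone settings, which are omitted here:

    [quota]
    driver = nova.quota.UnifiedLimitsDriver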
This check, however, fails: the following exception is raised and the
API returns a 400 status code:

  {"badRequest": {"code": 400, "message": "Memory encryption requested by
  hw:mem_encryption extra spec in zqdxonddgkblwxarzpjj flavor but image None
  doesn't have 'hw_firmware_type' property set to 'uefi' or volume-backed
  instance was requested"}}
This is due to the reuse of the
nova.virt.hardware.get_mem_encryption_constraint(flavor, image) method
in nova.scheduler.utils. Whenever the image metadata is not properly
populated in the request spec passed to
ResourceRequest.from_request_spec(request_spec),
get_mem_encryption_constraint(flavor, image) raises a
FlavorImageConflict exception and the request fails.
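A rough, self-contained sketch of what the conflicting check amounts to
on this code path (illustration only, not nova's actual implementation;
the helper name and dict-based arguments are made up for clarity):

    class FlavorImageConflict(Exception):
        """Stand-in for nova.exception.FlavorImageConflict (illustration only)."""

    def mem_encryption_constraint_sketch(flavor_extra_specs, image_props):
        """Roughly what the check amounts to for this code path.

        flavor_extra_specs: dict of flavor extra specs
        image_props: dict of image properties, or None when the RequestSpec
                     carries no image metadata (the case hit by this bug)
        """
        if flavor_extra_specs.get('hw:mem_encryption', '').lower() not in ('1', 'true', 'yes'):
            return False
        # The real check also rejects volume-backed instances; the key point
        # here is that a RequestSpec built only from (flavor, is_bfv) has no
        # image metadata, so the 'uefi' requirement can never be satisfied.
        if not image_props or image_props.get('hw_firmware_type') != 'uefi':
            raise FlavorImageConflict(
                "Memory encryption requested by hw:mem_encryption extra spec "
                "but image doesn't have 'hw_firmware_type' set to 'uefi'")
        return True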
Example from nova/scheduler/utils.py [2] where a "fake" RequestSpec is
used:
    def _get_resources(flavor, is_bfv):
        # create a fake RequestSpec as a wrapper to the caller
        req_spec = objects.RequestSpec(flavor=flavor, is_bfv=is_bfv)
        # TODO(efried): This method is currently only used from places that
        # assume the compute node is the only resource provider. So for now, we
        # just merge together all the resources specified in the flavor and pass
        # them along. This will need to be adjusted when nested and/or shared RPs
        # are in play.
        res_req = ResourceRequest.from_request_spec(req_spec)
        return res_req.merged_resources()
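The failure can therefore be reproduced without going through the API at
all, roughly as follows in a nova development environment (the Flavor
field values below are illustrative; resources_for_limits() is the same
entry point the unified limits quota code calls):

    # Rough reproduction sketch -- field values are illustrative.
    from nova import objects
    from nova.scheduler import utils as scheduler_utils

    objects.register_all()

    flavor = objects.Flavor(
        vcpus=2, memory_mb=2048, root_gb=20, ephemeral_gb=0, swap=0,
        extra_specs={'hw:mem_encryption': 'true'})

    # Raises FlavorImageConflict because the fake RequestSpec built inside
    # _get_resources() carries no image metadata.
    scheduler_utils.resources_for_limits(flavor, is_bfv=False)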
This bug is similar to https://bugs.launchpad.net/nova/+bug/2040449 and
https://bugs.launchpad.net/nova/+bug/2007697.
[1] https://review.opendev.org/c/openstack/nova/+/924110
[2] https://github.com/openstack/nova/blob/1acaf899a6964484e5b5be4337618ebbe6ca8dbb/nova/scheduler/utils.py#L653-L664
** Affects: nova
Importance: Undecided
Status: New
** Tags: quotas
--
https://bugs.launchpad.net/bugs/2088831