[Bug 1882804] Re: RFE: allow replacing the QoS policy of bound port
** Changed in: neutron
Status: Confirmed => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1882804
Title:
RFE: allow replacing the QoS policy of bound port
Status in neutron:
Fix Released
Bug description:
Problem
=======
Neutron and Nova support creating servers with a port that has a QoS
minimum bandwidth policy rule. Such a server create request results in a
bandwidth resource allocation in Placement, which ensures that the
server is placed on a compute host where enough bandwidth is available.
However, changing the minimum bandwidth guarantee of a bound port is not
supported yet.
The end user can update a bound port to point to a different QoS policy,
and Neutron reflects the change in the resource_request of the port, but
the bandwidth allocation in Placement is not updated.
Steps to reproduce
==================
# Set up net, subnet, QoS policies with different min bw rules
# and a port with the first policy (qp0)
#
openstack network create net0 \
  --provider-network-type vlan \
  --provider-physical-network physnet0 \
  --provider-segment 100
openstack subnet create subnet0 \
  --network net0 \
  --subnet-range 10.0.4.0/24
openstack network qos policy create qp0
openstack network qos rule create qp0 \
  --type minimum-bandwidth \
  --min-kbps 1000 \
  --egress
openstack network qos rule create qp0 \
  --type minimum-bandwidth \
  --min-kbps 1000 \
  --ingress
openstack port create port-normal-qos \
  --network net0 \
  --vnic-type normal \
  --qos-policy qp0
openstack network qos policy create qp1
openstack network qos rule create qp1 \
  --type minimum-bandwidth \
  --min-kbps 2000 \
  --egress
openstack network qos rule create qp1 \
  --type minimum-bandwidth \
  --min-kbps 2000 \
  --ingress
# Create a nova server with the port and check the resource_request
# of the port and the resulting allocation in placement
#
openstack --os-compute-api-version 2.72 \
  server create vm1 \
  --flavor c1 \
  --image cirros-0.4.0-x86_64-disk \
  --nic port-id=port-normal-qos \
  --wait
openstack port show port-normal-qos \
  -f table \
  -c binding_profile -c resource_request -c status -c device_owner
openstack --os-placement-api-version 1.30 \
  resource provider usage show 1110cf59-cabf-526c-bacc-08baabbac692
# Change the QoS policy from qp0 to qp1 on the bound port
#
openstack port set port-normal-qos \
  --qos-policy qp1
# The resource request of the port is updated according to qp1
#
openstack port show port-normal-qos \
  -f table \
  -c binding_profile -c resource_request -c status \
  -c device_owner -c device_id
# But the resource allocation in placement is not changed
# according to qp1
#
openstack --os-placement-api-version 1.30 \
  resource provider usage show 1110cf59-cabf-526c-bacc-08baabbac692
Proposed Solution
=================
This use case was discussed in the Neutron-Nova cross-project session
of the Victoria PTG [1]. The outcome there was to try to implement the
Placement update on the Neutron side. This could be achieved by the
following high-level sequence:
1) Neutron receives the PUT /ports/{port_id} API request for a bound
port where the qos_policy_id is requested to be changed
2) Neutron calculates the difference in the resource_request of the
port due to the requested policy change. If there is no change then the
port update continues as today. If there is a change then
3) Neutron calls the Placement GET /allocations/{consumer_uuid} API [2],
where the consumer_uuid is the device_id of the port. In the Placement
response Neutron updates the requested resources under the
"allocations".{rp_uuid} key, where rp_uuid is the value of the
binding:profile["allocation"] key of the port. The resources
subdictionary is updated by applying the resource request difference
calculated in step 2 (see the sketch after this list). The rest of the
structure is left unchanged, including the resource provider generation
keys and the consumer_generation key.
4) Neutron calls the Placement PUT /allocations/{consumer_uuid} API [3]
with the updated allocation structure.
4a) If Placement returns success (HTTP 204) then the port update
continues as today.
4b) If Placement returns HTTP 409 with error code
"placement.concurrent_update" then Neutron needs to repeat steps 3 and
4 (the second sketch, after the notes below, shows this retry).
4c) If Placement returns HTTP 409 with any other error code then there
is not enough bandwidth resource available on the device and therefore
the QoS policy change should be rejected.
Notes:
* Use Placement API microversion 1.28 or higher, as from that version
the GET response and the PUT request have a symmetric structure.
* Ports that are not bound can be skipped as they have no resource
allocation in Placement.
* If the Nova server is deleted during the QoS policy update then
Neutron might query Placement with a non-existing consumer_uuid in step
3. Placement returns an empty allocation in this case, so Neutron needs
to fail in step 3 when it tries to modify the received allocation
locally, because Placement would blindly accept PUT-ing back a new
allocation in step 4 without any error, resulting in a resource leak.
* A single port only allocates from one resource provider, but multiple
ports bound to the same Nova server might allocate from the same
resource provider. So it is important to apply the resource request
difference in step 3 instead of copying the new resource request of the
port into the allocation.
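Putting the pieces together, here is an illustrative sketch of the
whole step 3/step 4 flow against the raw Placement REST API, including
the retry from 4b. This is not Neutron's actual implementation; the
endpoint, the token handling and the apply_diff_to_allocation helper
above are assumptions:

import requests

PLACEMENT_URL = 'http://placement.example:8778'  # hypothetical endpoint
HEADERS = {
    'X-Auth-Token': 'ADMIN_TOKEN',               # hypothetical token
    # Microversion 1.28+ gives the symmetric GET/PUT allocation format.
    'OpenStack-API-Version': 'placement 1.28',
}

def update_allocation_for_qos_change(consumer_uuid, rp_uuid, diff):
    while True:
        # Step 3: read the current allocations of the consumer
        # (consumer_uuid is the device_id of the port).
        resp = requests.get(
            '%s/allocations/%s' % (PLACEMENT_URL, consumer_uuid),
            headers=HEADERS)
        resp.raise_for_status()
        body = apply_diff_to_allocation(resp.json(), rp_uuid, diff)

        # Step 4: PUT the structure back, changed only in the resources
        # of the port's resource provider.
        resp = requests.put(
            '%s/allocations/%s' % (PLACEMENT_URL, consumer_uuid),
            headers=HEADERS, json=body)
        if resp.status_code == 409:
            codes = [e.get('code') for e in resp.json().get('errors', [])]
            if 'placement.concurrent_update' in codes:
                continue  # 4b) concurrent update: re-read and retry
            # 4c) not enough bandwidth on the device: reject the change
            raise RuntimeError('QoS policy change rejected: %s' % resp.text)
        resp.raise_for_status()  # 4a) success: port update continues
        return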
[1] https://etherpad.opendev.org/p/nova-victoria-ptg
[2] https://docs.openstack.org/api-ref/placement/?expanded=list-allocations-detail#list-allocations
[3] https://docs.openstack.org/api-ref/placement/?expanded=list-resource-provider-allocations-detail#list-resource-provider-allocations
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1882804/+subscriptions