[Bug 1845161] Re: Neutron QoS Policy lost on interfaces
Reviewed: https://review.opendev.org/690098
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=50ffa5173db03b0fd0fe7264e4b2a905753f86ec
Submitter: Zuul
Branch: master
commit 50ffa5173db03b0fd0fe7264e4b2a905753f86ec
Author: Rodolfo Alonso Hernandez <ralonsoh@xxxxxxxxxx>
Date: Tue Oct 22 14:21:08 2019 +0000
[OVS] Handle added/removed ports in the same polling iteration
The OVS agent processes port events in a polling loop. It can happen (and
more frequently on a loaded OVS agent) that the "removed" and "added"
events for the same port arrive in the same polling iteration, so the same
port is detected as both "removed" and "added".
When the virtual machine is restarted, the port event sequence is
"removed" and then "added". When both events are captured in the same
iteration, the port is already present in the bridge and is therefore
discarded from the "removed" list.
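A minimal sketch, with made-up port names and simplified data structures (not the actual agent code), of how a remove-plus-add inside one polling interval collapses into a single "added" event once the events are reconciled against the ports currently on the bridge:

    # Illustrative only: the OVSDB monitor reported both events, but the port
    # is already back on the bridge when the agent polls, so the removal is dropped.
    events = {"added": {"qvoabc-12"}, "removed": {"qvoabc-12"}}   # VM hard reboot
    ports_on_bridge = {"qvoabc-12", "qvodef-34"}                  # port already re-plugged

    removed = {p for p in events["removed"] if p not in ports_on_bridge}  # -> set()
    added = {p for p in events["added"] if p in ports_on_bridge}          # -> {"qvoabc-12"}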
Because the port was removed first and then added, the QoS policies no
longer apply (QoS and Queue registers, OF rules). If the QoS policy has
not changed, the QoS agent driver detects this and does not call the QoS
driver methods (based on the OVS agent QoS cache, which stores the port
and its QoS rules). This leads to an unconfigured port.
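A minimal sketch of the caching behaviour described above, using hypothetical names rather than the real QoS extension code: because the port keeps the same policy, the cache lookup short-circuits and the driver is never asked to reprogram the port.

    # Hypothetical cache and driver call, for illustration only.
    qos_policy_cache = {"port-uuid-1": "qos-policy-uuid-A"}

    def apply_bandwidth_limit(port_id, policy_id):
        """Stand-in for the real QoS driver call that programs OVS."""
        print(f"programming policy {policy_id} on {port_id}")

    def handle_port(port_id, policy_id):
        if qos_policy_cache.get(port_id) == policy_id:
            return  # policy unchanged -> skipped, so the wiped OVS state is never restored
        qos_policy_cache[port_id] = policy_id
        apply_bandwidth_limit(port_id, policy_id)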
This patch solves the issue by detecting this double event and registering
it as "removed_and_added". When the "added" port is handled, the QoS
deletion method is called first (if needed) to remove the unneeded
artifacts (OVS registers, OF rules) and clear the QoS cache entry
(port/QoS policy). Then the QoS policy is applied again on the port.
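A minimal, illustrative sketch of that approach (hypothetical names, not the exact patch): ports appearing in both event lists of one iteration are flagged, their stale QoS artifacts and cache entry are removed first, and only then is the policy applied again, so the cache no longer short-circuits the re-configuration.

    events = {"added": {"qvoabc-12"}, "removed": {"qvoabc-12"}}
    qos_policy_cache = {"qvoabc-12": "qos-policy-uuid-A"}

    def delete_port_qos(port_id):
        print(f"removing stale QoS/Queue registers and OF rules for {port_id}")

    def apply_port_qos(port_id, policy_id):
        print(f"applying policy {policy_id} to {port_id}")

    removed_and_added = events["removed"] & events["added"]
    for port_id in events["added"]:
        if port_id in removed_and_added:
            delete_port_qos(port_id)             # clean leftovers first
            qos_policy_cache.pop(port_id, None)  # drop the cached port/policy entry
        apply_port_qos(port_id, "qos-policy-uuid-A")  # now re-applied instead of skipped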
NOTE: this is going to be quite difficult to test in a fullstack test.
Change-Id: I51eef168fa8c18a3e4cee57c9ff86046ea9203fd
Closes-Bug: #1845161
** Changed in: neutron
Status: In Progress => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1845161
Title:
Neutron QoS Policy lost on interfaces
Status in neutron:
Fix Released
Bug description:
Instances lose the QoS policy on their interfaces after operations such as
hard reboot, live migration, or migration.
Description
===========
When performing some operations on a VM, such as hard reboot, live migration, or migration, the QoS policy on the VM interfaces is sometimes lost and Neutron does not restore it.
As a result, a user can bypass the per-port QoS limitation and use the
host's full bandwidth.
Steps to reproduce
==================
1. Create an instance with a port in a Neutron network (or create a network with a QoS policy)
2. Create a QoS policy and rules in Neutron:
$ openstack network qos policy create --share qos-100Mb
$ openstack network qos rule create --type bandwidth-limit --max-kbps 100000 --max-burst-kbits 0 --egress qos-100Mb
$ openstack network qos rule create --type bandwidth-limit --max-kbps 100000 --max-burst-kbits 0 --ingress qos-100Mb
3. Update the port of the instance to assign the policy:
$ openstack port set --qos-policy qos-100Mb PORT_UUID
4. Ensure that the QoS rule is applied to the port:
$ ovs-vsctl list interface qvoxxxxxxx-xx
......
ingress_policing_burst: 80000
ingress_policing_rate: 100000
.......
$ /sbin/tc -s qdisc show dev qvoxxxxxxx-xx
qdisc htb 1: root refcnt 2 r2q 10 default 1 direct_packets_stat 0 direct_qlen 1000
Sent 9701 bytes 93 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc ingress ffff: parent ffff:fff1 ----------------
Sent 9576 bytes 130 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
5. Perform an operation such as hard reboot, live migration, or migration
6. Sometimes after such an operation, the VM interfaces lose their QoS policy:
$ ovs-vsctl list interface qvoxxxxxxx-xx
......
ingress_policing_burst: 0
ingress_policing_rate: 0
.......
Expected result
===============
QoS rules are restored on the port
Actual result
=============
QoS rules are lost; the port has no limit
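For completeness, a hypothetical check (not part of the bug report) that reads the same ovs-vsctl attribute shown above to flag qvo interfaces whose ingress policing has dropped back to 0 after a reboot or migration:

    # Assumes ovs-vsctl is available on the compute node; interface names are examples.
    import subprocess

    def ingress_policing_rate(ifname):
        out = subprocess.run(
            ["ovs-vsctl", "get", "interface", ifname, "ingress_policing_rate"],
            capture_output=True, text=True, check=True,
        )
        return int(out.stdout.strip())

    for iface in ("qvoabc-12",):  # replace with the ports that should carry a QoS policy
        if ingress_policing_rate(iface) == 0:
            print(f"{iface}: ingress policing is 0 -> QoS policy lost")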
Environment
===========
1. Exact version of OpenStack:
OpenStack Queens and OpenStack Rocky
2. Which networking type did you use?
Neutron with Open vSwitch
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1845161/+subscriptions