[Bug 1922237] [NEW] [RFE][QoS] Add minimum guaranteed packet rate QoS rule
Public bug reported:
Quote from "[RFE] [QoS] add qos rule type packet per second (pps)" [1]:
For cloud providers, limiting the packets per second (pps) of a VM NIC is
popular and sometimes essential. Transmitting a large number of packets for
VMs on a physical compute host consumes CPU and physical NIC capacity. And
for small packets, even if the bandwidth is low, the pps can still be high,
which can become an attack vector inside the cloud when some VMs get hacked.
--
Neutron already supports bandwidth_limit and minimum_bandwidth QoS
rules.
So [1] proposes a packet rate limit QoS rule. By similar reasoning, and to
align with the existing bandwidth rules, providing a minimum packet rate QoS
rule also makes sense.
The new minimum_packet_rate rule has a similar structure and semantics to
the minimum_bandwidth rule:
* It defines a guaranteed minimum packet rate in kpps.
* It defines the direction (egress / ingress) in which the guarantee is
applied.
E.g.:
POST /v2.0/qos/policies/{policy_id}/minimum_packet_rate_rules
{
    "minimum_packet_rate_rule": {
        "min_kpps": 10000,
        "direction": "egress"
    }
}
* Ports with such a QoS rule are expected to be scheduled on compute nodes
where the networking backend (typically OVS) still has enough packet
processing capacity to fulfill the guarantee.
This RFE focuses on supporting the minimum guaranteed packet rate for the
OVS network backend only.
A new config option is introduced in the OVS agent configuration where the
admin can define the available packet processing capacity of the OVS
instance deployed on the given compute host. This information is sent to the
neutron server in the agent heartbeat. The neutron server uses this
information to create a NET_KILOPACKET_PER_SEC resource inventory on the
OVS agent resource provider (RP) in Placement.
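As a rough sketch only (the option name and all numbers below are
assumptions for illustration; the real option will be defined in the
neutron spec), the agent config could look like:

    [ovs]
    # hypothetical option name: packet processing capacity of this OVS
    # instance, in kilo (1000) packets per second
    resource_provider_packet_processing_capacity = 10000

and the resulting inventory, created via Placement's existing inventory
API, could look like:

    PUT /resource_providers/{ovs_agent_rp_uuid}/inventories/NET_KILOPACKET_PER_SEC
    {
        "total": 10000,
        "reserved": 0,
        "min_unit": 1,
        "max_unit": 10000,
        "step_size": 1,
        "allocation_ratio": 1.0,
        "resource_provider_generation": 1
    }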
Note that the resource inventory is directionless, while the proposed QoS
rule has a direction. From a scheduling perspective the direction of the QoS
rule does not matter, and both directions will be counted against the single
directionless resource inventory. However, for data plane enforcement the
directions should be handled separately.
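For example, with illustrative numbers: a port with a 1000 kpps egress
guarantee and a 500 kpps ingress guarantee consumes 1000 + 500 = 1500
NET_KILOPACKET_PER_SEC from the single inventory at scheduling time, while
a future data plane enforcement would still need to program 1000 kpps
egress and 500 kpps ingress separately.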
Note that while the bandwidth inventory is defined per bridge / physical
device, the packet processing capacity applies globally to the whole OVS
instance.
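As a sketch, following the resource provider modelling of the existing
minimum_bandwidth feature (names are illustrative), the RP tree on a
compute host would look like:

    compute node RP             (VCPU, MEMORY_MB, DISK_GB)
      +-- OVS agent RP          (NET_KILOPACKET_PER_SEC)
            +-- br-physnet1 RP  (NET_BW_EGR_KILOBIT_PER_SEC, NET_BW_IGR_KILOBIT_PER_SEC)
            +-- br-physnet2 RP  (NET_BW_EGR_KILOBIT_PER_SEC, NET_BW_IGR_KILOBIT_PER_SEC)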
A port that has such a QoS policy rule needs to express the related
NET_KILOPACKET_PER_SEC resource request in the port.resource_request
attribute. As a single port can have both bandwidth and packet rate QoS
applied, and because bandwidth is allocated from the bridge / physical
device while the packet rate is allocated from the whole OVS instance, the
two sets of resources need to be requested separately. (A deeper technical
reason for this is that a single resource request group is always allocated
from a single resource provider in Placement. So if bandwidth and packet
rate need to be allocated from different providers, they should be requested
in different resource request groups.) To accommodate this separation the
structure of the resource_request field of the neutron port needs to be
changed from:
"resource_request":
{
"required": ["CUSTOM_PHYSNET_PUBLIC", "CUSTOM_VNIC_TYPE_NORMAL"],
"resources": {"NET_BW_EGR_KILOBIT_PER_SEC": 1000}
},
to:
"resource_request":
{
{
"name": <some port unique name, e.g. the policy rule id that requesting the resource>
"required": [],
"resources": {"NET_KILOPACKET_PER_SEC": 1000}
},
{
"name": <some port unique name, e.g. the policy rule id that requesting the resource>
"required": ["CUSTOM_PHYSNET_PUBLIC", "CUSTOM_VNIC_TYPE_NORMAL"],
"resources": {"NET_BW_EGR_KILOBIT_PER_SEC": 1000}
},
},
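To sketch how this could be consumed (illustrative only; the exact
translation is nova's responsibility and will be covered by the nova spec),
nova can map each entry of the list to a separate granular resource request
group in the Placement allocation candidates query, next to the flavor
resources:

    GET /allocation_candidates?
        resources=VCPU:2,MEMORY_MB:4096
        &resources1=NET_KILOPACKET_PER_SEC:1000
        &resources2=NET_BW_EGR_KILOBIT_PER_SEC:1000
        &required2=CUSTOM_PHYSNET_PUBLIC,CUSTOM_VNIC_TYPE_NORMAL
        &group_policy=none

This way group 1 can be satisfied by the OVS agent RP while group 2 is
satisfied by a bridge / physical device RP in the same provider tree.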
As a consequence the port binding:profile.allocation key needs to change
too. Today it contains the single UUID of the resource provider the port's
resources are allocated from. Now that a port can allocate from multiple
providers, this key needs to be transformed into a dict where the resource
provider UUIDs are keyed by the resource_request.name value.
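For illustration (the key names are placeholders), the change would look
like:

    today:
        "binding:profile": {"allocation": "<rp_uuid>"}

    proposed:
        "binding:profile": {
            "allocation": {
                "<name of the packet rate request group>": "<OVS agent RP uuid>",
                "<name of the bandwidth request group>": "<bridge / device RP uuid>"
            }
        }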
Enforcing the packet rate guarantees on the data plane is out of scope
of this RFE. In the future a basic guarantee can be provided in the
networking backend by re-using the data plane enforcement implementation
of the packet_rate_limit rule from [1].
I will propose a nova spec under blueprint [2] that will define both the
high-level solution and the details of the nova impact. I will also propose
a neutron spec defining the detailed impact on neutron.
[1] https://bugs.launchpad.net/neutron/+bug/1912460
[2] https://blueprints.launchpad.net/nova/+spec/qos-minimum-guaranteed-packet-rate
** Affects: neutron
Importance: Undecided
Status: New
** Tags: rfe
** Tags added: rfe