[Bug 1813703] Re: [L2] [summary] ovs-agent issues at large scale
Reviewed: https://review.openstack.org/640797
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=a5244d6d44d2b66de27dc77efa7830fa657260be
Submitter: Zuul
Branch: master
commit a5244d6d44d2b66de27dc77efa7830fa657260be
Author: LIU Yulong <i@xxxxxxxxxxxx>
Date: Mon Mar 4 21:17:20 2019 +0800
More accurate agent restart state transfer
Ovs-agent can be very time-consuming in handling a large number
of ports. At that point, the ovs-agent status report may have
exceeded the set timeout value, and some flow updating operations
will not be triggered. This results in flow loss during agent
restart, especially for the host-to-host vxlan tunnel flows.
This fix makes the ovs-agent explicitly indicate, in the first rpc
loop, that its status is "restarted". l2pop is then required to
update the fdb entries.
Closes-Bug: #1813703
Closes-Bug: #1813714
Closes-Bug: #1813715
Closes-Bug: #1794991
Closes-Bug: #1799178
Change-Id: I8edc2deb509216add1fb21e1893f1c17dda80961
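For readers who just want the gist of the change without opening the review, here is a minimal, self-contained Python sketch of the idea (not the actual neutron code; the Agent class, report_state and rpc_loop names are illustrative only): the agent sends a state report carrying a start flag right at the beginning of the first rpc loop iteration, before the potentially very long port processing, so the server-side l2pop driver can recognize the restart and resend the fdb entries.

import time


class Agent(object):
    """Illustrative stand-in for the ovs-agent, not the real implementation."""

    def __init__(self):
        self.iter_num = 0
        self.agent_state = {'binary': 'neutron-openvswitch-agent',
                            'start_flag': True}

    def report_state(self):
        # In neutron this would be an RPC call to the server; printing the
        # payload here just shows when the start flag is (and is not) sent.
        print('report_state: %s' % self.agent_state)
        # Only the very first report after a restart carries the start flag.
        self.agent_state.pop('start_flag', None)

    def rpc_loop(self, iterations=3):
        while self.iter_num < iterations:
            if self.iter_num == 0:
                # Report the "restarted" state immediately, before the
                # (potentially very slow) processing of all ports, so the
                # report cannot arrive only after the agent timeout expired.
                self.report_state()
            # ... process ports, install flows, update tunnels, etc. ...
            time.sleep(0.1)
            self.iter_num += 1


if __name__ == '__main__':
    Agent().rpc_loop()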
** Changed in: neutron
Status: In Progress => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813703
Title:
[L2] [summary] ovs-agent issues at large scale
Status in neutron:
Fix Released
Bug description:
[L2] [summary] ovs-agent issues at large scale
Recently we have tested the ovs-agent with the openvswitch flow-based
security group, and we met some issues at large scale. This bug gives
us a centralized location to track the following problems.
Problems:
(1) RPC timeout during ovs-agent restart
https://bugs.launchpad.net/neutron/+bug/1813704
(2) local connection to ovs-vswitchd was dropped or timed out
https://bugs.launchpad.net/neutron/+bug/1813705
(3) ovs-agent failed to restart
https://bugs.launchpad.net/neutron/+bug/1813706
(4) ovs-agent restart takes too long (15-40+ mins)
https://bugs.launchpad.net/neutron/+bug/1813707
(5) unexpected flow lost
https://bugs.launchpad.net/neutron/+bug/1813714
(6) unexpected tunnel lost
https://bugs.launchpad.net/neutron/+bug/1813715
(7) multiple-cookie flows (stale flows)
https://bugs.launchpad.net/neutron/+bug/1813712
(8) dump-flows takes a lot of time
https://bugs.launchpad.net/neutron/+bug/1813709
(9) really hard to troubleshoot when one VM loses its connection; the flow tables are almost unreadable (reaching 30k+ flows).
https://bugs.launchpad.net/neutron/+bug/1813708
Problem can be seen in the following scenarios:
(1) 2000-3000 ports related to one single security group (or one remote security group)
(2) create 2000-3000 VMs in one single subnet (network)
(3) create 2000-3000 VMs under one single security group
Yes, scale is the main problem: when one host's VM count approaches
150-200 (and at the same time the number of ports in one subnet or
security group approaches 2000), the ovs-agent restart gets worse.
Test ENV:
stable/queens
Deployment topology:
neutron-server, database and message queue each have their own dedicated physical hosts, with at least 3 nodes per service.
Configurations:
ovs-agent was set up with l2pop and ovs-flow-based security groups, and the config was basically like the following:
[agent]
enable_distributed_routing = True
l2_population = True
tunnel_types = vxlan
arp_responder = True
prevent_arp_spoofing = True
extensions = qos
report_interval = 60
[ovs]
bridge_mappings = tenant:br-vlan,external:br-ex
local_ip = 10.114.4.48
[securitygroup]
firewall_driver = openvswitch
enable_security_group = True
Some issue tracking:
(1) mostly caused by the great number of ports related to one security group or in one network
(2) unnecessary RPC calls during ovs-agent restart
(3) inefficient database query conditions
(4) the full sync is redone again and again if any exception is raised in rpc_loop (see the sketch after this list)
(5) cleaning stale flows dumps all flows first (not once, but multiple times), which is really time-consuming
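As a rough illustration of why item (4) hurts so much at this scale, here is a sketch under assumptions (the helper names are made up; this is not the actual rpc_loop code): a single exception pushes the agent back into full-sync mode, so the expensive pass over every port on the host is repeated.

import random


def process_all_ports():
    # Stand-in for the expensive full resync of every port on the host.
    print('full resync of all ports (expensive with 150-200 VMs per host)')


def process_incremental_changes():
    # Stand-in for the normal per-iteration work, which fails now and then.
    if random.random() < 0.3:
        raise RuntimeError('rpc timeout / ovsdb error / ...')


fullsync = True
for _ in range(10):
    try:
        if fullsync:
            process_all_ports()
            fullsync = False
        process_incremental_changes()
    except RuntimeError:
        # One failure puts the agent back into full-sync mode, so the whole
        # expensive pass is repeated on the next iteration.
        fullsync = True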
So this is a summary bug for all the scale issues we have met.
Some potential solutions:
Increasing config options such as rpc_response_timeout, of_connect_timeout, of_request_timeout, ovsdb_timeout etc.
does not help much, and these changes can make the restart take even longer. Those issues can still be seen.
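For reference, these options live in the standard neutron config files; the values below are purely illustrative (and, as noted above, raising them did not help much in this environment):
# neutron.conf
[DEFAULT]
rpc_response_timeout = 300
# openvswitch_agent.ini
[ovs]
of_connect_timeout = 300
of_request_timeout = 300
ovsdb_timeout = 30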
One workaround is to disable the openvswitch flow-based security
group; with that, the ovs-agent can restart in less than 10 mins.
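For context, that workaround amounts to switching the firewall driver in openvswitch_agent.ini away from the ovs-flow-based one, for example back to the hybrid iptables driver (illustrative snippet only; this trades the flow-based firewall for the older iptables path and has its own drawbacks):
[securitygroup]
firewall_driver = iptables_hybrid
enable_security_group = True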
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1813703/+subscriptions