kernel-packages team mailing list archive - Message #57322
[Bug 1201869] Re: poor networking throughput through veth interfaces
This bug was fixed in the package linux - 3.2.0-61.92
---------------
linux (3.2.0-61.92) precise; urgency=low
[ Kamal Mostafa ]
* Release Tracking Bug
- LP: #1300455
[ Upstream Kernel Changes ]
* cifs: set MAY_SIGN when sec=krb5
- LP: #1285723
* veth: reduce stat overhead
- LP: #1201869
* veth: extend device features
- LP: #1201869
* veth: avoid a NULL deref in veth_stats_one
- LP: #1201869
* veth: fix a NULL deref in netif_carrier_off
- LP: #1201869
* veth: fix NULL dereference in veth_dellink()
- LP: #1201869
* ioat: fix tasklet tear down
- LP: #1291113
-- Kamal Mostafa <kamal@xxxxxxxxxxxxx> Mon, 31 Mar 2014 14:33:18 -0700
** Changed in: linux (Ubuntu Precise)
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1201869
Title:
poor networking throughput through veth interfaces
Status in “linux” package in Ubuntu:
Fix Released
Status in “linux” source package in Precise:
Fix Released
Status in “linux” source package in Quantal:
Fix Released
Status in “linux” source package in Raring:
Fix Released
Bug description:
SRU Justification:
Impact:
Users of the 3.2/3.5/3.8 series kernels may see poor network throughput when using OpenStack Neutron, depending on their setup.
Fix:
These upstream patches are necessary to fix the issue:
2681128f0ced8aa4e66f221197e183cc16d244fe
8093315a91340bca52549044975d8c7f673b28a1
d0e2c55e7c940a3ee91e9e23a2683b593690f1e9
2efd32ee1b60b0b31404ca47c1ce70e5a5d24ebc
f45a5c267da35174e22cec955093a7513dc1623d
Testcase:
Setup OpenStack Neutron. Test throughput between internal and external nodes.
The following explains an example vlan+namespace configuration:
Internal Node: [10.x.x.2]->eth2.123->br123->tap123->qr-123[10.x.x.1] <--- netns: qrouter-123
netns: qrouter-123 ---> qg-234[10.x.y.1]->tap234->br234->eth2.234->External Node[10.x.y.2]
Where:
1) tap123+qr-123 and tap234+qg-234 are veth pairs
2) qr-123 and qg-234 reside inside the qrouter-123 namespace
Another testcase without Openstack:
* create two vms: (vm1, vm2), install iperf on those machines
* connect vms via an isolated bridge
* measure baseline performance
- iperf -s # on machine 1
- iperf -t 60 -l 4M -c <machine 1 IP> # on machine 2
* create veth pairs between vms using attached script:
- ./setup-lp1201869.sh vm1
- ./setup-lp1201869.sh vm2
* attach VM's interfaces to the created bridges (qbrvm1 / qbrvm2)
* In the VMs, set up static IPs
- sudo ifconfig eth0 10.10.10.1/24 up #vm1
- sudo ifconfig eth0 10.10.10.2/24 up #vm2
* measure performance again
* we expect throughput to be close to the baseline measured above
NOTE: the fixed kernel needs to be on the _hypervisor_
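The attached setup-lp1201869.sh is not reproduced here, but the topology it creates (a per-VM bridge wired through a veth pair) can be sketched with iproute2. This is a minimal approximation, not the actual script: the qbr${VM} bridge names come from the steps above, while the qvb/qvo veth names are illustrative assumptions borrowed from common Neutron conventions.

```shell
#!/bin/sh
# Sketch only: an approximation of the attached setup-lp1201869.sh.
# Bridge name qbr${VM} matches the steps above; the veth names
# qvb${VM}/qvo${VM} are illustrative assumptions. Requires root.
set -e

VM="$1"                     # e.g. "vm1" or "vm2"
BR="qbr${VM}"               # bridge the VM's tap gets attached to
VETH_A="qvb${VM}"           # bridge-side end of the veth pair
VETH_B="qvo${VM}"           # far end of the veth pair

# Create the bridge and a veth pair, then plug one end into the bridge.
ip link add name "$BR" type bridge
ip link add "$VETH_A" type veth peer name "$VETH_B"
ip link set "$VETH_A" master "$BR"

# Bring everything up; traffic between the VMs now traverses the veth
# pair, which is the path whose throughput the patches improve.
ip link set "$BR" up
ip link set "$VETH_A" up
ip link set "$VETH_B" up
```

Run as root on each hypervisor, then attach the VM's interface to qbrvm1/qbrvm2 and repeat the iperf measurement.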
--
OpenStack Neutron does IP forwarding through a network namespace. A
veth pair is used to connect into the namespace. The veth pair appears
to be the bottleneck, independent of network namespace. In newer
versions of Linux (Ubuntu-3.9.0-7.15 / v3.9-rc1 and greater),
throughput is almost 3 times higher: in one test, throughput rose from
3.5 Gbps on pre-3.9-rc1 kernels to 9.1 Gbps with these patches
applied.
This has been confirmed on kernels from 3.5.x to 3.8.x (the Quantal
and Raring LTS backports).
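The "veth: extend device features" change in the list above likely accounts for much of this gain, as it lets veth devices advertise offloads such as scatter-gather and TSO so large transfers are no longer segmented per-packet in software. Assuming a fixed kernel, this can be spot-checked with ethtool; veth0 below is a placeholder device name, not one taken from the bug report:

```shell
# veth0 is a placeholder device name. On a fixed kernel these offload
# features should be reported as "on"; the fallback echo keeps the
# loop harmless on a machine where no such device exists.
for f in tcp-segmentation-offload generic-segmentation-offload scatter-gather; do
  ethtool -k veth0 2>/dev/null | grep "$f" || echo "$f: (veth0 not present)"
done
```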
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1201869/+subscriptions