kernel-packages team mailing list archive
Message #89389
[Bug 1391339] Re: Trusty kernel inbound network performance regression when GRO is enabled
Hi Rodrigo, there is a possibility that the problem is not a regression
in the handling of GRO but rather in the introduction of GRO support
itself. I can find the following commit in 3.13-rc1:
commit 99d3d587b2b4314ccc8ea066cb327dfb523d598e
Author: Wei Liu <wei.liu2@xxxxxxxxxx>
Date: Mon Sep 30 13:46:34 2013 +0100
xen-netfront: convert to GRO API
Now I tried to reproduce the performance issue locally, using a
Xen 4.4.1 Trusty host running a Trusty PV guest. On the guest side I
start iperf in server mode (since that is the receiving side; to be
sure, I also reversed the setup with the same results), and on a
desktop running Trusty I start iperf in client mode, connecting to the
PV guest. The desktop and host have 1Gbit NICs. With that I get an
average of about 850Mbit/sec over 10 runs, which is about as good as I
would expect, and it does not change significantly whether I enable or
disable GRO.
What we do not know is what is involved network-wise between your
server and the guests, so I am not sure how this slowdown would come
about.
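For reference, my receive-side measurement amounts to the loop sketched
below. The live iperf invocation is shown only in comments (host names
and flags are illustrative), and sample per-run numbers stand in for
its output so the averaging step can be checked on its own:

```shell
# On the guest (receive side):  iperf -s
# On the desktop, one run would be roughly:
#   iperf -c <guest-ip> -t 10 | awk '/Mbits\/sec/ {print $(NF-1)}'
# Sample per-run results in Mbit/sec standing in for live output:
runs="853 846 851 849 852 848 850 851 847 853"
avg=$(printf '%s\n' $runs | awk '{s+=$1; n++} END {printf "%.0f", s/n}')
echo "average: $avg Mbit/sec"
```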
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1391339
Title:
Trusty kernel inbound network performance regression when GRO is
enabled
Status in “linux” package in Ubuntu:
Confirmed
Bug description:
After upgrading our EC2 instances from Lucid to Trusty we noticed an
increase in download times: Lucid instances were able to download
twice as fast as Trusty ones. After some investigation and testing of
older kernels (precise, raring and saucy), we confirmed that this only
happens on the trusty kernel or newer, since the utopic kernel shows
the same result. Disabling GRO with `ethtool -K eth0 gro off` seems to
fix the problem, making download speed the same as on the Lucid
instances again.
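For anyone wanting to confirm the current GRO state before toggling it,
`ethtool -k` (lowercase, query mode) lists the offload settings. The
snippet below greps the relevant line from a captured sample of that
output; the sample text is illustrative, not taken from the affected
instance:

```shell
# Sample lines as printed by `ethtool -k eth0`, stored in a variable
# so the grep can be demonstrated without access to the real device.
sample="generic-receive-offload: on
tcp-segmentation-offload: on
generic-segmentation-offload: on"
printf '%s\n' "$sample" | grep generic-receive-offload
# To actually toggle it on a live system:
#   ethtool -K eth0 gro off
```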
The problem is easily reproducible by running Apache Bench a couple of
times against files bigger than 100MB over a 1Gbit network (EC2),
using HTTP or HTTPS.
Following is an example of download throughput with and without gro:
root@runtime-common.22 ~# ethtool -K eth0 gro off
root@runtime-common.22 ~# for i in {1..10}; do ab -n 10 $URL | grep "Transfer rate"; done
Transfer rate: 85183.40 [Kbytes/sec] received
Transfer rate: 86375.80 [Kbytes/sec] received
Transfer rate: 94720.24 [Kbytes/sec] received
Transfer rate: 84783.82 [Kbytes/sec] received
Transfer rate: 84933.09 [Kbytes/sec] received
Transfer rate: 84714.04 [Kbytes/sec] received
Transfer rate: 84795.58 [Kbytes/sec] received
Transfer rate: 84636.54 [Kbytes/sec] received
Transfer rate: 84924.26 [Kbytes/sec] received
Transfer rate: 84994.10 [Kbytes/sec] received
root@runtime-common.22 ~# ethtool -K eth0 gro on
root@runtime-common.22 ~# for i in {1..10}; do ab -n 10 $URL | grep "Transfer rate"; done
Transfer rate: 74193.53 [Kbytes/sec] received
Transfer rate: 56808.91 [Kbytes/sec] received
Transfer rate: 56011.58 [Kbytes/sec] received
Transfer rate: 82227.74 [Kbytes/sec] received
Transfer rate: 70806.54 [Kbytes/sec] received
Transfer rate: 72848.10 [Kbytes/sec] received
Transfer rate: 58451.94 [Kbytes/sec] received
Transfer rate: 61221.33 [Kbytes/sec] received
Transfer rate: 58620.21 [Kbytes/sec] received
Transfer rate: 69950.03 [Kbytes/sec] received
root@runtime-common.22 ~#
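Averaging the ten runs from each transcript above quantifies the gap
(the numbers below are copied verbatim from the output shown; the
awk one-liner just computes their mean):

```shell
# Transfer rates in KB/s, copied from the two ab runs above.
off="85183.40 86375.80 94720.24 84783.82 84933.09 84714.04 84795.58 84636.54 84924.26 84994.10"
on="74193.53 56808.91 56011.58 82227.74 70806.54 72848.10 58451.94 61221.33 58620.21 69950.03"
avg() { printf '%s\n' "$@" | awk '{s+=$1; n++} END {printf "%.0f\n", s/n}'; }
echo "gro off: $(avg $off) KB/s"
echo "gro on:  $(avg $on) KB/s"
```

That works out to roughly 86000 KB/s with GRO off versus roughly
66000 KB/s with GRO on, i.e. about a 23% drop.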
Similar results can be observed using iperf and netperf as well.
Tested kernels:
Not affected: 3.8.0-44-generic (precise/raring), 3.11.0-26-generic (saucy)
Affected: 3.13.0-39-generic (trusty), 3.16.0-24-generic (utopic)
Let me know if I can provide any other information that might be helpful, such as perf traces and reports.
Rodrigo.
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1391339/+subscriptions