
yahoo-eng-team team mailing list archive

[Bug 1384579] [NEW] Slow network outbound throughput with tso enabled in veth pair with newer kernels


Public bug reported:

Hi all,

First off, this isn't really a bug report but rather a request for a workaround.

With kernels below 3.8 and Open vSwitch releases before 2.0.0, TCP
segmentation offload (TSO) was off by default on the veth pair:

$ ethtool -k qvb056d020b-d6
Offload parameters for qvb056d020b-d6:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: off
tx-vlan-offload: off
ntuple-filters: off
receive-hashing: off

With newer kernels (3.16.3-200.fc20.x86_64 in our case), TSO is on by default, which leads to slow outbound throughput:
Features for qvb5979dad2-b3:
rx-checksumming: on
tx-checksumming: on
	tx-checksum-ipv4: off [fixed]
	tx-checksum-ip-generic: on
	tx-checksum-ipv6: off [fixed]
	tx-checksum-fcoe-crc: off [fixed]
	tx-checksum-sctp: off [fixed]
scatter-gather: on
	tx-scatter-gather: on
	tx-scatter-gather-fraglist: on
tcp-segmentation-offload: on
	tx-tcp-segmentation: on
	tx-tcp-ecn-segmentation: on
	tx-tcp6-segmentation: on
udp-fragmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-lockless: on [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on
tx-ipip-segmentation: on
tx-sit-segmentation: on
tx-udp_tnl-segmentation: on
tx-mpls-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: on
rx-vlan-stag-hw-parse: on
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: off [fixed]

Another workaround would be to disable TSO inside every VM that uses the
virtio net driver, but that is often impossible since the VMs are
managed by customers.

Instead, after creation of the veth pair, an "ethtool -K qvbXXX tso off"
should be triggered on the host.
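Until that happens in Nova itself, the host-side workaround above can be
scripted. The following is only a minimal sketch: the qvb* glob (the
Nova/OVS hybrid-plug veth naming seen in the ethtool output above), the
sysfs directory argument, and the DRY_RUN guard are assumptions for
illustration, not anything Nova provides.

```shell
#!/bin/sh
# Sketch: run "ethtool -K <iface> tso off" for every qvb* veth endpoint.
# With DRY_RUN=1 (the default) the commands are only printed, not executed.
disable_tso() {
    dir=${1:-/sys/class/net}          # where interface entries live
    for dev in "$dir"/qvb*; do
        [ -e "$dev" ] || continue     # glob matched nothing; skip
        iface=${dev##*/}              # strip the directory prefix
        if [ "${DRY_RUN:-1}" = 1 ]; then
            echo "ethtool -K $iface tso off"
        else
            ethtool -K "$iface" tso off
        fi
    done
}

disable_tso
```

In a real deployment this would have to be hooked into interface
creation (e.g. a udev rule or the plug path) rather than run once, since
new qvb* devices appear whenever an instance is booted.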

Relevant versions used:
kernel-3.16.6-200.fc20.x86_64
openvswitch-2.3.0-1.fc20.x86_64
qemu-system-x86-1.6.2-8.fc20.x86_64

** Affects: nova
     Importance: Undecided
         Status: New


** Tags: network tso

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384579

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384579/+subscriptions

