
kernel-packages team mailing list archive

[Bug 1588449] Re: NVMe max_segments queue parameter gets set to 1


This bug is awaiting verification that the kernel in -proposed solves
the problem. Please test the kernel and update this bug with the
results. If the problem is solved, change the tag
'verification-needed-xenial' to 'verification-done-xenial'.

If verification is not done within 5 working days from today, this fix
will be dropped from the source code, and this bug will be closed.

See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on
how to enable and use -proposed. Thank you!

** Tags added: verification-needed-xenial

You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.

  NVMe max_segments queue parameter gets set to 1

Status in linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  Fix Committed
Status in linux source package in Yakkety:
  Fix Released

Bug description:
  == Comment: #0 - Heitor Ricardo Alves de Siqueira - 2016-06-02 10:41:18 ==
  There are some upstream patches missing from the 16.04 nvme driver, and this limits adapter performance. We need to include these so that NVMe devices are correctly set up.

  I would like to ask Canonical to cherry pick the following patches for the 16.04 kernel:
      * da35825d9a09 ("nvme: set queue limits for the admin queue")
      * 45686b6198bd ("nvme: fix max_segments integer truncation")
      * f21018427cb0 ("block: fix blk_rq_get_max_sectors for driver private requests")
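  The second patch above addresses a 16-bit truncation: the block layer
  stores max_segments in an unsigned short, while the driver computes it as
  a 32-bit value, so an unclamped assignment keeps only the low 16 bits. A
  minimal sketch of the arithmetic (the value 65537 is a hypothetical
  driver result chosen to show the wrap, not taken from the kernel code):

  ```shell
  # Truncation mechanism behind 45686b6198bd, illustrated in shell arithmetic:
  computed=65537                                   # hypothetical 32-bit driver result
  truncated=$(( computed & 0xFFFF ))               # unclamped assignment keeps low 16 bits
  clamped=$(( computed > 65535 ? 65535 : computed ))  # what a USHRT_MAX clamp yields
  echo "truncated=$truncated clamped=$clamped"     # -> truncated=1 clamped=65535
  ```

  The truncated value (1) matches the bogus max_segments reported below;
  the clamped value (65535) is the expected default.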
  ---uname output---
  Linux ubuntu 4.4.0-22-generic #40-Ubuntu SMP Thu May 12 22:03:35 UTC 2016 ppc64le ppc64le ppc64le GNU/Linux
  ---Steps to Reproduce---
  Boot the system with an NVMe adapter connected, and verify the queue parameters:

  root@ubuntu:~# ./queue.sh nvme0n1
  /sys/block/nvme0n1/queue/add_random = 0
  /sys/block/nvme0n1/queue/discard_granularity = 4096
  /sys/block/nvme0n1/queue/discard_max_bytes = 2199023255040
  /sys/block/nvme0n1/queue/discard_max_hw_bytes = 4294966784
  /sys/block/nvme0n1/queue/discard_zeroes_data = 0
  /sys/block/nvme0n1/queue/hw_sector_size = 4096
  /sys/block/nvme0n1/queue/io_poll = 0
  /sys/block/nvme0n1/queue/iostats = 1
  /sys/block/nvme0n1/queue/logical_block_size = 4096
  /sys/block/nvme0n1/queue/max_hw_sectors_kb = 2147483647
  /sys/block/nvme0n1/queue/max_integrity_segments = 0
  /sys/block/nvme0n1/queue/max_sectors_kb = 1280
  /sys/block/nvme0n1/queue/max_segments = 1    <---------------------
  /sys/block/nvme0n1/queue/max_segment_size = 65536
  /sys/block/nvme0n1/queue/minimum_io_size = 4096
  /sys/block/nvme0n1/queue/nomerges = 2
  /sys/block/nvme0n1/queue/nr_requests = 1023
  /sys/block/nvme0n1/queue/optimal_io_size = 0
  /sys/block/nvme0n1/queue/physical_block_size = 4096
  /sys/block/nvme0n1/queue/read_ahead_kb = 128
  /sys/block/nvme0n1/queue/rotational = 0
  /sys/block/nvme0n1/queue/rq_affinity = 1
  /sys/block/nvme0n1/queue/scheduler = none
  /sys/block/nvme0n1/queue/write_same_max_bytes = 0

  We should have max_segments set to 65535 by default.
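  The queue.sh helper invoked above is not attached to the report; a
  minimal reconstruction that would produce output in the same shape
  (one "path = value" line per queue attribute) could look like:

  ```shell
  #!/bin/sh
  # Hypothetical sketch of queue.sh: print every readable attribute under a
  # block device's queue directory. Usage: ./queue.sh nvme0n1
  dev="${1:?usage: $0 <block-device>}"
  for f in /sys/block/"$dev"/queue/*; do
      # Skip subdirectories such as iosched/ and any unreadable entries
      [ -f "$f" ] && [ -r "$f" ] && printf '%s = %s\n' "$f" "$(cat "$f")"
  done
  ```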
