
kernel-packages team mailing list archive

[Bug 1588449] [NEW] NVMe max_segments queue parameter gets set to 1

 

Public bug reported:

== Comment: #0 - Heitor Ricardo Alves de Siqueira - 2016-06-02 10:41:18 ==
The 16.04 nvme driver is missing several upstream patches, which limits adapter performance. We need to include them so that NVMe devices are set up correctly.

I would like to ask Canonical to cherry-pick the following patches for the 16.04 kernel (a rough backport sketch follows the list):
    * da35825d9a09 ("nvme: set queue limits for the admin queue")
    * 45686b6198bd ("nvme: fix max_segments integer truncation")
    * f21018427cb0 ("block: fix blk_rq_get_max_sectors for driver private requests")
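
For reference, a rough sketch of how the cherry-picks could be done against the
Ubuntu Xenial kernel tree. The tree location and remote name are assumptions, and
the picks may need manual conflict resolution when backported onto 4.4:

    # Hypothetical backport sketch -- tree and remote names are assumptions
    git clone git://kernel.ubuntu.com/ubuntu/ubuntu-xenial.git
    cd ubuntu-xenial
    git remote add upstream \
        git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    git fetch upstream
    # Apply the fixes in the order listed above
    git cherry-pick da35825d9a09 45686b6198bd f21018427cb0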
 
---uname output---
Linux ubuntu 4.4.0-22-generic #40-Ubuntu SMP Thu May 12 22:03:35 UTC 2016 ppc64le ppc64le ppc64le GNU/Linux
 
---Steps to Reproduce---
Boot the system with an NVMe adapter connected, and verify the queue parameters:

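The queue.sh helper is not attached here; a minimal equivalent (hypothetical, it
simply dumps every attribute under /sys/block/<dev>/queue) would be something like:

    #!/bin/bash
    # Usage: ./queue.sh <device>, e.g. ./queue.sh nvme0n1
    dev="$1"
    for f in /sys/block/"$dev"/queue/*; do
        # Print "path = value" for regular files, skip directories like iosched/
        [ -f "$f" ] && echo "$f = $(cat "$f")"
    done
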
root@ubuntu:~# ./queue.sh nvme0n1
/sys/block/nvme0n1/queue/add_random = 0
/sys/block/nvme0n1/queue/discard_granularity = 4096
/sys/block/nvme0n1/queue/discard_max_bytes = 2199023255040
/sys/block/nvme0n1/queue/discard_max_hw_bytes = 4294966784
/sys/block/nvme0n1/queue/discard_zeroes_data = 0
/sys/block/nvme0n1/queue/hw_sector_size = 4096
/sys/block/nvme0n1/queue/io_poll = 0
/sys/block/nvme0n1/queue/iostats = 1
/sys/block/nvme0n1/queue/logical_block_size = 4096
/sys/block/nvme0n1/queue/max_hw_sectors_kb = 2147483647
/sys/block/nvme0n1/queue/max_integrity_segments = 0
/sys/block/nvme0n1/queue/max_sectors_kb = 1280
/sys/block/nvme0n1/queue/max_segments = 1    <---------------------
/sys/block/nvme0n1/queue/max_segment_size = 65536
/sys/block/nvme0n1/queue/minimum_io_size = 4096
/sys/block/nvme0n1/queue/nomerges = 2
/sys/block/nvme0n1/queue/nr_requests = 1023
/sys/block/nvme0n1/queue/optimal_io_size = 0
/sys/block/nvme0n1/queue/physical_block_size = 4096
/sys/block/nvme0n1/queue/read_ahead_kb = 128
/sys/block/nvme0n1/queue/rotational = 0
/sys/block/nvme0n1/queue/rq_affinity = 1
/sys/block/nvme0n1/queue/scheduler = none
/sys/block/nvme0n1/queue/write_same_max_bytes = 0

We should have max_segments set to 65535 (USHRT_MAX) by default. The block layer stores this limit as an unsigned short, so the very large value the NVMe driver computes is truncated when it is applied; a truncated value of zero then gets bumped to the block layer's minimum of 1, which matches the bogus value seen above. The patches listed above clamp the computed value instead of letting it wrap.
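
A quick sanity check after booting a kernel with the fixes applied (device name
assumed to be nvme0n1, as in the output above):

    # With the fixes the computed limit is clamped to USHRT_MAX instead of
    # wrapping, so the bogus value of 1 should be gone:
    cat /sys/block/nvme0n1/queue/max_segments
    # expected: 65535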

** Affects: linux (Ubuntu)
     Importance: Undecided
     Assignee: Taco Screen team (taco-screen-team)
         Status: New


** Tags: architecture-ppc64le bugnameltc-142115 severity-high targetmilestone-inin1604

** Tags added: architecture-ppc64le bugnameltc-142115 severity-high targetmilestone-inin1604

** Changed in: ubuntu
     Assignee: (unassigned) => Taco Screen team (taco-screen-team)

** Package changed: ubuntu => linux (Ubuntu)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1588449

Title:
  NVMe max_segments queue parameter gets set to 1

Status in linux package in Ubuntu:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1588449/+subscriptions

