[Bug 1588449] Re: NVMe max_segments queue parameter gets set to 1
** Also affects: linux (Ubuntu Xenial)
Importance: Undecided
Status: New
** Also affects: linux (Ubuntu Yakkety)
Importance: Undecided
Assignee: Taco Screen team (taco-screen-team)
Status: New
** Changed in: linux (Ubuntu Yakkety)
Status: New => Fix Released
** Changed in: linux (Ubuntu Yakkety)
Assignee: Taco Screen team (taco-screen-team) => (unassigned)
** Changed in: linux (Ubuntu Xenial)
Status: New => In Progress
** Changed in: linux (Ubuntu Xenial)
Assignee: (unassigned) => Tim Gardner (timg-tpi)
--
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह (the group.of.nepali.translators team), which is
subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1588449
Title:
NVMe max_segments queue parameter gets set to 1
Status in linux package in Ubuntu:
Fix Released
Status in linux source package in Xenial:
In Progress
Status in linux source package in Yakkety:
Fix Released
Bug description:
== Comment: #0 - Heitor Ricardo Alves de Siqueira - 2016-06-02 10:41:18 ==
There are some upstream patches missing from the 16.04 nvme driver, and this limits adapter performance. We need to include these so that NVMe devices are correctly set up.
I would like to ask Canonical to cherry-pick the following patches for the 16.04 kernel:
* da35825d9a09 ("nvme: set queue limits for the admin queue")
* 45686b6198bd ("nvme: fix max_segments integer truncation")
* f21018427cb0 ("block: fix blk_rq_get_max_sectors for driver private requests")
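For context, the "integer truncation" addressed by the second patch is a large 32-bit segment count being stored into a narrower 16-bit field, which can leave exactly the value 1 seen in the output below. A generic shell illustration of this class of bug (not the actual kernel code) would be:
#!/bin/bash
# Illustration only: a 32-bit count such as 0x10001 (65537) keeps just its
# low 16 bits when squeezed into a 16-bit field, so it ends up as 1.
segs32=$(( 0x10001 ))            # plausible oversized 32-bit segment count
segs16=$(( segs32 & 0xFFFF ))    # what survives in a 16-bit field
echo "32-bit: $segs32 -> 16-bit: $segs16"   # prints 32-bit: 65537 -> 16-bit: 1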
---uname output---
Linux ubuntu 4.4.0-22-generic #40-Ubuntu SMP Thu May 12 22:03:35 UTC 2016 ppc64le ppc64le ppc64le GNU/Linux
---Steps to Reproduce---
Boot the system with an NVMe adapter connected, and verify the queue parameters:
root@ubuntu:~# ./queue.sh nvme0n1
/sys/block/nvme0n1/queue/add_random = 0
/sys/block/nvme0n1/queue/discard_granularity = 4096
/sys/block/nvme0n1/queue/discard_max_bytes = 2199023255040
/sys/block/nvme0n1/queue/discard_max_hw_bytes = 4294966784
/sys/block/nvme0n1/queue/discard_zeroes_data = 0
/sys/block/nvme0n1/queue/hw_sector_size = 4096
/sys/block/nvme0n1/queue/io_poll = 0
/sys/block/nvme0n1/queue/iostats = 1
/sys/block/nvme0n1/queue/logical_block_size = 4096
/sys/block/nvme0n1/queue/max_hw_sectors_kb = 2147483647
/sys/block/nvme0n1/queue/max_integrity_segments = 0
/sys/block/nvme0n1/queue/max_sectors_kb = 1280
/sys/block/nvme0n1/queue/max_segments = 1 <---------------------
/sys/block/nvme0n1/queue/max_segment_size = 65536
/sys/block/nvme0n1/queue/minimum_io_size = 4096
/sys/block/nvme0n1/queue/nomerges = 2
/sys/block/nvme0n1/queue/nr_requests = 1023
/sys/block/nvme0n1/queue/optimal_io_size = 0
/sys/block/nvme0n1/queue/physical_block_size = 4096
/sys/block/nvme0n1/queue/read_ahead_kb = 128
/sys/block/nvme0n1/queue/rotational = 0
/sys/block/nvme0n1/queue/rq_affinity = 1
/sys/block/nvme0n1/queue/scheduler = none
/sys/block/nvme0n1/queue/write_same_max_bytes = 0
We should have max_segments set to 65535 by default.
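queue.sh is a local helper script; a minimal sketch of what it presumably does, namely printing every tunable under /sys/block/<device>/queue, is:
#!/bin/bash
# queue.sh (sketch): dump the queue parameters of the block device given as
# the first argument, e.g. ./queue.sh nvme0n1
dev="${1:?usage: $0 <block-device>}"
for f in /sys/block/"$dev"/queue/*; do
    [ -f "$f" ] && echo "$f = $(cat "$f")"   # skip subdirectories such as iosched/
done
With a kernel that carries the patches above, re-running the script should report max_segments = 65535 instead of 1.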
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1588449/+subscriptions