kernel-packages team mailing list archive
[Bug 1297522] Re: Since Trusty /proc/diskstats shows weird values
** Description changed:
+ SRU Justification:
+
+ Impact: Tools that rely on /proc/diskstats may report incorrect data
+ under certain conditions. In particular, diskstats inside a VM may
+ report grossly inflated I/O times.
+
+ Fix: 0fec08b4ecfc36fd8a64432343b2964fb86d2675 (in 3.14-rc1)
+
+ Testcase:
+ - Install a VM with the affected kernel
+ - Run cat /proc/diskstats | awk '$3=="vda" { print $7/$4, $11/$8 }'
+ - If the two values are much larger than with the v3.14-rc1 kernel in the same VM, the test has failed. For example, in a failing case I see "132.44 5458.34"; in a passing case I see "0.19334 5.90476". (A script form of this check is sketched below.)
+
+ --
+
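The check in the testcase can also be run as a small script. A minimal sketch, assuming the guest's root disk appears as "vda" (pass a different device name as the first argument otherwise):

#!/bin/sh
# Sketch only: prints the average read and write latency in milliseconds,
# i.e. field 7 / field 4 and field 11 / field 8 of /proc/diskstats,
# guarding against division by zero on an idle device.
dev="${1:-vda}"
awk -v d="$dev" '$3 == d {
    printf "avg read latency:  %.2f ms\n", ($4 ? $7 / $4 : 0)
    printf "avg write latency: %.2f ms\n", ($8 ? $11 / $8 : 0)
}' /proc/diskstats

Run it on the affected kernel and on a kernel containing the fix; only the affected kernel should show the much larger averages described above.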
After upgrading some virtual machines (KVM) to Trusty I noticed really
high I/O wait times; e.g., Munin graphs now show up to 200 seconds(!) of
read I/O wait time. See the attached image. Of course the real latency
isn't any higher than before; it's only /proc/diskstats that reports
completely wrong numbers...
$ cat /proc/diskstats | awk '$3=="vda" { print $7/$4, $11/$8 }'
1375.44 13825.1
According to the documentation for /proc/diskstats, field 4 is the total
number of reads completed, field 7 is the total time spent reading in
milliseconds, and fields 8 and 11 are the same for writes. So the numbers
above are the average read and write latencies in milliseconds.
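For reference, the raw counters behind those ratios can be printed directly (field numbers as above; "vda" is again an assumption about the guest's disk name):
$ awk '$3=="vda" { print "reads:", $4, "ms reading:", $7, "writes:", $8, "ms writing:", $11 }' /proc/diskstats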
The same weird numbers show up in iostat. Note the "await" column (average
time in milliseconds for I/O requests):
$ iostat -dx 1 60
Linux 3.13.0-19-generic (munin) 03/25/14 _x86_64_ (2 CPU)
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 2.30 16.75 72.45 24.52 572.79 778.37 27.87 1.57 620.00 450.20 1121.83 1.71 16.54
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 52.00 0.00 25.00 0.00 308.00 24.64 0.30 27813.92 0.00 27813.92 0.48 1.20
I upgraded the host system to Trusty as well, but there the
/proc/diskstats output is normal, as before.
$ uname -r
3.13.0-19-generic
** Changed in: linux (Ubuntu)
Status: Confirmed => Fix Released
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1297522
Title:
Since Trusty /proc/diskstats shows weird values
Status in “linux” package in Ubuntu:
Fix Released
Status in “linux” source package in Trusty:
In Progress
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1297522/+subscriptions