[Bug 1479362] Re: .hpacucli: page allocation failure: order:4, mode:0x40d0
Would it be possible for you to test the latest upstream kernel? Refer
to https://wiki.ubuntu.com/KernelMainlineBuilds . Please test the latest
v4.2 kernel[0].
If this bug is fixed in the mainline kernel, please add the tag:
'kernel-fixed-upstream'.
If the mainline kernel does not fix this bug, please add the tag:
'kernel-bug-exists-upstream'.
Once testing of the upstream kernel is complete, please mark this bug as
"Confirmed".
Thanks in advance.
[0] http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-rc4-unstable/
** Changed in: linux (Ubuntu)
Importance: Undecided => Medium
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1479362
Title:
.hpacucli: page allocation failure: order:4, mode:0x40d0
Status in linux package in Ubuntu:
Incomplete
Bug description:
On two identical physical machines (ProLiant BL460c G7) running Ubuntu 12.04.5 LTS with kernel 3.2.0-83-generic, we frequently see errors (warnings?) from the .hpacucli binary, which uses the HPSA driver.
Here is the output from the dmesg log:
[6411414.861596] .hpacucli: page allocation failure: order:4, mode:0x40d0
[6411414.861601] Pid: 9903, comm: .hpacucli Not tainted 3.2.0-83-generic #120-Ubuntu
[6411414.861603] Call Trace:
[6411414.861613] [<ffffffff8111f93d>] warn_alloc_failed+0xfd/0x150
[6411414.861619] [<ffffffff8164f84d>] ? __alloc_pages_direct_compact+0x166/0x178
[6411414.861622] [<ffffffff811236c7>] __alloc_pages_nodemask+0x6d7/0x8f0
[6411414.861627] [<ffffffff8115b1a6>] alloc_pages_current+0xb6/0x120
[6411414.861630] [<ffffffff8111e95e>] __get_free_pages+0xe/0x40
[6411414.861635] [<ffffffff81165bbf>] kmalloc_order_trace+0x3f/0xd0
[6411414.861638] [<ffffffff81166735>] __kmalloc+0x185/0x190
[6411414.861647] [<ffffffffa007875d>] hpsa_passthru_ioctl+0x1ed/0x3e0 [hpsa]
[6411414.861652] [<ffffffff8119b3af>] ? mntput+0x1f/0x30
[6411414.861657] [<ffffffffa0079cc0>] hpsa_ioctl+0xa0/0x120 [hpsa]
[6411414.861662] [<ffffffff8142a67c>] scsi_ioctl+0xbc/0x410
[6411414.861666] [<ffffffff81455978>] sg_ioctl+0x318/0xda0
[6411414.861670] [<ffffffff8118b9b2>] ? do_filp_open+0x42/0xa0
[6411414.861673] [<ffffffff8145643c>] sg_unlocked_ioctl+0x3c/0x60
[6411414.861676] [<ffffffff8118ddfa>] do_vfs_ioctl+0x8a/0x340
[6411414.861679] [<ffffffff81186e55>] ? putname+0x35/0x50
[6411414.861683] [<ffffffff8117aeec>] ? do_sys_open+0x17c/0x240
[6411414.861686] [<ffffffff8118e141>] sys_ioctl+0x91/0xa0
[6411414.861691] [<ffffffff8166d402>] system_call_fastpath+0x16/0x1b
[6411414.861693] Mem-Info:
[6411414.861694] Node 0 DMA per-cpu:
[6411414.861697] CPU 0: hi: 0, btch: 1 usd: 0
[6411414.861699] CPU 1: hi: 0, btch: 1 usd: 0
[6411414.861701] CPU 2: hi: 0, btch: 1 usd: 0
[6411414.861702] CPU 3: hi: 0, btch: 1 usd: 0
[6411414.861704] CPU 4: hi: 0, btch: 1 usd: 0
[6411414.861706] CPU 5: hi: 0, btch: 1 usd: 0
[6411414.861708] CPU 6: hi: 0, btch: 1 usd: 0
[6411414.861710] CPU 7: hi: 0, btch: 1 usd: 0
[6411414.861711] Node 0 DMA32 per-cpu:
[6411414.861713] CPU 0: hi: 186, btch: 31 usd: 0
[6411414.861715] CPU 1: hi: 186, btch: 31 usd: 0
[6411414.861717] CPU 2: hi: 186, btch: 31 usd: 0
[6411414.861719] CPU 3: hi: 186, btch: 31 usd: 0
[6411414.861720] CPU 4: hi: 186, btch: 31 usd: 0
[6411414.861722] CPU 5: hi: 186, btch: 31 usd: 0
[6411414.861724] CPU 6: hi: 186, btch: 31 usd: 0
[6411414.861726] CPU 7: hi: 186, btch: 31 usd: 0
[6411414.861727] Node 0 Normal per-cpu:
[6411414.861729] CPU 0: hi: 90, btch: 15 usd: 0
[6411414.861731] CPU 1: hi: 90, btch: 15 usd: 0
[6411414.861733] CPU 2: hi: 90, btch: 15 usd: 0
[6411414.861734] CPU 3: hi: 90, btch: 15 usd: 0
[6411414.861736] CPU 4: hi: 90, btch: 15 usd: 0
[6411414.861738] CPU 5: hi: 90, btch: 15 usd: 0
[6411414.861740] CPU 6: hi: 90, btch: 15 usd: 0
[6411414.861741] CPU 7: hi: 90, btch: 15 usd: 0
[6411414.861745] active_anon:5156 inactive_anon:10866 isolated_anon:0
[6411414.861746] active_file:84622 inactive_file:87587 isolated_file:0
[6411414.861747] unevictable:0 dirty:304 writeback:0 unstable:0
[6411414.861748] free:241522 slab_reclaimable:510564 slab_unreclaimable:7690
[6411414.861749] mapped:5597 shmem:1610 pagetables:2154 bounce:0
[6411414.861751] Node 0 DMA free:15912kB min:256kB low:320kB high:384kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15656kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[6411414.861760] lowmem_reserve[]: 0 3814 4003 4003
[6411414.861764] Node 0 DMA32 free:944144kB min:64148kB low:80184kB high:96220kB active_anon:19280kB inactive_anon:41872kB active_file:328288kB inactive_file:340140kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3905984kB mlocked:0kB dirty:728kB writeback:0kB mapped:20904kB shmem:6428kB slab_reclaimable:2020944kB slab_unreclaimable:17272kB kernel_stack:1096kB pagetables:7912kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[6411414.861773] lowmem_reserve[]: 0 0 188 188
[6411414.861777] Node 0 Normal free:6032kB min:3176kB low:3968kB high:4764kB active_anon:1344kB inactive_anon:1592kB active_file:10200kB inactive_file:10208kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:193532kB mlocked:0kB dirty:488kB writeback:0kB mapped:1484kB shmem:12kB slab_reclaimable:21312kB slab_unreclaimable:13488kB kernel_stack:720kB pagetables:704kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[6411414.861786] lowmem_reserve[]: 0 0 0 0
[6411414.861789] Node 0 DMA: 0*4kB 1*8kB 0*16kB 1*32kB 2*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15912kB
[6411414.861797] Node 0 DMA32: 123739*4kB 53007*8kB 1121*16kB 94*32kB 2*64kB 1*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 944308kB
[6411414.861805] Node 0 Normal: 1140*4kB 53*8kB 7*16kB 4*32kB 4*64kB 1*128kB 2*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 6120kB
[6411414.861814] 174884 total pagecache pages
[6411414.861815] 1061 pages in swap cache
[6411414.861817] Swap cache stats: add 914226, delete 913165, find 48681128/48755215
[6411414.861819] Free swap = 1008500kB
[6411414.861820] Total swap = 1048572kB
[6411414.874978] 1064943 pages RAM
[6411414.874981] 89075 pages reserved
[6411414.874983] 177200 pages shared
[6411414.874986] 572828 pages non-shared
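For reference, the first line of the trace already says what was being asked for: order:4 means 2^4 = 16 contiguous pages (64kB with 4kB pages), and on a 3.2-era kernel mode:0x40d0 decodes to GFP_KERNEL | __GFP_COMP, i.e. an ordinary sleeping kmalloc() of a physically contiguous buffer, here reached through the hpsa passthrough ioctl path shown in the trace. A quick decode as a rough sketch (plain Python; the GFP bit values are taken from 3.2-era include/linux/gfp.h and the helper is ours, not part of the report):

    # Rough decode of "order:4, mode:0x40d0" (illustrative helper, not from the report).
    # GFP bit values as in 3.2-era include/linux/gfp.h.
    PAGE_KB = 4
    GFP_BITS = {
        0x10: "__GFP_WAIT",
        0x20: "__GFP_HIGH",
        0x40: "__GFP_IO",
        0x80: "__GFP_FS",
        0x4000: "__GFP_COMP",
    }

    def decode(order, mode):
        pages = 1 << order
        flags = [name for bit, name in sorted(GFP_BITS.items()) if mode & bit]
        return pages, pages * PAGE_KB, flags

    print(decode(4, 0x40d0))
    # -> (16, 64, ['__GFP_WAIT', '__GFP_IO', '__GFP_FS', '__GFP_COMP'])
    #    i.e. GFP_KERNEL | __GFP_COMP: a 64kB physically contiguous kmalloc()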
As we can see, there is plenty of free RAM and each memory zone has at
least one free block of 64kB or more (the .hpacucli request is order:4,
i.e. 16 contiguous pages), so the failure does not seem to be related
to memory fragmentation.
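That claim can be checked directly against the per-zone buddy lists above: an order:4 request needs one free block of at least 16 pages (64kB), and every zone still lists such blocks (3*4096kB in DMA, 1*4096kB in DMA32, 2*256kB in Normal). A small sketch of that check (plain Python; the parsing helper is ours, not part of the report):

    # Rough check of the buddy-list lines above (illustrative helper, not from the report):
    # does a zone still have a free block big enough for an order-4 (16-page) request?
    import re

    def largest_free_block_kb(buddy_line):
        """Largest block size (kB) with a non-zero count in a 'Node N <zone>:' line."""
        sizes = [int(size) for count, size in
                 re.findall(r"(\d+)\*(\d+)kB", buddy_line) if int(count) > 0]
        return max(sizes) if sizes else 0

    normal = ("Node 0 Normal: 1140*4kB 53*8kB 7*16kB 4*32kB 4*64kB 1*128kB "
              "2*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 6120kB")
    need_kb = (1 << 4) * 4                      # order 4 -> 16 pages -> 64kB
    print(largest_free_block_kb(normal) >= need_kb)   # True: blocks up to 256kB remain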
Thanks :)
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1479362/+subscriptions