[Bug 1662378] Re: many OOMs on busy xenial IMAP server with lots of available memory
Did this issue start happening after an update/upgrade? Was there a
kernel version where you were not having this particular problem? This
will help determine if the problem you are seeing is the result of a
regression, and when this regression was introduced. If this is a
regression, we can perform a kernel bisect to identify the commit that
introduced the problem.
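For reference, a kernel bisect would look roughly like the sketch below. The tree URL and the good/bad tags are placeholders; the actual endpoints depend on which kernel versions do and do not show the OOMs.

  # rough sketch: v4.2 assumed good and v4.4 assumed bad (both are placeholders)
  git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  cd linux-stable
  git bisect start
  git bisect bad v4.4      # earliest version known to show the OOMs
  git bisect good v4.2     # latest version known not to show them
  # build, install and boot the commit git suggests, run the IMAP workload,
  # then mark the result and repeat until the offending commit is identified:
  git bisect good          # or: git bisect bad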
** Changed in: linux (Ubuntu)
Importance: Undecided => High
** Also affects: linux (Ubuntu Xenial)
Importance: Undecided
Status: New
** Changed in: linux (Ubuntu Xenial)
Status: New => Confirmed
** Changed in: linux (Ubuntu Xenial)
Importance: Undecided => High
** Tags added: kernel-key xenial
--
You received this bug notification because you are a member of the Group
of Nepali Translators (नेपाली भाषा समायोजकहरुको समूह), which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1662378
Title:
many OOMs on busy xenial IMAP server with lots of available memory
Status in linux package in Ubuntu:
Confirmed
Status in linux source package in Xenial:
Confirmed
Bug description:
We recently noticed that a busy xenial IMAP server with about 22 days
uptime has been logging a lot of OOM messages. The machine has 24G of
memory. Below please find some typical memory info.
I noted that about 11G of memory was allocated to slab, and since all of
the oom-killer invocations report order=2 or order=3 allocations (see
below for the gfp_mask values), I suspected that memory fragmentation
was a factor. After running echo 2 > /proc/sys/vm/drop_caches to release
reclaimable slab memory, no OOMs were logged for around 30 minutes,
but then they started up again, although perhaps not as frequently as
before, and the amount of memory in slab was back up around its former
size. To the best of my knowledge we do not have any custom VM-related
sysctl tweaks on this machine.
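A minimal sketch of how the slab consumers and per-order free memory could be double-checked (the commands below are illustrative; the slabtop and tee steps need root on most systems):

  # largest slab caches by size; dentry/inode caches are the usual suspects
  sudo slabtop -o -s c | head -n 20
  # free block counts per zone and per order; small numbers in the
  # order>=2 and order>=3 columns suggest fragmentation
  cat /proc/buddyinfo
  # the drop_caches step described above, with a sync first
  sync; echo 2 | sudo tee /proc/sys/vm/drop_caches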
Attached please find version.log and lspci-vnvn.log. And here's a link to a kern.log from a little before time of boot onwards, containing all of the oom-killer messages:
https://people.canonical.com/~pjdc/grenadilla-sanitized-kern.log.xz
== Breakdown of Failed Allocations ==
pjdc@grenadilla:~$ grep -o 'gfp_mask=.*, order=.' kern.log-version-with-oom-killer-invoked | sort | uniq -c | sort -n
1990 gfp_mask=0x26000c0, order=2
4043 gfp_mask=0x240c0c0, order=3
pjdc@grenadilla:~$ _
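For the record, order=2 and order=3 requests need 16 KiB and 32 KiB of physically contiguous memory respectively, which can fail under fragmentation even when plenty of memory is free overall. The raw gfp_mask values can be turned into symbolic flags with scripts/gfp-translate from the kernel source tree; a sketch, assuming a tree checked out at the running 4.4 kernel version:

  # translate the two masks seen in the oom-killer reports
  cd linux-stable
  ./scripts/gfp-translate 0x26000c0
  ./scripts/gfp-translate 0x240c0c0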
== Representative (Probably) Memory Info ==
pjdc@grenadilla:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:          24097        1762         213         266       22121       21087
Swap:         17492         101       17391
pjdc@grenadilla:~$ cat /proc/meminfo
MemTotal: 24676320 kB
MemFree: 219440 kB
MemAvailable: 21593416 kB
Buffers: 6186648 kB
Cached: 4255608 kB
SwapCached: 3732 kB
Active: 7593140 kB
Inactive: 4404824 kB
Active(anon): 1319736 kB
Inactive(anon): 508544 kB
Active(file): 6273404 kB
Inactive(file): 3896280 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 17912436 kB
SwapFree: 17808972 kB
Dirty: 524 kB
Writeback: 0 kB
AnonPages: 1553244 kB
Mapped: 219868 kB
Shmem: 272576 kB
Slab: 12209796 kB
SReclaimable: 11572836 kB
SUnreclaim: 636960 kB
KernelStack: 14464 kB
PageTables: 54864 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 30250596 kB
Committed_AS: 2640808 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 18432 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 2371708 kB
DirectMap2M: 22784000 kB
pjdc@grenadilla:~$
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1662378/+subscriptions