group.of.nepali.translators team mailing list archive
Message #15871
[Bug 1708305] Re: Realtime feature mlockall: Cannot allocate memory
This bug was fixed in the package libvirt - 2.5.0-3ubuntu5.5
---------------
libvirt (2.5.0-3ubuntu5.5) zesty; urgency=medium
* d/p/bug-1708305-qemu-Fix-memory-locking-limit-calculation.patch:
Remove memlock limit when using <memoryBacking><locked/>.
(LP: #1708305).
-- Jorge Niedbalski <jorge.niedbalski@xxxxxxxxxxxxx> Fri, 11 Aug 2017
00:34:01 -0400
** Changed in: libvirt (Ubuntu Zesty)
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of the Group
of Nepali Translators (नेपाली भाषा समायोजकहरुको समूह), which is subscribed
to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1708305
Title:
Realtime feature mlockall: Cannot allocate memory
Status in Ubuntu Cloud Archive:
Fix Committed
Status in Ubuntu Cloud Archive mitaka series:
In Progress
Status in Ubuntu Cloud Archive ocata series:
In Progress
Status in libvirt package in Ubuntu:
Fix Released
Status in libvirt source package in Xenial:
Fix Committed
Status in libvirt source package in Zesty:
Fix Released
Status in libvirt source package in Artful:
Fix Released
Bug description:
[Impact]
* Guest definitions that use locked memory + hugepages fail to spawn
* Backport the upstream fix to solve that issue
[Environment]
root@buneary:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
root@buneary:~# uname -r
4.10.0-29-generic
Also reproducible with the 4.4 kernel.
[Detailed Description]
When a guest's memory backing is defined using the <locked/> element together
with hugepages, as follows:
<memoryBacking>
<hugepages>
<page size='1' unit='GiB' nodeset='0'/>
<page size='1' unit='GiB' nodeset='1'/>
</hugepages>
<nosharedpages/>
<locked/>
</memoryBacking>
(Full guest definition: http://paste.ubuntu.com/25229162/)
The guest fails to start due to the following error:
2017-08-02 20:25:03.714+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10.12 (Christian Ehrhardt <christian.ehrhardt@xxxxxxxxxxxxx> Wed, 19 Jul 2017 08:28:14 +0200), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.14), hostname: buneary.seg
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm-spice -name reproducer2 -S -machine pc-i440fx-2.5,accel=kvm,usb=off -cpu host -m 124928 -realtime mlock=on -smp 32,sockets=16,cores=1,threads=2 -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=yes,size=64424509440,host-nodes=0,policy=bind -numa node,nodeid=0,cpus=0-15,memdev=ram-node0 -object memory-backend-file,id=ram-node1,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=yes,size=66571993088,host-nodes=1,policy=bind -numa node,nodeid=1,cpus=16-31,memdev=ram-node1 -uuid 2460778d-979b-4024-9a13-0c3ca04b18ec -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-reproducer2/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/uvtool/libvirt/images/test-ds.qcow,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on
Domain id=14 is tainted: host-cpu
char device redirected to /dev/pts/1 (label charserial0)
mlockall: Cannot allocate memory
2017-08-02T20:25:37.732772Z qemu-system-x86_64: locking memory failed
2017-08-02 20:25:37.811+0000: shutting down
This seems to be because the RLIMIT_MEMLOCK value set via setrlimit is too
low for mlockall to succeed given the large amount of memory.
There is a libvirt upstream patch that enforces the presence of the
hard_limit stanza when <locked/> is used in the memory backing settings:
https://github.com/libvirt/libvirt/commit/c2e60ad0e5124482942164e5fec088157f5e716a
Memory locking can only work properly if the memory locking limit
for the QEMU process has been raised appropriately: the default one
is extremely low, so there's no way the guest will fit in there.
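To see how low the default is, the limit a spawned process such as QEMU would inherit can be inspected with Python's standard resource module (a minimal sketch; on most distributions the default soft limit is on the order of 64 KiB, far below any guest's RAM):

```python
# Minimal sketch: inspect the RLIMIT_MEMLOCK a child process such as QEMU
# would inherit. mlockall() fails with ENOMEM ("Cannot allocate memory")
# when the pages to be locked would exceed the soft limit.
import resource

def memlock_limits():
    """Return (soft, hard) RLIMIT_MEMLOCK in bytes; RLIM_INFINITY means no limit."""
    return resource.getrlimit(resource.RLIMIT_MEMLOCK)

soft, hard = memlock_limits()
print("RLIMIT_MEMLOCK: soft=%d hard=%d" % (soft, hard))
```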
The commit
https://github.com/libvirt/libvirt/commit/7e667664d28f90bf6916604a55ebad7e2d85305b
is also required when using hugepages and the locked stanza.
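The effect of those commits can be summarised in a small sketch (an assumption-laden simplification of the limit decision, not libvirt's actual C code from qemuDomainGetMemLockLimitBytes): without <locked/>, libvirt raises the limit to guest memory plus a fixed cushion; with <locked/>, no finite value can be guaranteed to cover QEMU's own overhead on top of guest RAM, so the fix removes the limit entirely.

```python
import resource

# Simplified sketch of the memlock-limit decision the fix changes; the real
# logic lives in libvirt's C code, and the exact cushion is an assumption.
def memlock_limit_bytes(guest_mem_kib, locked=False):
    if locked:
        # <memoryBacking><locked/>: every byte of guest memory plus an
        # unpredictable amount of QEMU overhead must be lockable, so a
        # fixed cushion is never safe -> remove the limit.
        return resource.RLIM_INFINITY
    # Heuristic used otherwise: guest memory plus a 1 GiB cushion for QEMU.
    return (guest_mem_kib + 1024 * 1024) * 1024
```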
[Test Case]
* Define a guest that uses the following stanzas (See for a full guest reference: http://paste.ubuntu.com/25288141/)
<memory unit='GiB'>120</memory>
<currentMemory unit='GiB'>120</currentMemory>
<memoryBacking>
<hugepages>
<page size='1' unit='GiB' nodeset='0'/>
<page size='1' unit='GiB' nodeset='1'/>
</hugepages>
<nosharedpages/>
<locked/>
</memoryBacking>
* virsh define guest.xml
* virsh start reproducer2 (virsh start takes the domain name, not the XML file)
* Without the fix, the following error will be raised and the guest
will not start.
root@buneary:/home/ubuntu# virsh start reproducer2
error: Failed to start domain reproducer2
error: internal error: process exited while connecting to monitor: mlockall: Cannot allocate memory
2017-08-11T03:59:54.936275Z qemu-system-x86_64: locking memory failed
* With the fix, the error shouldn't be displayed and the guest should start
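One way to confirm the fix took effect is to read the "Max locked memory" row from /proc/<pid>/limits for the running QEMU process (a Linux-only sketch; with a fixed libvirt and <locked/> both values should read "unlimited"):

```python
# Sketch: read the "Max locked memory" soft/hard limits of a process from
# /proc/<pid>/limits (Linux-only).
def max_locked_memory(pid):
    with open("/proc/%d/limits" % pid) as f:
        for line in f:
            if line.startswith("Max locked memory"):
                # Columns: limit name (3 words), soft, hard, units
                fields = line.split()
                return fields[3], fields[4]  # strings, may be "unlimited"
    return None
```

For example, `max_locked_memory(qemu_pid)` where qemu_pid is found via `pgrep -f reproducer2` (domain name taken from the transcript above).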
[Suggested Fix]
*
https://github.com/libvirt/libvirt/commit/7e667664d28f90bf6916604a55ebad7e2d85305b
(Proposed)
[Regression Potential]
* There is one (theoretical) thing to consider: the change increases the
memory-lock resource limit for the spawned qemu, which could conceivably
be used in an attack. Fortunately the definition has neither <locked/>
nor hugepages by default, so this is opt-in, and while an exploited guest
can do all kinds of things, it can hardly change its (virtual) physical
memory size to increase what the host might be locking.
Still worth mentioning IMHO
* The general regression potential is rather low: as said, "locked" is not
the default, and the code path only affects domains configured that
way. That makes the change rather safe for users overall, and those
using "locked" likely need it.
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1708305/+subscriptions