kernel-packages team mailing list archive
Message #17798
[Bug 1100843] Re: Live Migration Causes Performance Issues
From my testing, this has been fixed in the Saucy version (1.5.0) of qemu. It is fixed by this patch:
f1c72795af573b24a7da5eb52375c9aba8a37972
However, that commit was later reverted, which reintroduced the problem. The commit that fixes it again is:
211ea74022f51164a7729030b28eec90b6c99a08
So 211ea740 needs to be backported to P/Q/R to fix this issue. I have v1 packages of a Precise backport here; I've confirmed the performance difference across savevm/loadvm cycles:
http://people.canonical.com/~arges/lp1100843/precise/
** No longer affects: linux (Ubuntu)
** Also affects: qemu-kvm (Ubuntu Precise)
Importance: Undecided
Status: New
** Also affects: qemu-kvm (Ubuntu Quantal)
Importance: Undecided
Status: New
** Also affects: qemu-kvm (Ubuntu Raring)
Importance: Undecided
Status: New
** Also affects: qemu-kvm (Ubuntu Saucy)
Importance: High
Assignee: Chris J Arges (arges)
Status: In Progress
** Changed in: qemu-kvm (Ubuntu Precise)
Assignee: (unassigned) => Chris J Arges (arges)
** Changed in: qemu-kvm (Ubuntu Quantal)
Assignee: (unassigned) => Chris J Arges (arges)
** Changed in: qemu-kvm (Ubuntu Raring)
Assignee: (unassigned) => Chris J Arges (arges)
** Changed in: qemu-kvm (Ubuntu Precise)
Importance: Undecided => High
** Changed in: qemu-kvm (Ubuntu Quantal)
Importance: Undecided => High
** Changed in: qemu-kvm (Ubuntu Raring)
Importance: Undecided => High
** Changed in: qemu-kvm (Ubuntu Saucy)
Assignee: Chris J Arges (arges) => (unassigned)
** Changed in: qemu-kvm (Ubuntu Saucy)
Status: In Progress => Fix Released
** Changed in: qemu-kvm (Ubuntu Raring)
Status: New => Triaged
** Changed in: qemu-kvm (Ubuntu Quantal)
Status: New => Triaged
** Changed in: qemu-kvm (Ubuntu Precise)
Status: New => In Progress
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1100843
Title:
Live Migration Causes Performance Issues
Status in QEMU:
New
Status in “qemu-kvm” package in Ubuntu:
Fix Released
Status in “qemu-kvm” source package in Precise:
In Progress
Status in “qemu-kvm” source package in Quantal:
Triaged
Status in “qemu-kvm” source package in Raring:
Triaged
Status in “qemu-kvm” source package in Saucy:
Fix Released
Bug description:
I have 2 physical hosts running Ubuntu Precise, with qemu-kvm 1.0+noroms-
0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from Quantal,
built for Precise with pbuilder). I attempted to build qemu-1.3.0 debs
from source to test, but libvirt seems to have an issue with it that I
haven't been able to track down yet.
I'm seeing a performance degradation after live migration on Precise,
but not Lucid. These hosts are managed by libvirt (tested both
0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I
don't seem to have this problem with Lucid guests (running a number of
standard kernels: 3.2.5 mainline as well as the backported
linux-image-3.2.0-35-generic.)
I first noticed this problem with phoronix compilation tests, and
then tried lmbench, where even simple syscalls experience performance
degradation.
I've posted to the kvm mailing list, but so far the only suggestion
was that it may be related to transparent hugepages not being used
after migration; this didn't pan out. Someone else has a similar
problem here -
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
qemu command line example:
/usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu Westmere -enable-kvm -m 73728 \
  -smp 16,sockets=2,cores=8,threads=1 -uuid f89e31a4-4945-c12c-6544-149ba0746c2f \
  -no-user-config -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew \
  -no-kvm-pit-reinjection -no-shutdown \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -drive file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -drive file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-ide0-0-0,readonly=on,format=raw \
  -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
  -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3 \
  -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
The disk backend is LVM on a SAN via an FC connection (using a symlink
from /var/lib/one/datastores/0/2/disk.0 above).
ubuntu-12.04 - first boot
==========================================
Simple syscall: 0.0527 microseconds
Simple read: 0.1143 microseconds
Simple write: 0.0953 microseconds
Simple open/close: 1.0432 microseconds
Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s
ubuntu-12.04 - post live migration
==========================================
Simple syscall: 0.0621 microseconds
Simple read: 0.2485 microseconds
Simple write: 0.2252 microseconds
Simple open/close: 1.4626 microseconds
Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
I don't have phoronix results for 10.04 handy, but they were within 1% of each other...
ubuntu-10.04 - first boot
==========================================
Simple syscall: 0.0524 microseconds
Simple read: 0.1135 microseconds
Simple write: 0.0972 microseconds
Simple open/close: 1.1261 microseconds
ubuntu-10.04 - post live migration
==========================================
Simple syscall: 0.0526 microseconds
Simple read: 0.1075 microseconds
Simple write: 0.0951 microseconds
Simple open/close: 1.0413 microseconds
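To make the lmbench comparison concrete, here is a small sketch (not part of the original report; the numbers are copied verbatim from the runs above) that computes the relative slowdown per operation after migration:

```python
# Per-operation slowdown from the lmbench numbers above
# (microseconds: first boot vs. post live migration).

precise = {  # ubuntu-12.04
    "syscall":    (0.0527, 0.0621),
    "read":       (0.1143, 0.2485),
    "write":      (0.0953, 0.2252),
    "open/close": (1.0432, 1.4626),
}
lucid = {  # ubuntu-10.04
    "syscall":    (0.0524, 0.0526),
    "read":       (0.1135, 0.1075),
    "write":      (0.0972, 0.0951),
    "open/close": (1.1261, 1.0413),
}

def slowdown(before: float, after: float) -> float:
    """Relative change after migration, as a percentage."""
    return (after - before) / before * 100.0

for name, runs in (("ubuntu-12.04", precise), ("ubuntu-10.04", lucid)):
    print(name)
    for op, (before, after) in runs.items():
        print(f"  {op:<10} {slowdown(before, after):+7.1f}%")
```

This makes the asymmetry obvious: simple read and write more than double on Precise after migration, while the Lucid guest stays flat (within noise).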
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1100843/+subscriptions