[Bug 1100446] Re: libvirt driver connection validation causes unnecessary process execution with libvirt/qemu
** Changed in: nova/folsom
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1100446
Title:
libvirt driver connection validation causes unnecessary process
execution with libvirt/qemu
Status in OpenStack Compute (Nova):
Fix Released
Status in OpenStack Compute (nova) folsom series:
Fix Released
Bug description:
A VM transition from BUILD to ACTIVE status can take 26 seconds with
libvirt/qemu.
This transition is also critical to the gate system's performance.
https://github.com/openstack/nova/blob/c215b5ec79516111456dfc2a63fa0facf5946ab0/nova/virt/libvirt/driver.py#L365
This call should be replaced with something cheaper, such as a libvirt version or hostname query, or an even cheaper alternative.
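As an illustration only (not the actual Nova patch), here is a minimal sketch using the libvirt-python bindings that swaps the expensive getCapabilities() liveness check for a cheaper call such as getLibVersion() or getHostname():

import libvirt

def connection_is_alive(conn):
    # Expensive check: conn.getCapabilities() makes libvirtd probe every
    # installed qemu binary, spawning hundreds of processes (see the strace
    # output below).
    # Cheaper check: a simple version (or hostname) query round-trips to
    # libvirtd without touching qemu at all.
    try:
        conn.getLibVersion()      # or: conn.getHostname()
        return True
    except libvirt.libvirtError:
        return False

conn = libvirt.open('qemu:///system')
print(connection_is_alive(conn))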
Note:
The one-minute periodic status update also triggers this expensive call. The host architecture does not change frequently, so consider querying it only at service start-up (a sketch of that idea follows).
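A minimal sketch of that idea, using a hypothetical caching helper around the libvirt-python bindings (not the code Nova actually merged): fetch the capabilities once and reuse the cached value for the periodic updates.

import libvirt

class CapabilitiesCache(object):
    """Fetch host capabilities once at start-up and reuse them afterwards."""

    def __init__(self, conn):
        self._conn = conn
        self._caps = None

    def get(self):
        # getCapabilities() is the expensive call that forks qemu repeatedly;
        # call it only the first time and serve the cached XML afterwards.
        if self._caps is None:
            self._caps = self._conn.getCapabilities()
        return self._caps

conn = libvirt.open('qemu:///system')
caps = CapabilitiesCache(conn)
print(len(caps.get()))   # first call is slow; subsequent calls are free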
If you call getCapabilities only at startup, you can reduce the
~26 seconds to ~13 seconds!
If your qemu supports multiple architectures it is much slower, and
fixing this issue yields even greater gains.
You can see the processes executed by libvirtd with:
strace -Ff -p <libvirtd_pid> -e execve
You will see several hundred (or, with multiple architectures, thousands of)
similar execve lines:
29010 +++ exited with 0 +++
5382 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=29010, si_status=0, si_utime=1, si_stime=0} ---
29011 execve("/usr/bin/qemu-system-x86_64", ["/usr/bin/qemu-system-x86_64", "-device", "?", "-device", "pci-assign,?", "-device", "virtio-blk-pci,?", "-device", "virtio-net-pci,?", "-device", "scsi-disk,?"], [/* 2 vars */]) = 0
29011 +++ exited with 0 +++
5382 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=29011, si_status=0, si_utime=1, si_stime=0} ---
29012 execve("/usr/bin/qemu-system-x86_64", ["/usr/bin/qemu-system-x86_64", "-cpu", "?"], [/* 2 vars */]) = 0
29012 +++ exited with 0 +++
5382 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=29012, si_status=0, si_utime=1, si_stime=0} ---
29013 execve("/usr/bin/qemu-system-x86_64", ["/usr/bin/qemu-system-x86_64", "-help"], [/* 2 vars */]) = 0
29013 +++ exited with 0 +++
5382 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=29013, si_status=0, si_utime=1, si_stime=0} ---
29014 execve("/usr/bin/qemu-system-x86_64", ["/usr/bin/qemu-system-x86_64", "-device", "?", "-device", "pci-assign,?", "-device", "virtio-blk-pci,?", "-device", "virtio-net-pci,?", "-device", "scsi-disk,?"], [/* 2 vars */]) = 0
29014 +++ exited with 0 +++
(you can add the -ttt argument for time measurement)
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1100446/+subscriptions