
openstack team mailing list archive

Re: Libvirt Snapshots


Pedantry: It's QEMU/KVM, not libvirt, that holds the disks open.  The
pedantry does make a difference here I think...

A more sustainable option than being on the bleeding edge of libvirt
may be to bypass libvirt and issue those safe QEMU monitor commands
directly.  Libvirt would normally prevent this, but it looks like
libvirt has a QEMU monitor pass-through built in (virsh
qemu-monitor-command).
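As a sketch of what that pass-through could look like from Python: the helper below just builds the QMP JSON string a monitor command needs (here `blockdev-snapshot-sync`, the external-snapshot command), and the commented-out tail shows the actual libvirt-python pass-through call.  The domain name, drive alias, and snapshot path are illustrative, not real values.

```python
# Sketch: issuing a QEMU monitor command through libvirt's pass-through.
# The device name, domain name, and file path below are placeholders.
import json

def build_qmp_command(cmd, **args):
    """Serialize a QMP command as the JSON string the monitor expects."""
    return json.dumps({"execute": cmd, "arguments": args})

# Example: an external snapshot of a single drive.
qmp = build_qmp_command("blockdev-snapshot-sync",
                        device="drive-virtio-disk0",
                        **{"snapshot-file": "/var/lib/nova/snap.qcow2",
                           "format": "qcow2"})

# The pass-through call itself needs a running libvirtd and the
# libvirt-qemu Python bindings, so it is shown but not executed here:
#
#   import libvirt, libvirt_qemu
#   conn = libvirt.open("qemu:///system")
#   dom = conn.lookupByName("instance-00000001")
#   libvirt_qemu.qemuMonitorCommand(dom, qmp, 0)
```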

So what does a libvirt snapshot actually do?  In the libvirt source,
qemuDomainSnapshotCreateXML is the main entry point.  That calls
qemuDomainSnapshotCreateDiskActive, which pauses the VM, snapshots
each disk in series using the QEMU monitor, and then resumes the VM.

We should do the same thing, but better:

1. Suspend the domain using libvirt.
2. Snapshot each disk we want to snapshot _in parallel_, using the
libvirt QEMU monitor pass-through.  Remote volumes could use the
correct driver so that e.g. a SAN disk could make use of a hardware
snapshot.
3. Resume the domain using libvirt.
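The suspend / snapshot-in-parallel / resume flow might be sketched like this.  dom.suspend() and dom.resume() are real libvirt-python methods; snapshot_fn stands in for whatever per-disk mechanism (monitor pass-through, SAN driver) ends up being used, and is not a real API.

```python
# Sketch of the three-step flow above: pause the guest, snapshot every
# disk concurrently, then always resume -- even if a snapshot fails.
from concurrent.futures import ThreadPoolExecutor

def snapshot_disks_parallel(disks, snapshot_fn):
    """Snapshot every disk concurrently; returns {disk: result}."""
    with ThreadPoolExecutor(max_workers=len(disks)) as pool:
        futures = {disk: pool.submit(snapshot_fn, disk) for disk in disks}
        return {disk: f.result() for disk, f in futures.items()}

def snapshot_domain(dom, disks, snapshot_fn):
    dom.suspend()      # step 1: pause the guest via libvirt
    try:
        return snapshot_disks_parallel(disks, snapshot_fn)  # step 2
    finally:
        dom.resume()   # step 3: always resume the guest
```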

For Essex, it sounds like step 2 probably means "snapshot the root
volume only" using QEMU, and don't snapshot remote volumes.

Post-Essex:

1. We could try to avoid suspending/resuming the domain by using a
filesystem freeze/thaw.  It looks like libvirt has some (very new)
support for this, but as it relies on a QEMU guest agent, I suspect
we'd do better to roll our own here that could be cross-hypervisor.
2. We could allow selective snapshotting of disks.  On a database, for
example, you really do want to snapshot all the disks together.
3. We could also support "optimistic" snapshots, which just do a
snapshot without suspending anything.  The use-case is that the caller
issues a filesystem freeze e.g. over SSH, then requests a snapshot on
each disk they care about through OpenStack, then thaws the filesystem
and resumes normal operation over SSH.
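The optimistic flow could be sketched as below.  The freeze/thaw callables and snapshot_vol are supplied by the caller; the ssh_command helper and the fsfreeze wiring in the comment are purely illustrative, not a real OpenStack API.

```python
# Sketch of an "optimistic" snapshot: the caller quiesces the guest's
# filesystem itself, snapshots each volume, then thaws.  No VM suspend.
import subprocess

def ssh_command(host, command):
    # Run a command on the guest over SSH (illustrative; assumes
    # key-based auth is already set up).
    subprocess.run(["ssh", host, command], check=True)

def optimistic_snapshot(volumes, snapshot_vol, freeze, thaw):
    """Freeze, snapshot every volume, and always thaw afterwards."""
    freeze()
    try:
        return [snapshot_vol(v) for v in volumes]
    finally:
        thaw()

# Wiring it up (hypothetical host, mountpoint, and snapshot call):
#
#   optimistic_snapshot(
#       ["vol-1", "vol-2"],
#       snapshot_vol=lambda v: request_snapshot_via_openstack(v),
#       freeze=lambda: ssh_command("guest", "fsfreeze --freeze /data"),
#       thaw=lambda: ssh_command("guest", "fsfreeze --unfreeze /data"))
```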

I actually like that third option the best.  I'd like to be able to
snapshot the root volume like I do any other volume, and I'd like to
be in control of the suspension mechanism.  Suspending the entire VM
is a little extreme, particularly if I'm using a
filesystem/application that offers me a lower-impact alternative!
However, an easy way to "just snapshot everything" with a single call
is also attractive, and I'd imagine people using OpenStack directly
(rather than through code) would definitely use that.


On Fri, Mar 9, 2012 at 12:18 AM, Daniel P. Berrange <berrange@xxxxxxxxxx> wrote:
> On Thu, Mar 08, 2012 at 06:02:54PM -0800, Vishvananda Ishaya wrote:
> > So I could use some specific feedback from kvm/libvirt folks on the following questions:
> >
> > a) is it safe to use qemu-img to create/delete a snapshot in a disk file that libvirt is writing to.
> > if not:
> > b) is it safe to use qemu-img to delete a snapshot in a disk file that libvirt is writing to but not actively using.
> > if not:
> > c) is it safe to use qemu-img to create/delete a snapshot in a disk file that libvirt has an open file handle to.
> Sadly, the answer is no to all those questions. For Qcow2 files, using
> internal snapshots, you cannot make *any* changes to the qcow2 file,
> while QEMU has it open. The reasons are that QEMU may have metadata
> changes pending to the file which have not yet flushed to disk, and
> second, creating/deleting the snapshot with qemu-img may cause
> metadata changes that QEMU won't be aware of. Either way you will likely
> cause corruption of the qcow2 file.
> For these reasons, QEMU provides monitor commands for snapshotting,
> that libvirt uses whenever the guest is running. Libvirt will only
> use qemu-img, if the guest is offline.
> Regards,
> Daniel
> --
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
