openstack team mailing list archive
Message #15076
Re: Ceph + Live Migration
On 07/24/2012 05:10 PM, Mark Moseley wrote:
>> It should work, and if that workaround works, you could instead add:
>>
>>     def check_for_export(self, context, volume_id):
>>         pass
> I'll try that out. That's a heck of a lot cleaner; plus, I just picked
> that "if not volume['iscsi_target']" check because it was the only
> attribute I could find, but that was just for messing around. I wasn't
> able to find any attribute of the "volume" object indicating that it
> was an RBD volume.
>
> My real concern is that check_for_export is somehow important and
> I might be missing something else down the road. That said, I've been
> able to migrate back and forth just fine after putting that initial
> hack in.
All the export-related functions only matter for iscsi or similar
drivers that require a block device on the host. Since qemu talks to
rbd directly, there's no need for those methods to do anything. If
someone wanted to make a volume driver for the kernel rbd module, for
example, then the export methods would make sense.
>> to the RBDDriver. It looks like the check_for_export method was
>> added and relied upon without modifying all VolumeDriver subclasses,
>> so e.g. sheepdog would have the same problem.
>>> If live migration isn't currently possible for RBD, does anyone
>>> know if it's on the roadmap?
>> If this is still a problem in trunk I'll make sure it's fixed before
>> Folsom is released.
> Cool, thanks!
> Incidentally (and I can open up a new thread if you like), I was also
> going to post here about your quote in Sebastien's article:
>
> <quote>
> What’s missing is that OpenStack doesn’t yet have the ability to
> initialize a volume from an image. You have to put an image on one
> yourself before you can boot from it currently. This should be fixed
> in the next version of OpenStack. Booting off of RBD is nice because
> you can do live migration, although I haven’t tested that with
> OpenStack, just with libvirt. For Folsom, we hope to have
> copy-on-write cloning of images as well, so you can store images in
> RBD with glance, and provision instances booting off cloned RBD
> volumes in very little time.
> </quote>
>
> I just wanted to confirm I'm reading it right: with the above
> features, we'll be able to use "--block_device_mapping vda" mapped to
> an RBD volume *and* a glance-based image on the same instance, and
> it'll clone that image onto the RBD volume? Is that a correct
> interpretation? That'd indeed be sweet.
I'm not sure that'll be the exact interface, but that's the idea.
> Right now I'm either booting with an image, which forces me to use
> qcow2 plus a second RBD-based volume (with the benefit of the instance
> running immediately), or booting with both vda and vdb on RBD volumes
> (but then I have to install the VM's OS from scratch). Of course, I
> might be missing a beautiful, already-existing third option that I'm
> just not aware of :)
Another option is to do some custom hack with image files and
'rbd rm vol-foo && rbd import file vol-foo' on the backend, so you
don't need to copy the data within a VM.
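Spelled out as a sketch, that backend hack amounts to the following command sequence. This only builds the rbd CLI invocations (with example pool and volume names); actually running them, e.g. via subprocess, is left to the caller:

```python
# Sketch of the "re-import on the backend" hack: replace an RBD
# volume's contents with an image file using the rbd CLI, instead of
# copying the data inside a VM. Pool and volume names are examples.

def reimport_commands(image_file, volume, pool='rbd'):
    """Return the rbd CLI invocations that replace `volume` in `pool`
    with the contents of `image_file`."""
    return [
        # Remove the existing (empty or stale) volume...
        ['rbd', 'rm', '--pool', pool, volume],
        # ...then import the image file under the same volume name.
        ['rbd', 'import', '--pool', pool, image_file, volume],
    ]

for cmd in reimport_commands('precise-server.img', 'vol-foo'):
    print(' '.join(cmd))
# prints:
#   rbd rm --pool rbd vol-foo
#   rbd import --pool rbd precise-server.img vol-foo
```

Note the volume must not be in use by a running instance while it is removed and re-imported.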
Josh