Ceph + Live Migration

 

This is more of a sanity check than anything else:

Does the RBDDriver in Diablo support live migration?

I was playing with this yesterday and couldn't get live migration to
succeed. The errors I was getting seem to trace back to the fact that
the RBDDriver doesn't override VolumeDriver's check_for_export
method, which just defaults to "NotImplementedError". Looking at the
latest Folsom code, there's still no check_for_export override in the
RBDDriver, which makes me think that in my PoC install (which I've
switched between qcow2, iSCSI and Ceph repeatedly) I've somehow made
something unhappy.

2012-07-24 13:44:29 TRACE nova.compute.manager [instance: 79c8c14f-43bf-4eaf-af94-bc578c82f921]
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
    rval = node_func(context=ctxt, **node_args)
  File "/usr/lib/python2.7/dist-packages/nova/volume/manager.py", line 294, in check_for_export
    self.driver.check_for_export(context, volume['id'])
  File "/usr/lib/python2.7/dist-packages/nova/volume/driver.py", line 459, in check_for_export
    tid = self.db.volume_get_iscsi_target_num(context, volume_id)
  File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 983, in volume_get_iscsi_target_num
    return IMPL.volume_get_iscsi_target_num(context, volume_id)
  File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 102, in wrapper
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 2455, in volume_get_iscsi_target_num
    raise exception.ISCSITargetNotFoundForVolume(volume_id=volume_id)
ISCSITargetNotFoundForVolume: No target id found for volume 130.
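
Just to be concrete about what I mean by a missing override: I'd have
expected the RBDDriver to carry a no-op along the lines of the sketch
below. This is only what I imagine it would look like, based on the
check_for_export(context, volume_id) signature visible in the trace,
not code from the tree:

class RBDDriver(VolumeDriver):
    def check_for_export(self, context, volume_id):
        # Sketch only: RBD volumes are attached natively by qemu/librbd
        # and never get an iSCSI target allocated, so there should be
        # nothing to verify before a migration.
        pass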

What's interesting about that trace is that this KVM instance is only
running RBD volumes, with no iSCSI volumes in sight. Here are the
drives as they're set up by OpenStack in the kvm command:

-drive file=rbd:nova/volume-00000082:id=rbd:key=<...deleted my key...>==:auth_supported=cephx none,if=none,id=drive-virtio-disk0,format=raw,cache=none
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive file=rbd:nova/volume-00000083:id=rbd:key=<...deleted my key...>==:auth_supported=cephx none,if=none,id=drive-virtio-disk1,format=raw,cache=none
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1

But before banging my head against this any further (and chastened
after doing exactly that with the fiemap issue, which turned out to
be an actual bug), I figured I'd check whether it's even possible. In
Sebastien Han's (completely fantastic) Ceph+OpenStack article, it
doesn't sound like he was able to get RBD-based migration working
either. Again, I don't mind debugging further, but I wanted to make
sure I wasn't chasing something that wasn't actually there.
Incidentally, if I put something like this in
/usr/lib/python2.7/dist-packages/nova/volume/manager.py, inside the
for-loop

if not volume["iscsi_target"]: continue

before the call to self.driver.check_for_export(), then live
migration *seems* to work, but obviously I might be setting myself up
for an ugly corner case.
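
For clarity, this is roughly where that guard sits. The surrounding
loop in manager.py's check_for_export() is paraphrased from memory
(only the driver call itself is taken from the trace above), so treat
everything except the added guard as approximate:

for volume in volumes:
    # Added guard: a volume with no iSCSI target row (e.g. an RBD
    # volume) has nothing exported, so skip it instead of letting the
    # driver raise ISCSITargetNotFoundForVolume.
    if not volume["iscsi_target"]:
        continue
    self.driver.check_for_export(context, volume['id'])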

If live migration isn't currently possible with RBD, does anyone know
if it's on the roadmap?

