[Bug 1715569] Re: Live migration fails with an attached non-bootable Cinder volume (Pike)
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715569
Title:
Live migration fails with an attached non-bootable Cinder volume
(Pike)
Status in OpenStack Compute (nova):
In Progress
Status in OpenStack Compute (nova) ocata series:
New
Status in OpenStack Compute (nova) pike series:
New
Bug description:
I have set up a fresh OpenStack HA Pike environment on Ubuntu 16.04.3.
Live migration has been enabled and works so far. As the storage
backend I am using Ceph v12.2.0 (Luminous). When you attach a
secondary volume to a VM and try to live-migrate the VM to another
host, it fails with the following exception:
2017-09-07 08:30:46.621 3246 ERROR nova.virt.libvirt.driver [req-832fc6c9-4d8e-46f8-94be-95cb5c7a4114 dddfba8e02f746799a6408a523e6cd25 ed2d2efd86dd40e7a45491d8502318d3 - default default] [instance: 414e8bc1-0b85-4e7f-8897-74b416b9caf8] Live Migration failure: unsupported configuration: Target device drive address 0:0:0 does not match source 0:0:1: libvirtError: unsupported configuration: Target device drive address 0:0:0 does not match source 0:0:1
2017-09-07 08:30:47.293 3246 ERROR nova.virt.libvirt.driver [req-832fc6c9-4d8e-46f8-94be-95cb5c7a4114 dddfba8e02f746799a6408a523e6cd25 ed2d2efd86dd40e7a45491d8502318d3 - default default] [instance: 414e8bc1-0b85-4e7f-8897-74b416b9caf8] Migration operation has aborted
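For context, the two addresses in the error are the
controller:bus:unit triple from the <address type='drive'> element of
the disk in the libvirt domain XML. Something like the following on
the source host would show the attached volume at unit 1 (the instance
name and the output are illustrative, not copied from the actual
domain):
$ virsh dumpxml instance-0000XXXX | grep "address type='drive'"
<address type='drive' controller='0' bus='0' target='0' unit='0'/>  <- boot disk
<address type='drive' controller='0' bus='0' target='0' unit='1'/>  <- attached volume
The target XML generated for the migration apparently places the
attached volume back at unit 0, which libvirt rejects when it compares
the source and target configurations.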
When you cold-migrate (not live-migrate) the corresponding VM with the
attached volume, the migration succeeds. When you launch the VM from a
bootable volume, both migration and live migration succeed. Only live
migration with an additionally attached volume fails. Because the
volumes are Ceph RBD volumes, the migration does not require block
migration.
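A minimal reproduction with the clients listed below, assuming an
existing instance named "test-vm" (the names and volume size are
illustrative):
$ cinder create --name test-vol 1         # 1 GB non-bootable data volume
$ nova volume-attach test-vm <volume-id>  # attach it as a secondary disk
$ nova live-migration test-vm             # fails with the libvirtError above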
Compute node:
$ pip list | grep -E 'nova|cinder'
nova (16.0.0)
python-cinderclient (3.1.0)
python-novaclient (9.1.0)
Controller node:
$ pip list | grep -E 'nova|cinder'
cinder (11.0.0)
nova (16.0.0)
python-cinderclient (3.1.0)
python-novaclient (9.1.0)
$ ceph --version
ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
Is this normal behaviour?
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715569/+subscriptions