yahoo-eng-team team mailing list archive
Message #69928
[Bug 1715569] Re: Live migration fails with an attached non-bootable Cinder volume (Pike)
Reviewed: https://review.openstack.org/518022
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b196857f04e41dde294eaacc2c1a991807ecc829
Submitter: Zuul
Branch: master
commit b196857f04e41dde294eaacc2c1a991807ecc829
Author: Mike Lowe <jomlowe@xxxxxx>
Date: Mon Nov 6 11:06:46 2017 -0500
live-mig: keep disk device address same
During live migration, disk devices are updated with the latest
block device mapping information for volumes. Previously this
relied on libvirt to assign addresses in order after the already
assigned devices, such as the root disk, had been accounted for. In
the latest libvirt, the unassigned devices are allocated first, which
leaves the root disk address doubly allocated and causes the migration
to fail. A running instance should never have the hardware addresses
of its disks changed mid-flight. While disk address changes during
live migration produce fatal errors for the operator, they would
likely also cause errors inside the instance and unexpected behavior
if device addresses changed during cold migration. With this change,
disk addresses are no longer updated with block device mapping
information, while every other element of the disk definition for a
volume is updated.
Closes-Bug: 1715569
Change-Id: I17af9848f4c0edcbcb101b30e45ca4afa93dcdbb
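
In outline, the fix means that when Nova regenerates a volume's <disk>
definition from the block device mapping before migration, it keeps the
<address> element already present in the running domain's XML rather
than dropping it and letting libvirt reassign it. Below is a minimal
sketch of that idea using only the standard library; the XML snippets
and the keep_disk_address helper are simplified illustrations, not
Nova's actual code path:

    # Sketch: preserve a disk's existing <address> element when its
    # libvirt XML is regenerated from updated block device mapping info.
    import xml.etree.ElementTree as ET

    # <disk> as it exists in the running domain (source of truth for
    # the device address)
    OLD_DISK = """
    <disk type='network' device='disk'>
      <source protocol='rbd' name='volumes/volume-1234'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    """

    # <disk> regenerated from the block device mapping; note the missing
    # <address>, which newer libvirt would otherwise reassign first,
    # colliding with the root disk's address
    NEW_DISK = """
    <disk type='network' device='disk'>
      <source protocol='rbd' name='volumes/volume-1234'/>
      <target dev='sdb' bus='scsi'/>
    </disk>
    """

    def keep_disk_address(old_xml, new_xml):
        """Copy the <address> element from the old disk definition into
        the new one so the hardware address never changes mid-flight."""
        old = ET.fromstring(old_xml)
        new = ET.fromstring(new_xml)
        address = old.find('address')
        if address is not None and new.find('address') is None:
            new.append(address)
        return ET.tostring(new, encoding='unicode')

    print(keep_disk_address(OLD_DISK, NEW_DISK))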
** Changed in: nova
Status: In Progress => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715569
Title:
Live migration fails with an attached non-bootable Cinder volume
(Pike)
Status in OpenStack Compute (nova):
Fix Released
Status in OpenStack Compute (nova) ocata series:
New
Status in OpenStack Compute (nova) pike series:
New
Bug description:
I have set up a fresh OpenStack HA Pike environment on Ubuntu 16.04.3.
Live migration is enabled and has worked so far. As the storage backend
I am using Ceph v12.2.0 (Luminous). When you attach a secondary volume
to a VM and try to live migrate the VM to another host, it fails with
the following exception:
2017-09-07 08:30:46.621 3246 ERROR nova.virt.libvirt.driver [req-832fc6c9-4d8e-46f8-94be-95cb5c7a4114 dddfba8e02f746799a6408a523e6cd25 ed2d2efd86dd40e7a45491d8502318d3 - default default] [instance: 414e8bc1-0b85-4e7f-8897-74b416b9caf8] Live Migration failure: unsupported configuration: Target device drive address 0:0:0 does not match source 0:0:1: libvirtError: unsupported configuration: Target device drive address 0:0:0 does not match source 0:0:1
2017-09-07 08:30:47.293 3246 ERROR nova.virt.libvirt.driver [req-832fc6c9-4d8e-46f8-94be-95cb5c7a4114 dddfba8e02f746799a6408a523e6cd25 ed2d2efd86dd40e7a45491d8502318d3 - default default] [instance: 414e8bc1-0b85-4e7f-8897-74b416b9caf8] Migration operation has aborted
When you (cold, not live) migrate the corresponding VM with the
attached volume, the migration succeeds. When you launch the VM from a
bootable volume, both migration and live migration succeed. Only live
migration with an additional attached volume fails. Because the volumes
are backed by Ceph RBD (shared storage), the migration does not require
block migration.
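
For completeness, here is a minimal reproduction sketch using
python-novaclient and python-cinderclient (the package versions listed
below). The server name, target host, volume size, and OS_* credential
environment variables are assumptions for illustration:

    # Reproduction sketch: attach a secondary Cinder volume to a running
    # server, then live migrate it.
    import os

    from keystoneauth1 import loading, session
    from cinderclient import client as cinder_client
    from novaclient import client as nova_client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url=os.environ['OS_AUTH_URL'],
        username=os.environ['OS_USERNAME'],
        password=os.environ['OS_PASSWORD'],
        project_name=os.environ['OS_PROJECT_NAME'],
        user_domain_name='Default',
        project_domain_name='Default')
    sess = session.Session(auth=auth)

    nova = nova_client.Client('2.1', session=sess)
    cinder = cinder_client.Client('3', session=sess)

    server = nova.servers.find(name='test-vm')  # assumed to exist
    volume = cinder.volumes.create(size=1, name='test-vol')
    # (in practice, wait for the volume to become 'available' first)

    # Attach the non-bootable volume, then live migrate. With Ceph RBD
    # shared storage, no block migration is needed.
    nova.volumes.create_server_volume(server.id, volume.id)
    server.live_migrate(host='compute2', block_migration=False)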
Compute node:
$ pip list | grep -E 'nova|cinder'
nova (16.0.0)
python-cinderclient (3.1.0)
python-novaclient (9.1.0)
Controller node:
$ pip list | grep -E 'nova|cinder'
cinder (11.0.0)
nova (16.0.0)
python-cinderclient (3.1.0)
python-novaclient (9.1.0)
$ ceph --version
ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
Is this normal behaviour?
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715569/+subscriptions