yahoo-eng-team team mailing list archive
Message #51925
[Bug 1582684] Re: nova kilo->liberty ceph configdrive upgrade fails
Reviewed: https://review.openstack.org/317785
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=f5c9ebd56075f8eb04f9f0e683f85bacdcd68c38
Submitter: Jenkins
Branch: master
commit f5c9ebd56075f8eb04f9f0e683f85bacdcd68c38
Author: melanie witt <melwittt@xxxxxxxxx>
Date: Wed May 18 00:09:18 2016 +0000
Fall back to flat config drive if not found in rbd
Commit adecf780d3ed4315e4ce305cb1821d493650494b added support for
storing config drives in rbd. Existing instances however still
have config drives in the instance directory. If an existing
instance is stopped, an attempt to start it again fails because the
guest config is generated assuming a config drive location in rbd.
This adds a fall back to the instance directory in the case of
config drive and rbd if the image is not found in rbd.
Closes-Bug: #1582684
Change-Id: I21107ea0a148b66bee81e57cdce08e3006a60aee
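The behavior the commit describes can be sketched roughly as follows. This is an illustrative sketch only, with hypothetical helper names; the real change lives in nova's libvirt driver (see the Change-Id above).

```python
# Sketch of the fallback the fix adds (hypothetical helper names; the
# actual patch is in nova's libvirt driver).

def resolve_config_drive_path(uuid, rbd_has_image,
                              instances_dir="/var/lib/nova/instances"):
    """Prefer the rbd config drive location, falling back to the flat file.

    rbd_has_image is a callable that reports whether the config drive
    image exists in the rbd pool (a stand-in for a real librbd lookup).
    """
    rbd_name = "%s_disk.config" % uuid
    if rbd_has_image(rbd_name):
        # New-style location introduced for rbd-backed ephemeral storage.
        return "rbd:instances/%s" % rbd_name
    # Instances created before the upgrade still have a flat ISO 9660
    # config drive in the instance directory.
    return "%s/%s/disk.config" % (instances_dir, uuid)
```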
** Changed in: nova
Status: In Progress => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1582684
Title:
nova kilo->liberty ceph configdrive upgrade fails
Status in OpenStack Compute (nova):
Fix Released
Status in OpenStack Compute (nova) liberty series:
In Progress
Status in OpenStack Compute (nova) mitaka series:
In Progress
Bug description:
Using CEPH RBD as our ephemeral drive led to an issue when upgrading
from Kilo to Liberty. Our environment has "force_config_drive = True".
In Icehouse, Juno, and Kilo, this uses an ISO 9660 image created in
/var/lib/nova/instances/$UUID/disk.config
However, in Liberty, when using CEPH RBD for ephemeral storage, Nova
switches to putting this in rbd like this:
rbd:instances/${UUID}_disk.config
While this works GREAT for new VMs, it is problematic with existing
VMs as not all transition states were considered. In particular, if
you do a
nova stop $UUID
followed by a
nova start $UUID
you will find your instance still in the stopped state. The start code
ASSUMES that the config drive is already in the new rbd location (but
it doesn't actually create it there).
There is a workaround if you find instances in that state: simply cold
migrate them with
nova migrate $UUID
which redoes the config drive plumbing and creates the
rbd:instances/${UUID}_disk.config
Our permanent workaround has been to prepopulate the rbd images via a
script, though getting this bug fixed would be much better.
Liberty is a stable release and this is a loss-of-service bug, so it
should be fixed there. It is not clear whether this is also an issue in
Mitaka/Newton (likely so), as we don't yet have an environment to test
it, but with long-running VMs that still carry early flat config
drives, it would presumably exist in Mitaka as well.
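Such prepopulation amounts to running "rbd import" on each instance's flat config drive. A minimal sketch (the pool name "instances" and the paths are assumptions; adjust for your deployment):

```python
# Hypothetical sketch of prepopulating rbd from flat config drives.
# Assumes the ephemeral pool is named "instances" (site-specific).

def rbd_import_cmd(uuid, instances_dir="/var/lib/nova/instances",
                   pool="instances"):
    """Build the 'rbd import' command that copies an instance's flat
    ISO 9660 config drive into the rbd location Liberty expects."""
    src = "%s/%s/disk.config" % (instances_dir, uuid)
    dest_image = "%s_disk.config" % uuid
    return ["rbd", "import", src, "%s/%s" % (pool, dest_image)]

# e.g. run per affected instance:
#   subprocess.check_call(rbd_import_cmd(uuid))
```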
Specifics:
Liberty Nova
nova:12.0.2-38-g7bc3355.13.1b76006
CEPH:
0.94.6-1trusty
Host OS:
Ubuntu Trusty
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1582684/+subscriptions