yahoo-eng-team team mailing list archive - Message #78323
[Bug 1803961] Re: Nova doesn't call migrate_volume_completion after cinder volume migration
Reviewed: https://review.opendev.org/637224
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=53c3cfa7a02d684ce27800e22e00a816af44c510
Submitter: Zuul
Branch: master
commit 53c3cfa7a02d684ce27800e22e00a816af44c510
Author: Lee Yarwood <lyarwood@xxxxxxxxxx>
Date: Fri Feb 15 16:26:23 2019 +0000
Use migration_status during volume migrating and retyping
When swapping volumes Nova has to identify if the swap itself is related
to an underlying migration or retype of the volume by Cinder. Nova
would previously use the status of the volume to determine if the volume
was retyping or migrating.
However, in the migration case, where a volume is moved directly between
hosts, Cinder never gives the volume a status of migrating, so Nova
never calls the os-migrate_volume_completion cinder API to complete the
migration.
This change switches Nova to use the migration_status of the volume to
ensure that this API is called for both retypes and migrations.
Depends-On: https://review.openstack.org/#/c/639331/
Change-Id: I1bdf3431bda2da98380e0dcaa9f952e6768ca3af
Closes-bug: #1803961
** Changed in: nova
Status: In Progress => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1803961
Title:
Nova doesn't call migrate_volume_completion after cinder volume
migration
Status in OpenStack Compute (nova):
Fix Released
Bug description:
Originally reported in Red Hat Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1648931
Create a cinder volume, attach it to a nova instance, and migrate the
volume to a different storage host:
$ cinder create 1 --volume-type foo --name myvol
$ nova volume-attach myinstance myvol
$ cinder migrate myvol c-vol2
Everything seems to work correctly, but if we look at myinstance we
see that it's now connected to a new volume, and the original volume
is still present on the original storage host.
This is because nova didn't call cinder's migrate_volume_completion.
migrate_volume_completion would have deleted the original volume, and
changed the volume id of the new volume to be the same as the
original. The result would be that myinstance would appear to be
connected to the same volume as before.
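The net effect described above can be sketched in pure Python with toy dicts (this is an illustration of the ID hand-over, not the real Cinder API or its data model):

```python
# Sketch of the net effect of Cinder's os-migrate_volume_completion:
# the original volume record is removed and the new volume takes over
# the original volume's ID, so the instance's attachment appears
# unchanged. Dict keys and the 'host' field are illustrative only.

def migrate_volume_completion(volumes, src_id, dest_id):
    dest = volumes.pop(dest_id)
    src = volumes.pop(src_id)
    dest['id'] = src['id']      # new volume inherits the original ID
    volumes[src['id']] = dest   # original volume record is gone
    return volumes

vols = {
    'orig': {'id': 'orig', 'host': 'c-vol1'},
    'new':  {'id': 'new',  'host': 'c-vol2'},
}
vols = migrate_volume_completion(vols, 'orig', 'new')
print(vols)  # {'orig': {'id': 'orig', 'host': 'c-vol2'}}
```

Because Nova skipped this call, the instance instead stayed attached to the new volume under its new ID, and the original volume lingered on the source host.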
Note that there are 2 ways (that I'm aware of) to initiate a cinder
volume migration: retype and migrate. AFAICT retype is *not* affected.
In fact, I updated the relevant tempest test to try to trip it up and
it didn't fail. However, an explicit migrate *is* affected. They are
different top-level entry points in cinder, and set different state,
which is what triggers the Nova bug.
This appears to be a regression which was introduced by
https://review.openstack.org/#/c/456971/ :
  # Yes this is a tightly-coupled state check of what's going on inside
  # cinder, but we need this while we still support old (v1/v2) and
  # new style attachments (v3.44). Once we drop support for old style
  # attachments we could think about cleaning up the cinder-initiated
  # swap volume API flows.
  is_cinder_migration = (
      True if old_volume['status'] in ('retyping',
                                       'migrating') else False)
There's a bug here because AFAICT cinder never sets status to
'migrating' during any operation: it sets migration_status to
'migrating' during both retype and migrate. During retype it sets
status to 'retyping', but not during an explicit migrate.
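The difference between the buggy check and the fix can be sketched as two predicates over an example volume dict (field names follow the Cinder API; the values and function names here are illustrative, not Nova's actual code):

```python
def is_cinder_migration_old(volume):
    # Pre-fix check: relies on volume['status'], which Cinder sets to
    # 'retyping' during a retype but never to 'migrating' during an
    # explicit migrate.
    return volume['status'] in ('retyping', 'migrating')

def is_cinder_migration_new(volume):
    # Post-fix check: Cinder sets migration_status to 'migrating'
    # during both retype and migrate.
    return volume['migration_status'] == 'migrating'

# An in-use volume undergoing an explicit migrate: status stays 'in-use'.
migrated = {'status': 'in-use', 'migration_status': 'migrating'}
print(is_cinder_migration_old(migrated))  # False: completion was skipped
print(is_cinder_migration_new(migrated))  # True: completion API is called
```

For a retype the old check happened to work, since status is set to 'retyping'; only the explicit-migrate path fell through, which matches the behaviour reported above.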
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1803961/+subscriptions