yahoo-eng-team team mailing list archive
Message #78325
[Bug 1803961] Re: Nova doesn't call migrate_volume_completion after cinder volume migration
** Changed in: nova
   Importance: Undecided => Medium

** Also affects: nova/rocky
   Importance: Undecided
       Status: New

** Also affects: nova/queens
   Importance: Undecided
       Status: New

** Also affects: nova/stein
   Importance: Undecided
       Status: New

** Changed in: nova/queens
       Status: New => In Progress

** Changed in: nova/rocky
       Status: New => In Progress

** Changed in: nova/stein
       Status: New => In Progress

** Changed in: nova/stein
   Importance: Undecided => Medium

** Changed in: nova/rocky
   Importance: Undecided => Medium

** Changed in: nova/queens
   Importance: Undecided => Medium
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1803961
Title:
Nova doesn't call migrate_volume_completion after cinder volume
migration
Status in OpenStack Compute (nova):
Fix Released
Status in OpenStack Compute (nova) queens series:
In Progress
Status in OpenStack Compute (nova) rocky series:
In Progress
Status in OpenStack Compute (nova) stein series:
In Progress
Bug description:
Originally reported in Red Hat Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1648931
Create a cinder volume, attach it to a nova instance, and migrate the
volume to a different storage host:
$ cinder create 1 --volume-type foo --name myvol
$ nova volume-attach myinstance myvol
$ cinder migrate myvol c-vol2
Everything seems to work correctly, but if we look at myinstance we
see that it's now connected to a new volume, and the original volume
is still present on the original storage host.
This is because nova didn't call cinder's migrate_volume_completion.
migrate_volume_completion would have deleted the original volume, and
changed the volume id of the new volume to be the same as the
original. The result would be that myinstance would appear to be
connected to the same volume as before.
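To make the expected flow concrete, here is a minimal, self-contained sketch of that completion step. This is not nova's actual code: the volume dicts, the `finish_swap` helper, and the `migrate_volume_completion` stub are all illustrative stand-ins for the real Cinder API call.

```python
# Illustrative sketch only: volumes are plain dicts and
# migrate_volume_completion is a stand-in for the Cinder API call.

def migrate_volume_completion(old_volume, new_volume):
    """Simulate cinder's completion step: delete the original volume
    and give the new volume the original volume's id."""
    new_volume['id'] = old_volume['id']
    old_volume['deleted'] = True
    return new_volume

def finish_swap(instance, old_volume, new_volume, is_cinder_migration):
    # After a cinder-initiated migration, nova must call back into
    # cinder so the new volume takes over the original volume's id.
    if is_cinder_migration:
        new_volume = migrate_volume_completion(old_volume, new_volume)
    instance['attached_volume_id'] = new_volume['id']
    return instance

old = {'id': 'vol-original', 'deleted': False}
new = {'id': 'vol-new', 'deleted': False}
inst = {'attached_volume_id': old['id']}

# With the completion call, the instance still appears attached to the
# same volume id as before, and the original volume is cleaned up.
finish_swap(inst, old, new, is_cinder_migration=True)
print(inst['attached_volume_id'])  # vol-original
print(old['deleted'])              # True
```

If `is_cinder_migration` is wrongly False (the bug below), the completion step is skipped: the instance ends up attached to the new volume id and the original volume is left behind on the source host, exactly the symptom described above.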
Note that there are 2 ways (that I'm aware of) to initiate a cinder
volume migration: retype and migrate. AFAICT retype is *not* affected.
In fact, I updated the relevant tempest test to try to trip it up and
it didn't fail. However, an explicit migrate *is* affected. They are
different top-level entry points in cinder, and set different state,
which is what triggers the Nova bug.
This appears to be a regression which was introduced by
https://review.openstack.org/#/c/456971/ :
    # Yes this is a tightly-coupled state check of what's going on inside
    # cinder, but we need this while we still support old (v1/v2) and
    # new style attachments (v3.44). Once we drop support for old style
    # attachments we could think about cleaning up the cinder-initiated
    # swap volume API flows.
    is_cinder_migration = (
        True if old_volume['status'] in ('retyping',
                                         'migrating') else False)
There's a bug here because AFAICT cinder never sets status to
'migrating' during any operation: it sets migration_status to
'migrating' during both retype and migrate. During retype it sets
status to 'retyping', but not during an explicit migrate.
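Given that description, the detection would need to key off migration_status rather than status alone. A hedged sketch follows (field names are taken from the bug description; this is not necessarily the exact fix that was released):

```python
def is_cinder_migration(old_volume):
    """Detect a cinder-initiated migration.

    Per the bug description, cinder sets migration_status to
    'migrating' during both retype and explicit migrate, while
    status is only set to 'retyping' during a retype. Checking
    status alone therefore misses explicit migrations.
    """
    return old_volume.get('migration_status') == 'migrating'

# The buggy check from the review above, for comparison:
def is_cinder_migration_buggy(old_volume):
    return old_volume['status'] in ('retyping', 'migrating')

# Explicit migrate: status stays e.g. 'in-use'; only
# migration_status is set, so the buggy check misses it.
migrating = {'status': 'in-use', 'migration_status': 'migrating'}
print(is_cinder_migration_buggy(migrating))  # False -> completion skipped
print(is_cinder_migration(migrating))        # True

# Retype: status is 'retyping', so both checks agree.
retyping = {'status': 'retyping', 'migration_status': 'migrating'}
print(is_cinder_migration_buggy(retyping))   # True
print(is_cinder_migration(retyping))         # True
```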
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1803961/+subscriptions