[Bug 2115870] Re: instance live migration failed after retype instance
Thank you for submitting this bug report.
The OpenStack version reported, Wallaby, is no longer a supported
release. You can find the list of currently supported releases here:
https://releases.openstack.org/
For this reason, we are marking this bug as 'Invalid'.
If you still believe this is a Nova bug and you can reproduce it on a supported OpenStack version, please feel free to update this report with the necessary details (referencing our bug reporting template: https://wiki.openstack.org/wiki/Nova/BugsTeam/BugReportTemplate) and set its status back to 'New'.
** Changed in: nova
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2115870
Title:
instance live migration failed after retype instance
Status in OpenStack Compute (nova):
Invalid
Bug description:
Description:
When the cinder retype API is used to migrate an instance's block storage while the instance is running, nova calls libvirt blockcopy to migrate the block device.
After the blockcopy operation completes, live migrating the instance fails.
Steps to reproduce:
The Cinder service has two volume types, e.g. volume_type1 and volume_type2.
1. Create a BFV (boot-from-volume) instance with volume_type1 as the root volume type.
2. Use cinder retype to migrate the instance's root disk to volume_type2.
# cinder retype --migration-policy on-demand volume_id volume_type2
3. Live migrate the instance to another node after the retype operation completes.
# nova live-migration instance_uuid host
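For reference, the same sequence can be scripted; below is a minimal sketch using python-cinderclient and python-novaclient (the credentials, endpoint, volume_id, instance_uuid and target host are placeholders, and the polling and flag details are assumptions, not part of the original report):
import time
from keystoneauth1 import session as ks_session
from keystoneauth1.identity import v3
from cinderclient import client as cinder_client
from novaclient import client as nova_client

# Placeholder credentials/endpoint; adjust for the deployment under test.
auth = v3.Password(auth_url='http://keystone:5000/v3',
                   username='admin', password='secret', project_name='admin',
                   user_domain_id='default', project_domain_id='default')
sess = ks_session.Session(auth=auth)
cinder = cinder_client.Client('3', session=sess)
nova = nova_client.Client('2.1', session=sess)

volume_id = 'VOLUME_ID'          # root volume of the BFV instance (placeholder)
instance_uuid = 'INSTANCE_UUID'  # placeholder
target_host = 'HOST'             # placeholder

# Step 2: retype the root volume, allowing cinder to migrate it between backends.
cinder.volumes.retype(volume_id, 'volume_type2', 'on-demand')

# Wait until the retype has finished and the volume reports the new type.
while cinder.volumes.get(volume_id).volume_type != 'volume_type2':
    time.sleep(5)

# Step 3: live migrate the instance; the flags may need adjusting per microversion.
nova.servers.live_migrate(instance_uuid, target_host,
                          block_migration=False, disk_over_commit=False)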
Expected result:
The instance live migration operation succeeds.
Actual result:
The instance live migration operation fails, and the source nova-compute node logs the following traceback:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 476, in fire_timers
timer()
File "/usr/local/lib/python3.6/site-packages/eventlet/hubs/timer.py", line 59, in __call__
cb(*args, **kw)
File "/usr/local/lib/python3.6/site-packages/eventlet/event.py", line 175, in _do_send
waiter.switch(result)
File "/usr/local/lib/python3.6/site-packages/eventlet/greenthread.py", line 221, in main
result = function(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/nova/utils.py", line 669, in context_wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 9971, in _live_migration_operation
LOG.error("Live Migration failure: %s", e, instance=instance)
File "/usr/local/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in __exit__
self.force_reraise()
File "/usr/local/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
raise self.value
File "/usr/local/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 9964, in _live_migration_operation
auto_converge_increment=auto_converge_increment)
File "/usr/local/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", line 639, in migrate
destination, params=params, flags=flags)
File "/usr/local/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit
result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call
rv = execute(f, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute
six.reraise(c, e, tb)
File "/usr/local/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
rv = meth(*args, **kwargs)
File "/usr/local/lib64/python3.6/site-packages/libvirt.py", line 2099, in migrateToURI3
raise libvirtError('virDomainMigrateToURI3() failed')
libvirt.libvirtError: operation failed: migration out job: unexpectedly failed
Libvirt reports the error:
2025-07-02T09:36:43.146484Z qemu-kvm: qemu_savevm_state_complete_precopy_non_iterable: bdrv_inactivate_all() failed (-1)
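One way to check, before retrying the live migration, whether the blockcopy left an active block job on the source domain (a diagnostic sketch only; the libvirt domain name and the guest target device are placeholder assumptions):
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')   # placeholder libvirt domain name

# A non-empty result means a block job (e.g. a leftover copy/mirror) is still
# attached to the disk, which would be relevant to the failed disk hand-off above.
info = dom.blockJobInfo('vda', 0)              # 'vda' is the assumed guest target device
print(info or 'no active block job on vda')

conn.close()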
Environment:
ARM architecture
Instance booted with UEFI.
Nova version: Wallaby
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2115870/+subscriptions