[Bug 1939869] Re: block migration of config_drive_format=iso9660 doesn't take into account a dedicated live-migration network
*** This bug is a duplicate of bug 1969971 ***
https://bugs.launchpad.net/bugs/1969971
Since the original description of this bug comes from a charmed
deployment of nova, I have marked it as a duplicate of bug [1] below;
the fix for that bug is very likely to address this one as well.
[1] https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1969971
** Also affects: charm-nova-cloud-controller
Importance: Undecided
Status: New
** Changed in: charm-nova-cloud-controller
Status: New => Fix Committed
** This bug has been marked a duplicate of bug 1969971
Live migrations failing due to remote host identification change
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1939869
Title:
block migration of config_drive_format=iso9660 doesn't take into
account a dedicated live-migration network
Status in OpenStack Nova Cloud Controller Charm:
Fix Committed
Status in OpenStack Compute (nova):
In Progress
Bug description:
Downstream issue: https://bugs.launchpad.net/charm-nova-compute/+bug/1939719
How to reproduce:
1. Prepare two underlying networks/subnets for Libvirt+KVM based OpenStack Nova deployment (one network as main, the other as dedicated live-migration network)
2. Distribute SSH public keys under authorized_keys, pre-populate known_hosts (with the live-migration network IP addresses), and make sure StrictHostKeyChecking is NOT "no"
3. Set live_migration_scheme=ssh and set live_migration_inbound_addr to each host's address on the live-migration network in nova.conf (a sketch of these settings follows this list)
4. Launch a VM with config-drive (iso9660 as the default)
5. Live-migrate the VM with `openstack server migrate --live-migration --block-migration`
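For illustration, the nova.conf settings from step 3 would look
roughly like the following; both options live in the [libvirt]
section, and the address is a placeholder for this host's IP on the
dedicated live-migration network:
  [libvirt]
  live_migration_scheme = ssh
  live_migration_inbound_addr = 198.51.100.11
With these in place, the command in step 5 triggers the copy of
disk.config shown in the error below.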
Expected result:
Live migration works
Actual result:
Live migration fails with an error on the *destination* host at the point of:
https://opendev.org/openstack/nova/src/commit/370830e9445c9825d1e34e60cca01fdfe88d5d82/nova/virt/libvirt/driver.py#L10170-L10189
with:
Command: scp -r <source_host_fqdn>:/var/lib/nova/instances/a1cc19a2-2c34-49c7-b85a-bc4a96265fea/disk.config /var/lib/nova/instances/a1cc19a2-2c34-49c7-b85a-bc4a96265fea
Exit code: 1
Stdout: ''
Stderr: 'Host key verification failed.\r\n'
The source host FQDN is used, probably because the code relies on
instance.host, and it resolves to an IP address on the main network
instead of the live-migration network. Since the IP addresses on the
main network are not in known_hosts, the host key verification fails.
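One way to confirm this from the destination host is to check what
the source FQDN resolves to and to retry the same copy as the user
that runs the transfer (the user name 'nova' and the placeholders are
assumptions about the deployment; the path mirrors the error above):
  # on the destination host
  getent hosts <source_host_fqdn>   # resolves to the main-network IP
  sudo -u nova scp -r \
      <source_host_fqdn>:/var/lib/nova/instances/<uuid>/disk.config /tmp/
  # fails with "Host key verification failed." because only the
  # live-migration addresses are present in known_hosts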
The current workaround is either to use config_drive_format=vfat or
to add the IP addresses of the main network to known_hosts (sketched
below). But it is worth considering whether copying the iso9660
config drive could be invoked on the source side, pushing the data
over live_migration_inbound_addr, instead of pulling it from the
destination host.
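As a concrete sketch of the known_hosts workaround, the main-network
host keys of every peer can be appended to the migration user's
known_hosts on each compute host. The user name and path below are
assumptions about the deployment (charmed deployments may manage this
file differently); the peer names are placeholders:
  # run on every compute host, once per peer compute host
  ssh-keyscan -H <peer_fqdn> <peer_main_network_ip> \
      >> /var/lib/nova/.ssh/known_hosts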
To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1939869/+subscriptions