[Bug 1282643] [NEW] block/live migration doesn't work with LVM as libvirt storage
Public bug reported:
## What we did:
We were trying to use block migration in a setup that uses LVM as the
libvirt storage backend:
nova live-migrate --block-migrate <uuid> <host-name>
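For reference, the same request can also be issued through
python-novaclient; a minimal sketch, assuming a Havana-era (v1_1)
client and placeholder credentials/hostname:

    from novaclient.v1_1 import client

    # Credentials and auth URL are placeholders; substitute your own.
    nova = client.Client('admin', 'secret', 'admin',
                         'http://keystone.example.com:5000/v2.0')

    server = nova.servers.get('6ed79840-c850-498f-9607-ffa92e7cf944')
    # block_migration=True is the API-side equivalent of --block-migrate;
    # disk_over_commit=False matches the CLI default.
    server.live_migrate(host='<host-name>', block_migration=True,
                        disk_over_commit=False)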
## Current Result:
Nothing happens: no migration takes place, but in libvirtd.log on the
destination hypervisor we saw:
error : virNetClientProgramDispatchError:175 : Failed to open file
'/dev/instances/instance-0000015f_disk': No such file or directory
/dev/instances/instance-0000015f_disk is the root disk of our
instance.
## What we found:
After some digging through the nova code, we saw that nova on the
destination host actually fails to create the instance resources. This
should happen as part of the pre_live_migration RPC call, but that call
does not receive any disks in its disk_info argument
(https://github.com/openstack/nova/blob/stable/havana/nova/virt/libvirt/driver.py#L4132)
except the config disk. We found that this is because LVM disks (e.g.
the root disk) are skipped by the driver.get_instance_disk_info method,
specifically by these lines:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4585-L4587.
That check skips any disk that is not a file, on the assumption that it
must be an attached block-storage volume. The assumption does not hold
here, because LVM-backed disks are also created with disk type "block"
(https://github.com/openstack/nova/blob/stable/havana/nova/virt/libvirt/imagebackend.py#L358).
A snippet of the instance's libvirt.xml is below:
<devices>
  <disk type="block" device="disk">
    <driver name="qemu" type="raw" cache="none"/>
    <source dev="/dev/instances/instance-00000163_disk"/>
    <target bus="virtio" dev="vda"/>
  </disk>
  <disk type="file" device="cdrom">
    <driver name="qemu" type="raw" cache="none"/>
    <source file="/var/lib/nova/instances/6ed79840-c850-498f-9607-ffa92e7cf944/disk.config"/>
    <target bus="ide" dev="hdd"/>
  </disk>
  <interface type="bridge">
    <mac address="fa:16:3e:f0:61:24"/>
    <model type="virtio"/>
    <source bridge="brqe914da2f-c4"/>
    <target dev="tap258425f6-9b"/>
  </interface>
  <serial type="file">
    <source path="/var/lib/nova/instances/6ed79840-c850-498f-9607-ffa92e7cf944/console.log"/>
  </serial>
  <serial type="pty"/>
  <input type="tablet" bus="usb"/>
  <graphics type="vnc" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
</devices>
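To make the skip concrete, here is a paraphrased, self-contained sketch
of the filtering that get_instance_disk_info performs (simplified names
and stdlib XML parsing; not the verbatim nova source). Fed the <devices>
snippet above, it drops the LVM root disk and keeps only the file-backed
config drive, which is exactly what pre_live_migration ends up seeing:

    # Paraphrased sketch of the disk filter in
    # LibvirtDriver.get_instance_disk_info (Havana era); simplified,
    # not the verbatim nova source.
    import xml.etree.ElementTree as etree

    DEVICES_XML = """<devices>
      <disk type="block" device="disk">
        <source dev="/dev/instances/instance-00000163_disk"/>
      </disk>
      <disk type="file" device="cdrom">
        <source file="/var/lib/nova/instances/6ed79840-c850-498f-9607-ffa92e7cf944/disk.config"/>
      </disk>
    </devices>"""

    def get_instance_disk_info(devices_xml):
        disk_info = []
        for disk in etree.fromstring(devices_xml).findall('.//disk'):
            source = disk.find('source')
            if source is None:
                continue
            path = source.get('file') or source.get('dev')
            if disk.get('type') != 'file':
                # The problematic check: anything that is not
                # file-backed is assumed to be an attached volume and
                # dropped. LVM-backed disks are type="block", so the
                # root disk lands here too. A fix would need another
                # way to tell nova-managed LVM disks apart from real
                # volumes (e.g. the connection info carried in
                # block_device_info).
                print('skipping %s since it looks like volume' % path)
                continue
            disk_info.append({'type': disk.get('type'), 'path': path})
        return disk_info

    print(get_instance_disk_info(DEVICES_XML))

Running this prints the "skipping" line for
/dev/instances/instance-00000163_disk and returns only the disk.config
entry, matching the near-empty disk_info we observed on the destination.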
** Affects: nova
Importance: Undecided
Status: New
https://bugs.launchpad.net/nova/+bug/1282643