[Bug 2054446] Re: [SRU] Boot from ISO does not work
oracular is about to be EOL
** Changed in: nova (Ubuntu Oracular)
Status: In Progress => Won't Fix
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2054446
Title:
[SRU] Boot from ISO does not work
Status in Ubuntu Cloud Archive:
Fix Released
Status in Ubuntu Cloud Archive bobcat series:
New
Status in Ubuntu Cloud Archive caracal series:
New
Status in Ubuntu Cloud Archive dalmatian series:
New
Status in Ubuntu Cloud Archive epoxy series:
Fix Released
Status in OpenStack Compute (nova):
Fix Released
Status in nova package in Ubuntu:
Fix Released
Status in nova source package in Noble:
In Progress
Status in nova source package in Oracular:
Won't Fix
Status in nova source package in Plucky:
Fix Released
Bug description:
[Impact]
A pure ISO image cannot be booted even though the CVE fixes
(https://review.opendev.org/c/openstack/ossa/+/923301/1/ossa/OSSA-2024-001.yaml)
are already in place.
[Test Plan]
To verify this functionality manually, you can follow these steps (see
also comment https://bugs.launchpad.net/cloud-archive/+bug/2054446/comments/21
for more details):
1. Upload a pure ISO image (e.g. the Veeam server recovery ISO for
Windows mentioned in comment
https://bugs.launchpad.net/nova/+bug/2054446/comments/8) using the
'--disk-format iso' option:
openstack image create --public --file ~/images/veeamwindows.iso --disk-format iso veeamwindows.iso
2. Boot a VM using this pure ISO image:
openstack server create --wait --image veeamwindows.iso --flavor m1.small --key-name mykey --nic net-id=$(openstack network show private -fvalue -cid) i1
3. Check the libvirt XML configuration of the VM to confirm that the
disk device type is set to cdrom (a small parsing sketch follows these
steps):
$ sudo virsh dumpxml 1 |grep '<disk'
<disk type='file' device='cdrom'>
4. Optionally, access the VNC console to further verify that the ISO
boots correctly.
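As an alternative to the grep in step 3, the dumped domain XML can also
be checked programmatically. The snippet below is only a convenience
sketch, not part of the SRU; the file name dom.xml is an arbitrary
example:

# Convenience sketch only (not part of the SRU): check the dumped domain
# XML for a cdrom disk. Assumes the XML was saved first, e.g. with
#   sudo virsh dumpxml 1 > dom.xml
import sys
import xml.etree.ElementTree as ET

def boot_disk_is_cdrom(xml_path):
    """Return True if any <disk> element is attached as a cdrom."""
    root = ET.parse(xml_path).getroot()
    return any(d.get("device") == "cdrom"
               for d in root.findall("./devices/disk"))

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "dom.xml"
    print("cdrom disk found" if boot_disk_is_cdrom(path) else "no cdrom disk")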
[Where problems could occur]
Images combining ISO with MBR/GPT (hybrid images) can be booted in disk
mode with CVE detection support. However, a pure single-format ISO
image can only be booted in cdrom mode, and cdrom mode remains
unsupported without this fix even though the CVE fixes have been
supported since the Bobcat release.
There are two patches involved: the deepcopy patch
(https://review.opendev.org/c/openstack/nova/+/920374) and the iso
patch (https://review.opendev.org/c/openstack/nova/+/909611).
The deepcopy patch primarily ensures that copy.deepcopy() is used when
handling BlockDeviceMapping objects. If a regression occurs, whether in
the BlockDeviceMapping or DriverBlockDevice logic, those cases are
already covered by unit tests, so any regression should be caught by
the autopkgtests.
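For context, the hazard the deepcopy patch guards against is the usual
shallow-copy pitfall with nested data. The snippet below is purely
illustrative, using a plain dict rather than Nova's actual
BlockDeviceMapping class:

# Illustrative only -- a plain dict standing in for a BlockDeviceMapping-like
# object, to show why a shallow copy is not enough for nested data.
import copy

bdm = {"device_name": "/dev/vda",
       "connection_info": {"driver_volume_type": "iscsi"}}

shallow = copy.copy(bdm)
shallow["connection_info"]["driver_volume_type"] = "rbd"
print(bdm["connection_info"]["driver_volume_type"])  # "rbd": mutation leaked into the original

bdm["connection_info"]["driver_volume_type"] = "iscsi"  # reset
deep = copy.deepcopy(bdm)
deep["connection_info"]["driver_volume_type"] = "rbd"
print(bdm["connection_info"]["driver_volume_type"])  # "iscsi": original left intact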
The iso patch mainly adds support for booting pure ISO images in cdrom
mode. If a regression occurs, this functionality may no longer work as
expected; in that case, the steps in the [Test Plan] section above can
be used to verify it.
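Schematically, the behaviour the iso patch restores is that a pure ISO
root image is attached as a read-only cdrom instead of a virtio disk.
The function below is only a simplified sketch under that assumption,
not the actual Nova libvirt driver code (bus and target names vary with
machine type and configuration):

# Simplified sketch only -- not the actual Nova libvirt driver logic.
def guest_root_disk(disk_format):
    """Pick the guest disk attachment for a root image of the given format."""
    if disk_format == "iso":
        # A pure ISO must boot from an emulated optical drive, read-only.
        return {"device": "cdrom", "bus": "sata", "target": "sda",
                "readonly": True}
    # Other image formats keep the default paravirtualized disk.
    return {"device": "disk", "bus": "virtio", "target": "vda",
            "readonly": False}

print(guest_root_disk("iso"))    # cdrom attachment: the instance can boot
print(guest_root_disk("qcow2"))  # regular virtio disk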
[Others]
The deepcopy patch was introduced in 30.0.0, and the iso patch in
31.0.0.
These fixes are already present in upstream main and Epoxy (2025.1);
they need to be backported to Dalmatian (2024.2), Caracal (2024.1) and
Bobcat (2023.2).
Original Bug Description Below
===========
It may be https://bugs.launchpad.net/nova/+bug/1454901 resurfacing
again...
Symptoms using fresh DevStack/master:
I follow the docs https://docs.openstack.org/nova/latest/user/launch-instance-using-ISO-image.html
and use the TinyCore ISO (http://tinycorelinux.net/, a very small liveCD
ISO) for testing to speed things up.
Image is created with
openstack image create --public --file Core-14.0.iso --disk-format iso
Core-14.0.iso
Then I boot the instance as usual
openstack --os-compute-api-version 2.latest server create --image
Core-14.0.iso --flavor cirros256 --no-network iso-test
The instance is ACTIVE, but when I connect to it via noVNC it shows
that it failed to boot - "No bootable device"
The relevant part of the instance XML is
<devices>
  <emulator>/usr/bin/qemu-system-x86_64</emulator>
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none'/>
    <source file='/opt/stack/data/nova/instances/0c3d31a9-7ecf-4625-9e0e-4cd89d1b76c2/disk' index='1'/>
    <backingStore type='file' index='2'>
      <format type='raw'/>
      <source file='/opt/stack/data/nova/instances/_base/b2b7eb374cc75a24d0e5ba0eca7ca9cc1e6dc7c6'/>
      <backingStore/>
    </backingStore>
    <target dev='vda' bus='virtio'/>
    <alias name='virtio-disk0'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
  </disk>
This is Nova/master + libvirt 8.0
I checked the same on OpenStack Antelope - the result is the same.
However, on OpenStack Queens (+ libvirt 4.0) the instance boots from
the same ISO image uploaded to Glance just fine! The relevant part of
libvirt domain XML in OpenStack Queens is
<devices>
  <emulator>/usr/bin/kvm-spice</emulator>
  <disk type='file' device='cdrom'>
    <driver name='qemu' type='qcow2' cache='none'/>
    <source file='/var/lib/nova/instances/791cf357-02e6-4a7f-9310-25f5e79cf27d/disk'/>
    <backingStore type='file' index='1'>
      <format type='raw'/>
      <source file='/var/lib/nova/instances/_base/d646a5bfc2ce7e3926d0f368b8adb975b245cfd2'/>
      <backingStore/>
    </backingStore>
    <target dev='hda' bus='ide'/>
    <readonly/>
    <alias name='ide0-0-0'/>
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>
  </disk>
Notice the difference in disk device and target/address/alias. When I
manually edited the XML on devstack to look like that from Queens, the
instance booted successfully.
This looks like a regression somewhere in Nova (libvirt driver?)
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2054446/+subscriptions