[Bug 1827492] Re: VMs failed to hard reboot and became ERROR after setting force_config_drive on compute nodes
Reviewed: https://review.opendev.org/659703
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=2af89cfea0531ab75b4f5765bb220073f42662d0
Submitter: Zuul
Branch: master
commit 2af89cfea0531ab75b4f5765bb220073f42662d0
Author: pandatt <guojy8993@xxxxxxx>
Date: Thu May 16 02:50:57 2019 +0800
Skip existing VMs when hosts apply force_config_drive
When a host applies the config `CONF.force_config_drive=True`, existing
VMs should not be forced to have a config drive device: they may have
been cloud-inited via the metadata service and may never have had (or
needed) a config drive at all. Only newly built instances should be
affected. The instance attribute `launched_at` serves as a clear flag
to distinguish the two kinds of VMs.
On hard reboot, existing VMs now skip the config drive enforcement and
therefore no longer hit the 'No such file or directory (config drive
device)' error.
Change-Id: I0558ece92f8657c2f6294e07965c619eb7c8dfcf
Closes-Bug: #1827492
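For context, here is a minimal sketch of the check described above,
modelled on nova's required_by() helper in nova/virt/configdrive.py;
this is an approximation, and the exact code in the merged change may
differ.

import nova.conf
from nova.objects import fields

CONF = nova.conf.CONF

def required_by(instance):
    """Return True if the instance must be given a config drive."""
    image_prop = instance.image_meta.properties.get(
        'img_config_drive', fields.ConfigDrivePolicy.OPTIONAL)

    return (instance.config_drive or
            # Only honour force_config_drive while the instance is being
            # built or rebuilt; existing VMs (launched_at already set)
            # keep whatever they were originally created with.
            (CONF.force_config_drive and not instance.launched_at) or
            image_prop == fields.ConfigDrivePolicy.MANDATORY)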
** Changed in: nova
Status: In Progress => Fix Released
https://bugs.launchpad.net/bugs/1827492
Title:
VMs failed to hard reboot and became ERROR after setting
force_config_drive on compute nodes
Status in OpenStack Compute (nova):
Fix Released
Bug description:
Description
===========
Hi guys, I ran into a problem in our POC cluster.
Initially, the cluster was configured to use ONLY the metadata service
to assist cloud-init, and none of the KVM instances were created with
the `--config-drive` option.
Later, in order to verify how to inject metadata and network
configuration into a pure L2 tenant network where DHCP/L3 services are
not allowed (and therefore the metadata service is not available
either), we configured all compute nodes with `force_config_drive=true`.
Then I noticed the following problems:
a. powered-off instances cannot be powered on
b. active instances fail to hard reboot; they get stuck powering-on
and finally go to ERROR.
After inspecting the compute log, I believe this case is not taken
into account when nova-compute regenerates the guest XML and defines
and starts the instance (see the sketch below).
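For reference, the unpatched check looks roughly like the sketch below
(again modelled on nova/virt/configdrive.py, as it stood before the
fix; the exact code may differ). With force_config_drive=true it
returns True for every instance, so the hard-reboot path regenerates a
guest XML that references a <uuid>_disk.config volume that was never
created for instances originally booted without a config drive.

import nova.conf
from nova.objects import fields

CONF = nova.conf.CONF

def required_by(instance):
    image_prop = instance.image_meta.properties.get(
        'img_config_drive', fields.ConfigDrivePolicy.OPTIONAL)

    # force_config_drive applies unconditionally here, so a hard reboot
    # of a VM built without a config drive also gets a
    # <uuid>_disk.config disk in the regenerated XML, even though that
    # RBD image was never created.
    return (instance.config_drive or
            CONF.force_config_drive or
            image_prop == fields.ConfigDrivePolicy.MANDATORY)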
Steps to reproduce
==================
1. boot a new instance without the `--config-drive` option on a certain compute host:
# nova boot --flavor 512-1-1 --image cirros \
--nic net-id=d9897882-607a-47ba-8b28-91043a5c2d58 POC
2. configure the compute host with the `force_config_drive` option and restart
the nova-compute service or service container (if kolla is used).
3. shut off the instance `POC`
# nova stop <UUID of instance `POC`>
4. start the instance `POC`
# nova start <UUID of instance `POC`>
5. hard reboot the instance `POC`
# nova reboot --hard <UUID of instance `POC`>
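(Optional check before step 3: confirm the instance really was built
without a config drive and that no backing RBD image exists; the pool
name `vms` is taken from the logs below.)
# nova show <UUID of instance `POC`> | grep config_drive
# rbd -p vms ls | grep "<UUID of instance `POC`>_disk.config"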
Expected result
===============
After step 4, instance `POC` will be active.
After step 5, instance `POC` will be active.
Actual result
=============
After step 4, instance `POC` remains shut off.
After step 5, instance `POC` is still shut off; it gets stuck
powering-on and finally goes to ERROR.
Environment
===========
1. version: OpenStack Rocky + CentOS 7
2. hypervisor: libvirt + KVM
3. storage: Ceph
4. networking: Neutron with Open vSwitch
Logs & Configs
==============
(1) nova.conf in compute node:
[DEFAULT]
...
config_drive_format=vfat
force_config_drive=true
flat_injected=true
...
(2) nova-compute.log in compute node:
2019-05-02 12:32:35.000 6 INFO nova.compute.manager [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131
380f701f5575430195526229dc143a1f - - -] [instance: 27730cc2-25ba-4ebc-a73d-f8d2e071ae92] Rebooting instance
2019-05-02 12:32:36.030 6 INFO nova.virt.libvirt.driver [-] [instance: 27730cc2-25ba-4ebc-a73d-f8d2e071ae92] Instance destroyed successfully.
2019-05-02 12:32:38.128 6 WARNING nova.virt.osinfo [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] Cannot find OS information - Reason: (No configuration information found for operating system CirrOS-64)
2019-05-02 12:32:38.129 6 WARNING nova.virt.osinfo [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] Cannot find OS information - Reason: (No configuration information found for operating system CirrOS-64)
2019-05-02 12:32:38.890 6 WARNING nova.virt.osinfo [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] Cannot find OS information - Reason: (No configuration information found for operating system CirrOS-64)
2019-05-02 12:32:38.898 6 INFO nova.virt.libvirt.driver [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] before plug_vifs
2019-05-02 12:32:38.938 6 INFO os_vif [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] Successfully plugged vif VIFBridge(active=True,address=fa:16:3e:36:a2:36,bridge_name='qbr60e943ff-4e',has_traffic_filtering=True,id=60e943ff-4e12-491b-a39c-0eb6bdca7ebb,network=Network(ed4829d3-d1b8-40fa-ab11-c59772a0d68e),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap60e943ff-4e')
2019-05-02 12:32:38.939 6 INFO nova.virt.libvirt.driver [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] after plug_vifs
2019-05-02 12:32:38.939 6 INFO nova.virt.libvirt.driver [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] after setup_basic_filtering
2019-05-02 12:32:38.940 6 INFO nova.virt.libvirt.driver [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] after prepare_instance_filter
2019-05-02 12:32:38.940 6 INFO nova.virt.libvirt.driver [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] before _create_domain
2019-05-02 12:32:40.772 6 ERROR nova.virt.libvirt.guest [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] Error launching a defined domain with XML: <domain type='kvm'>
<name>instance-000000c1</name>
<uuid>27730cc2-25ba-4ebc-a73d-f8d2e071ae92</uuid>
<metadata>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
<nova:package version="0.0.1"/>
<nova:name>jingyu</nova:name>
<nova:creationTime>2019-05-02 04:32:38</nova:creationTime>
<nova:flavor name="2-2-1">
<nova:memory>2048</nova:memory>
<nova:disk>1</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>2</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid="9fef2099c3254226a96e48311d124131">admin</nova:user>
<nova:project uuid="380f701f5575430195526229dc143a1f">admin</nova:project>
</nova:owner>
<nova:root type="image" uuid="da4e5e0b-e421-434c-a970-7b2ac680b3b5"/>
</nova:instance>
</metadata>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static' current='2'>4</vcpu>
<cputune>
<shares>2048</shares>
</cputune>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>OpenStack Foundation</entry>
<entry name='product'>OpenStack Nova</entry>
<entry name='version'>0.0.1</entry>
<entry name='serial'>d8127418-14a7-50e1-9e31-6f9fe4de8ca2</entry>
<entry name='uuid'>27730cc2-25ba-4ebc-a73d-f8d2e071ae92</entry>
<entry name='family'>Virtual Machine</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.3.0'>hvm</type>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='host-model'>
<model fallback='allow'/>
<topology sockets='2' cores='1' threads='2'/>
</cpu>
<clock offset='utc'>
<timer name='pit' tickpolicy='delay'/>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<auth username='admin'>
<secret type='ceph' uuid='acf3fb4f-94b9-45b8-bcd4-4a6a7fac1f4e'/>
</auth>
<source protocol='rbd' name='vms/27730cc2-25ba-4ebc-a73d-f8d2e071ae92_disk'>
<host name='100.2.29.231' port='6789'/>
</source>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<auth username='admin'>
<secret type='ceph' uuid='acf3fb4f-94b9-45b8-bcd4-4a6a7fac1f4e'/>
</auth>
<source protocol='rbd' name='vms/27730cc2-25ba-4ebc-a73d-f8d2e071ae92_disk.config'>
<host name='100.2.29.231' port='6789'/>
</source>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<interface type='bridge'>
<mac address='fa:16:3e:36:a2:36'/>
<source bridge='qbr60e943ff-4e'/>
<target dev='tap60e943ff-4e'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='file'>
<source path='/var/lib/nova/instances/27730cc2-25ba-4ebc-a73d-f8d2e071ae92/console.log'/>
<target port='0'/>
</serial>
<serial type='pty'>
<target port='1'/>
</serial>
<console type='file'>
<source path='/var/lib/nova/instances/27730cc2-25ba-4ebc-a73d-f8d2e071ae92/console.log'/>
<target type='serial' port='0'/>
</console>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' listen='100.2.96.159' keymap='en-us'>
<listen type='address' address='100.2.96.159'/>
</graphics>
<video>
<model type='cirrus' vram='16384' heads='1' primary='yes'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<stats period='10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
</devices>
</domain>
2019-05-02 12:32:40.773 6 ERROR nova.virt.libvirt.driver [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] [instance: 27730cc2-25ba-4ebc-a73d-f8d2e071ae92] Failed to start libvirt guest
2019-05-02 12:32:41.058 6 INFO os_vif [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] Successfully unplugged vif VIFBridge(active=True,address=fa:16:3e:36:a2:36,bridge_name='qbr60e943ff-4e',has_traffic_filtering=True,id=60e943ff-4e12-491b-a39c-0eb6bdca7ebb,network=Network(ed4829d3-d1b8-40fa-ab11-c59772a0d68e),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap60e943ff-4e')
2019-05-02 12:32:41.064 6 ERROR nova.compute.manager [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] [instance: 27730cc2-25ba-4ebc-a73d-f8d2e071ae92] Cannot reboot instance: internal error: qemu unexpectedly closed the monitor: 2019-05-02T04:32:40.560702Z qemu-kvm: -drive file=rbd:vms/27730cc2-25ba-4ebc-a73d-f8d2e071ae92_disk.config:id=admin:auth_supported=cephx\;none:mon_host=100.2.29.231\:6789,file.password-secret=virtio-disk1-secret0,format=raw,if=none,id=drive-virtio-disk1,cache=none: error reading header from 27730cc2-25ba-4ebc-a73d-f8d2e071ae92_disk.config: No such file or directory
2019-05-02 12:32:41.865 6 INFO nova.compute.manager [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] [instance: 27730cc2-25ba-4ebc-a73d-f8d2e071ae92] Successfully reverted task state from reboot_started_hard on failure for instance.
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server [req-2a9948c2-0c51-4950-9a40-3d72d362ead8 9fef2099c3254226a96e48311d124131 380f701f5575430195526229dc143a1f - - -] Exception during message handling
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 155, in _process_incoming
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 222, in dispatch
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 192, in _do_dispatch
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/exception_wrapper.py", line 75, in wrapped
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server function_name, call_dict, binary)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server self.force_reraise()
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/exception_wrapper.py", line 66, in wrapped
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 189, in decorated_function
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server LOG.warning(msg, e, instance=instance)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server self.force_reraise()
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 158, in decorated_function
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/utils.py", line 686, in decorated_function
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 217, in decorated_function
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server kwargs['instance'], e, sys.exc_info())
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server self.force_reraise()
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 205, in decorated_function
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 3048, in reboot_instance
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server self._set_instance_obj_error_state(context, instance)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server self.force_reraise()
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 3029, in reboot_instance
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server bad_volumes_callback=bad_volumes_callback)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2318, in reboot
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server block_device_info)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2422, in _hard_reboot
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server vifs_already_plugged=True)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5151, in _create_domain_and_network
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server destroy_disks_on_failure)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server self.force_reraise()
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5122, in _create_domain_and_network
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server post_xml_callback=post_xml_callback)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5035, in _create_domain
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server guest.launch(pause=pause)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 145, in launch
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server self._encoded_xml, errors='ignore')
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server self.force_reraise()
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 140, in launch
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server return self._domain.createWithFlags(flags)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server result = proxy_call(self._autowrap, f, *args, **kwargs)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server rv = execute(f, *args, **kwargs)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server six.reraise(c, e, tb)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server rv = meth(*args, **kwargs)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server File "/var/lib/kolla/venv/lib/python2.7/site-packages/libvirt.py", line 1065, in createWithFlags
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2019-05-02 12:32:41.890 6 ERROR oslo_messaging.rpc.server libvirtError: internal error: qemu unexpectedly closed the monitor: 2019-05-02T04:32:40.560702Z qemu-kvm: -drive file=rbd:vms/27730cc2-25ba-4ebc-a73d-f8d2e071ae92_disk.config:id=admin:auth_supported=cephx\;none:mon_host=100.2.29.231\:6789,file.password-secret=virtio-disk1-secret0,format=raw,if=none,id=drive-virtio-disk1,cache=none: error reading header from 27730cc2-25ba-4ebc-a73d-f8d2e071ae92_disk.config: No such file or directory