yahoo-eng-team team mailing list archive
Message #76722
[Bug 1813192] [NEW] libvirt: instance delete fails with "Cannot destroy instance, operation time out: libvirt.libvirtError: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainBlockJobAbort)" in bionic nodes (libvirt 4.0.0, qemu 2.11)
Public bug reported:
Seen here:
http://logs.openstack.org/71/605871/10/gate/tempest-full-
py3/a35b351/controller/logs/screen-n-cpu.txt.gz#_Jan_23_10_54_54_897840
Jan 23 10:54:54.897840 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: WARNING nova.virt.libvirt.driver [None req-b9596061-eba4-4729-a239-da35f738fd1c tempest-ImagesTestJSON-884514011 tempest-ImagesTestJSON-884514011] [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] Cannot destroy instance, operation time out: libvirt.libvirtError: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainBlockJobAbort)
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [None req-b9596061-eba4-4729-a239-da35f738fd1c tempest-ImagesTestJSON-884514011 tempest-ImagesTestJSON-884514011] [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] Setting instance vm_state to ERROR: nova.exception.InstancePowerOffFailure: Failed to power off instance: operation time out
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] Traceback (most recent call last):
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 890, in _destroy
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] guest.poweroff()
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 148, in poweroff
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] self._domain.destroy()
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/usr/local/lib/python3.6/dist-packages/eventlet/tpool.py", line 190, in doit
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] result = proxy_call(self._autowrap, f, *args, **kwargs)
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/usr/local/lib/python3.6/dist-packages/eventlet/tpool.py", line 148, in proxy_call
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] rv = execute(f, *args, **kwargs)
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/usr/local/lib/python3.6/dist-packages/eventlet/tpool.py", line 129, in execute
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] six.reraise(c, e, tb)
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/usr/local/lib/python3.6/dist-packages/six.py", line 693, in reraise
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] raise value
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/usr/local/lib/python3.6/dist-packages/eventlet/tpool.py", line 83, in tworker
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] rv = meth(*args, **kwargs)
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/usr/local/lib/python3.6/dist-packages/libvirt.py", line 1142, in destroy
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] if ret == -1: raise libvirtError ('virDomainDestroy() failed', dom=self)
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] libvirt.libvirtError: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainBlockJobAbort)
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8]
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] During handling of the above exception, another exception occurred:
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8]
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] Traceback (most recent call last):
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/opt/stack/nova/nova/compute/manager.py", line 2673, in do_terminate_instance
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] self._delete_instance(context, instance, bdms)
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/opt/stack/nova/nova/hooks.py", line 154, in inner
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] rv = f(*args, **kwargs)
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/opt/stack/nova/nova/compute/manager.py", line 2610, in _delete_instance
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] self._shutdown_instance(context, instance, bdms)
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/opt/stack/nova/nova/compute/manager.py", line 2495, in _shutdown_instance
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] pass
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] self.force_reraise()
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] six.reraise(self.type_, self.value, self.tb)
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/usr/local/lib/python3.6/dist-packages/six.py", line 693, in reraise
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] raise value
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/opt/stack/nova/nova/compute/manager.py", line 2489, in _shutdown_instance
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] block_device_info)
Jan 23 10:54:54.919731 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1000, in destroy
Jan 23 10:54:54.925777 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] self._destroy(instance)
Jan 23 10:54:54.925777 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 920, in _destroy
Jan 23 10:54:54.925777 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] raise exception.InstancePowerOffFailure(reason=reason)
Jan 23 10:54:54.925777 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8] nova.exception.InstancePowerOffFailure: Failed to power off instance: operation time out
Jan 23 10:54:54.925777 ubuntu-bionic-limestone-regionone-0002045447 nova-compute[19182]: ERROR nova.compute.manager [instance: c2b71d99-ed28-4074-b5b4-b4dff57221b8]
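The traceback above shows the failure shape: `virDomainDestroy()` keeps failing with a state-change-lock timeout, and the driver eventually surfaces `InstancePowerOffFailure`. The following is a minimal, self-contained sketch of that pattern only; it is not nova's actual implementation, and `destroy_with_retries`, the `attempts` knob, and the stand-in exception classes are all illustrative.

```python
class libvirtError(Exception):
    """Stand-in for libvirt.libvirtError (the real class is in libvirt-python)."""

class InstancePowerOffFailure(Exception):
    """Stand-in for nova.exception.InstancePowerOffFailure."""

def destroy_with_retries(domain, attempts=3):
    """Call domain.destroy(), retrying while libvirt reports a lock timeout.

    `attempts` is an illustrative bound; nova's real retry/timeout policy
    differs and is not reproduced here.
    """
    last_err = None
    for _ in range(attempts):
        try:
            # In the real driver this is a libvirt virDomain.destroy() call.
            return domain.destroy()
        except libvirtError as e:
            if 'cannot acquire state change lock' not in str(e):
                raise  # unrelated libvirt error: propagate unchanged
            last_err = e  # lock still held (e.g. by a block job); retry
    # Retries exhausted: surface the power-off failure, as in the log above.
    raise InstancePowerOffFailure(
        'Failed to power off instance: operation time out') from last_err
```

With a domain stub that always raises the lock-timeout error, this reproduces the `InstancePowerOffFailure` seen in the nova-compute log.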
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Cannot%20destroy%20instance%2C%20operation%20time%20out%3A%20libvirt.libvirtError%3A%20Timed%20out%20during%20operation%3A%20cannot%20acquire%20state%20change%20lock%20(held%20by%20remoteDispatchDomainBlockJobAbort)%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22&from=7d
3 hits in 7 days, in both check and gate queues, all failures, and only on
ubuntu-bionic nodes.
From the libvirtd logs, I see a lot of this for the guest that fails to
delete:
2019-01-23 10:54:25.336+0000: 22451: debug : qemuMonitorIO:767 : Error on monitor Unable to read from monitor: Connection reset by peer
2019-01-23 10:54:25.336+0000: 22451: debug : qemuMonitorIO:788 : Triggering EOF callback
2019-01-23 10:54:25.336+0000: 22451: debug : qemuProcessHandleMonitorEOF:289 : Received EOF on 0x7f25c4039340 'instance-0000000f'
2019-01-23 10:54:25.336+0000: 22451: debug : qemuProcessHandleMonitorEOF:293 : Domain is being destroyed, EOF is expected
2019-01-23 10:54:25.336+0000: 22451: debug : qemuMonitorIO:767 : Error on monitor Unable to read from monitor: Connection reset by peer
2019-01-23 10:54:25.336+0000: 22451: debug : qemuMonitorIO:788 : Triggering EOF callback
2019-01-23 10:54:25.336+0000: 22451: debug : qemuProcessHandleMonitorEOF:289 : Received EOF on 0x7f25c4039340 'instance-0000000f'
2019-01-23 10:54:25.336+0000: 22451: debug : qemuProcessHandleMonitorEOF:293 : Domain is being destroyed, EOF is expected
Judging by the identical, repeated EOF messages, it seems that libvirtd gets stuck in a loop handling monitor EOF events for this domain.
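Since the lock is held by remoteDispatchDomainBlockJobAbort, the destroy apparently races with an in-flight block job (the tempest test creates a snapshot before deleting). One diagnostic/mitigation direction is to wait for outstanding block jobs to drain before issuing destroy. A hedged sketch under stated assumptions: `wait_for_block_jobs` and `get_active_jobs` are hypothetical names, and in real libvirt the job count would come from a per-disk block job query, which is not modelled here.

```python
import time

def wait_for_block_jobs(get_active_jobs, timeout=30.0, interval=0.5):
    """Poll until no block jobs remain active, or the timeout expires.

    get_active_jobs: callable returning the number of in-flight block jobs
    for the domain (a stand-in for querying libvirt per disk).
    Returns True if the jobs drained in time, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_active_jobs() == 0:
            return True  # safe to proceed with destroy
        time.sleep(interval)
    return False  # jobs still running; destroy would likely hit the lock
```

In this bug's scenario the poll would presumably report a job that never completes, which at least turns the opaque lock timeout into an observable stuck block job.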
** Affects: nova
Importance: Medium
Status: Confirmed
** Tags: gate-failure libvirt
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1813192
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1813192/+subscriptions