[Bug 1750680] Re: Nova returns a traceback when it's unable to detach a volume still in use
Reviewed: https://review.openstack.org/546423
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b16c0f10539a6c6b547a70a41c75ef723fc618ce
Submitter: Zuul
Branch: master
commit b16c0f10539a6c6b547a70a41c75ef723fc618ce
Author: Dan Smith <dansmith@xxxxxxxxxx>
Date: Tue Feb 20 14:41:35 2018 -0800
Avoid exploding if guest refuses to detach a volume
When we run detach_volume(), the guest has to respond to the ACPI
eject request in order for us to proceed. It may not do this at all
if the volume is mounted or in use, or may not have done so by the time we time out
if lots of dirty data needs flushing. Right now, we let the failure
exception bubble up to our caller and we log a nasty stack trace, which
doesn't really convey the reason (and that it's an expected and
reasonable thing to happen).
Thus, this patch catches that, logs the situation at warning level and
avoids the trace.
Change-Id: I3800b466a50b1e5f5d1e8c8a963d9a6258af67ee
Closes-Bug: #1750680
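Conceptually, the fix wraps the detach path so this expected failure is logged as a warning instead of escaping as an unhandled exception. Below is a minimal self-contained sketch of that pattern (the class and function names are stand-ins, not the exact patch):
```python
import logging

LOG = logging.getLogger(__name__)


class DeviceDetachFailed(Exception):
    """Stand-in for nova.exception.DeviceDetachFailed."""


def detach_volume(guest, device):
    try:
        # Stand-in for the real detach call, which raises
        # DeviceDetachFailed once the guest has ignored the detach
        # request for too long.
        guest.detach(device)
    except DeviceDetachFailed:
        # A guest refusing to release a mounted/in-use volume is an
        # expected situation: log a warning without the stack trace,
        # and re-raise so the caller still sees the failure.
        LOG.warning('Guest refused to detach volume %s; it is likely '
                    'mounted or still in use.', device)
        raise
```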
** Changed in: nova
Status: In Progress => Fix Released
https://bugs.launchpad.net/bugs/1750680
Title:
Nova returns a traceback when it's unable to detach a volume still in
use
Status in OpenStack Compute (nova):
Fix Released
Bug description:
Description
===========
If libvirt is unable to detach a volume because it's still in use by the guest (e.g. mounted, or with files open on it), nova logs a full traceback.
Steps to reproduce
==================
* Create an instance with a volume attached, using Heat
* Make sure there's activity on the volume (see the sketch after this list)
* Delete the stack
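For the second step, any process holding a file open on the volume is enough for the guest to refuse the eject. For example, a trivial writer run inside the guest (a sketch; the mount point /mnt/vol is illustrative):
```python
import time

# Keep the attached volume busy: an open file handle on the mounted
# filesystem is enough for the guest to refuse the ACPI eject.
with open('/mnt/vol/busy.dat', 'wb') as f:
    while True:
        f.write(b'\0' * 4096)
        f.flush()
        time.sleep(0.1)
```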
Expected result
===============
We would expect nova not to log a traceback, but rather a clean log message about its inability to detach the volume. Ideally, the exception would also be raised back to cinder or heat.
Actual result
=============
```
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall._func' failed: DeviceDetachFailed: Device detach failed for vdf: Unable to detach from guest transient domain.
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall Traceback (most recent call last):
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 137, in _run_loop
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 415, in _func
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     return self._sleep_time
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     self.force_reraise()
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 394, in _func
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 462, in _do_wait_and_retry_detach
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     device=alternative_device_name, reason=reason)
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall DeviceDetachFailed: Device detach failed for vdf: Unable to detach from guest transient domain.
```
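The looping-call frames above come from nova polling the guest after sending the detach request: nova/virt/libvirt/guest.py wraps _do_wait_and_retry_detach in oslo.service's RetryDecorator, which retries on the detach failure and re-raises it once the retry budget is exhausted, producing the ERROR above. A simplified, self-contained sketch of that pattern (the callbacks, retry counts, and device name here are illustrative):
```python
from oslo_service import loopingcall


class DeviceDetachFailed(Exception):
    """Stand-in for nova.exception.DeviceDetachFailed."""


def wait_for_detach(device_is_attached, send_detach, device='vdf'):
    # RetryDecorator re-runs the decorated function whenever it raises
    # one of the listed exceptions, sleeping an increasing interval
    # between attempts, and re-raises after max_retry_count tries.
    @loopingcall.RetryDecorator(max_retry_count=7, inc_sleep_time=2,
                                max_sleep_time=30,
                                exceptions=(DeviceDetachFailed,))
    def _do_wait_and_retry_detach():
        if device_is_attached(device):
            send_detach(device)  # ask the guest again
            raise DeviceDetachFailed(
                'Device detach failed for %s: still attached.' % device)

    _do_wait_and_retry_detach()
```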
Environment
===========
* Red Hat OpenStack 12
```
libvirt-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:28:48 2018
libvirt-client-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:07 2018
libvirt-daemon-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:02 2018
libvirt-daemon-config-network-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:06 2018
libvirt-daemon-config-nwfilter-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:05 2018
libvirt-daemon-driver-interface-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:05 2018
libvirt-daemon-driver-lxc-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:06 2018
libvirt-daemon-driver-network-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:02 2018
libvirt-daemon-driver-nodedev-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:05 2018
libvirt-daemon-driver-nwfilter-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:04 2018
libvirt-daemon-driver-qemu-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:25 2018
libvirt-daemon-driver-secret-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:04 2018
libvirt-daemon-driver-storage-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:29 2018
libvirt-daemon-driver-storage-core-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:25 2018
libvirt-daemon-driver-storage-disk-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:28 2018
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:29 2018
libvirt-daemon-driver-storage-iscsi-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:28 2018
libvirt-daemon-driver-storage-logical-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:27 2018
libvirt-daemon-driver-storage-mpath-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:27 2018
libvirt-daemon-driver-storage-rbd-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:27 2018
libvirt-daemon-driver-storage-scsi-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:27 2018
libvirt-daemon-kvm-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:29 2018
libvirt-libs-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:00 2018
libvirt-python-3.2.0-3.el7_4.1.x86_64 Fri Jan 26 15:26:04 2018
openstack-nova-api-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:29 2018
openstack-nova-common-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:20 2018
openstack-nova-compute-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:21 2018
openstack-nova-conductor-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:29 2018
openstack-nova-console-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:29 2018
openstack-nova-migration-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:28 2018
openstack-nova-novncproxy-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:28 2018
openstack-nova-placement-api-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:29 2018
openstack-nova-scheduler-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:30 2018
puppet-nova-11.4.0-2.el7ost.noarch Fri Jan 26 15:34:26 2018
python-nova-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:19 2018
python-novaclient-9.1.1-1.el7ost.noarch Fri Jan 26 15:27:39 2018
qemu-guest-agent-2.8.0-2.el7.x86_64 Fri Jan 26 14:56:57 2018
qemu-img-rhev-2.9.0-16.el7_4.13.x86_64 Fri Jan 26 15:26:03 2018
qemu-kvm-common-rhev-2.9.0-16.el7_4.13.x86_64 Fri Jan 26 15:26:07 2018
qemu-kvm-rhev-2.9.0-16.el7_4.13.x86_64 Fri Jan 26 15:27:16 2018
```