yahoo-eng-team team mailing list archive: Message #33771
[Bug 1465416] Re: os-assisted-volume-snapshots:delete doesn't work if instance is SHUTOFF
Just wondering if the solution has to be part of Nova. Are you thinking
of adding a check in Nova that verifies the instance is running before
calling libvirt?
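For illustration only, such a guard could sit in the compute layer before the request ever reaches the libvirt driver. The helper name and call site below are assumptions, not Nova's actual code; the sketch just shows the shape of a power-state check:

    # Hypothetical guard, not the actual Nova implementation.
    from nova.compute import power_state
    from nova import exception

    def _ensure_instance_running(instance):
        # Reject assisted-snapshot operations that need a live QEMU
        # process, instead of letting libvirt fail later.
        if instance.power_state != power_state.RUNNING:
            raise exception.InstanceInvalidState(
                attr='power_state',
                instance_uuid=instance.uuid,
                state=instance.power_state,
                method='volume_snapshot_delete')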
** Changed in: nova
Status: New => Opinion
** Tags added: libvirt
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1465416
Title:
os-assisted-volume-snapshots:delete doesn't work if instance is
SHUTOFF
Status in OpenStack Compute (Nova):
Opinion
Bug description:
If the instance is in the SHUTOFF state, the volume state is still
'in-use', so a volume driver for NAS storage decides to call
os-assisted-volume-snapshots:delete.
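As a rough sketch of the Cinder side of this path (the function name and the session argument are assumptions; real volume drivers go through Cinder's nova wrapper rather than calling python-novaclient directly), the driver hands the merge work to Nova roughly like this:

    # Minimal sketch, assuming python-novaclient's
    # assisted_volume_snapshots manager.
    from novaclient import client as nova_client

    def delete_snapshot_online(session, snapshot_id, volume_id,
                               file_to_merge):
        nova = nova_client.Client('2', session=session)
        delete_info = {
            'type': 'qcow2',
            'volume_id': volume_id,
            'file_to_merge': file_to_merge,
            'merge_target_file': None,
        }
        # Nova's libvirt driver performs the actual blockRebase/blockCommit.
        nova.assisted_volume_snapshots.delete(snapshot_id, delete_info)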
The only driver which supports this API is libvirt, so we end up in
LibvirtDriver.volume_snapshot_delete, which in turn calls

    result = virt_dom.blockRebase(rebase_disk, rebase_base,
                                  rebase_bw, rebase_flags)

which raises an exception if the domain is not running (a hypothetical
guard for this is sketched after the traceback below):
volume_snapshot_delete: delete_info: {u'type': u'qcow2', u'merge_target_file': None, u'file_to_merge': None, u'volume_id': u'e650a0cb-abbf-4bb3-843e-9fb762953c7e'} from (pid=20313) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:1826
found device at vda from (pid=20313) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:1875
disk: vda, base: None, bw: 0, flags: 0 from (pid=20313) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:1947
Error occurred during volume_snapshot_delete, sending error status to Cinder.
Traceback (most recent call last):
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2020, in volume_snapshot_delete
    snapshot_id, delete_info=delete_info)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1950, in _volume_snapshot_delete
    rebase_bw, rebase_flags)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
    six.reraise(c, e, tb)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
    rv = meth(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/libvirt.py", line 865, in blockRebase
    if ret == -1: raise libvirtError ('virDomainBlockRebase() failed', dom=self)
libvirtError: Requested operation is not valid: domain is not running
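One way to avoid the libvirtError above would be to branch on whether the domain is active and fall back to rebasing the image offline with qemu-img. This is purely a sketch; the helper name and the offline fallback are assumptions, not what Nova's _volume_snapshot_delete actually does:

    # Hypothetical helper, not Nova's actual code.
    import subprocess

    def rebase_disk_image(virt_dom, disk_path, rebase_disk, rebase_base,
                          rebase_bw, rebase_flags):
        if virt_dom.isActive():
            # Live path: QEMU rebases the chain while the guest runs.
            virt_dom.blockRebase(rebase_disk, rebase_base,
                                 rebase_bw, rebase_flags)
        else:
            # Offline path: the domain is SHUTOFF, so rewrite the backing
            # chain directly on disk. An empty -b pulls all backing data
            # into the image (a safe flatten).
            backing = rebase_base or ''
            subprocess.check_call(['qemu-img', 'rebase', '-b', backing,
                                   disk_path])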
I'm using devstack, with OpenStack's repos checked out on 15.06.2015.
I'm experiencing the problem with my new volume driver https://review.openstack.org/#/c/188869/8 , but the glusterfs and quobyte volume drivers surely have the same bug.
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1465416/+subscriptions