yahoo-eng-team team mailing list archive
Message #33762
[Bug 1465416] [NEW] os-assisted-volume-snapshots:delete doesn't work if instance is SHUTOFF
Public bug reported:
If the instance is in the SHUTOFF state, the volume state is still 'in-use', so a volume driver for NAS storage decides to call os-assisted-volume-snapshots:delete.
The only driver that supports this API is libvirt, so we end up in LibvirtDriver.volume_snapshot_delete, which in turn calls:
    result = virt_dom.blockRebase(rebase_disk, rebase_base,
                                  rebase_bw, rebase_flags)
This raises an exception if the domain is not running:
volume_snapshot_delete: delete_info: {u'type': u'qcow2', u'merge_target_file': None, u'file_to_merge': None, u'volume_id': u'e650a0cb-abbf-4bb3-843e-9fb762953c7e'} from (pid=20313) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:1826
found device at vda from (pid=20313) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:1875
disk: vda, base: None, bw: 0, flags: 0 from (pid=20313) _volume_snapshot_delete /opt/stack/nova/nova/virt/libvirt/driver.py:1947
Error occurred during volume_snapshot_delete, sending error status to Cinder.
Traceback (most recent call last):
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2020, in volume_snapshot_delete
snapshot_id, delete_info=delete_info)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1950, in _volume_snapshot_delete
rebase_bw, rebase_flags)
File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
rv = execute(f, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
six.reraise(c, e, tb)
File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
rv = meth(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/libvirt.py", line 865, in blockRebase
if ret == -1: raise libvirtError ('virDomainBlockRebase() failed', dom=self)
libvirtError: Requested operation is not valid: domain is not running
I'm using devstack, which checked out OpenStack's repos on 15.06.2015.
I'm experiencing the problem with my new volume driver https://review.openstack.org/#/c/188869/8 , but the GlusterFS and Quobyte volume drivers surely have the same bug.
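For what it's worth, one possible direction for a fix (purely a sketch with simplified, hypothetical names and signatures, not Nova's actual code) would be to use blockRebase only when the domain is running, and fall back to an offline merge with qemu-img rebase otherwise:

```python
import subprocess

def volume_snapshot_delete(virt_dom, disk_path, rebase_base,
                           run=subprocess.check_call):
    # Hypothetical helper: dispatch between the online and offline merge
    # paths. `run` is injectable so the offline path can be tested without
    # actually invoking qemu-img.
    if virt_dom is not None and virt_dom.isActive():
        # Online path: libvirt rewrites the backing chain while the guest
        # runs (this is the call Nova makes today).
        virt_dom.blockRebase(disk_path, rebase_base, 0, 0)
        return 'online'
    # Offline path: the domain is shut off, so the qcow2 chain can be
    # collapsed directly on disk. An empty -b argument flattens the image
    # so that no backing file remains.
    run(['qemu-img', 'rebase', '-b', rebase_base or '', disk_path])
    return 'offline'
```

The exact flags and error handling would of course have to match what _volume_snapshot_delete already does for the online case.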
** Affects: nova
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1465416