yahoo-eng-team team mailing list archive
Message #22416
[Bug 1374458] Re: Race to delete volume in test_encrypted_cinder_volumes_luks
2014-09-26 03:32:17.403 ERROR nova.compute.manager [req-89e6d1c1-822a-4387-8af1-93c3e007bec6 TestEncryptedCinderVolumes-1122192554 TestEncryptedCinderVolumes-1697943167] [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] Failed to detach volume 704461b6-3421-4959-8113-a011e6410ede from /dev/vdb
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] Traceback (most recent call last):
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] File "/opt/stack/new/nova/nova/compute/manager.py", line 4216, in _detach_volume
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] encryption=encryption)
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1384, in detach_volume
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] encryptor.detach_volume(**encryption)
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] File "/opt/stack/new/nova/nova/volume/encryptors/cryptsetup.py", line 101, in detach_volume
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] self._close_volume(**kwargs)
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] File "/opt/stack/new/nova/nova/volume/encryptors/luks.py", line 106, in _close_volume
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] run_as_root=True, check_exit_code=True)
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] File "/opt/stack/new/nova/nova/utils.py", line 165, in execute
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] return processutils.execute(*cmd, **kwargs)
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] File "/opt/stack/new/nova/nova/openstack/common/processutils.py", line 193, in execute
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] cmd=' '.join(cmd))
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] ProcessExecutionError: Unexpected error while running command.
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf cryptsetup luksClose ip-127.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-704461b6-3421-4959-8113-a011e6410ede-lun-1
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] Exit code: 5
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] Stdout: ''
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8] Stderr: 'Device ip-127.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-704461b6-3421-4959-8113-a011e6410ede-lun-1 is busy.\n'
2014-09-26 03:32:17.403 22504 TRACE nova.compute.manager [instance: 4374ceb4-1ac1-48dc-8c87-70d7118fc8c8]
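The trace above shows `cryptsetup luksClose` exiting with code 5 ("Device ... is busy"): something still holds the dm device when nova tries to close it, so the detach fails and the volume never leaves in-use. One way such transient "busy" failures are commonly handled is a retry with backoff. The sketch below is purely illustrative (it is not nova's code; `close_with_retry` and `run_cmd` are hypothetical names), assuming the caller supplies a function that runs the close command and returns its exit code:

```python
import time

# Hypothetical constant: cryptsetup commonly reports a busy/in-use
# device with exit code 5, as seen in the trace above.
BUSY_EXIT_CODE = 5


def close_with_retry(run_cmd, device, attempts=3, delay=0.1):
    """Run run_cmd(device) -> exit code, retrying while the device is busy.

    Returns the final exit code: 0 on success, or BUSY_EXIT_CODE if the
    device never frees up within the allotted attempts.
    """
    code = BUSY_EXIT_CODE
    for attempt in range(attempts):
        code = run_cmd(device)
        if code != BUSY_EXIT_CODE:
            return code
        # Simple exponential backoff before retrying the close.
        time.sleep(delay * (2 ** attempt))
    return code
```

This only papers over the race if the other holder of the device releases it quickly; if the guest or iSCSI layer keeps the device open indefinitely, the retry will still time out.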
** No longer affects: cinder
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374458
Title:
test_encrypted_cinder_volumes_luks fails to detach encrypted volume
Status in OpenStack Compute (Nova):
Confirmed
Bug description:
  http://logs.openstack.org/98/124198/3/check/check-grenade-dsvm-icehouse/c89f18f/console.html#_2014-09-26_03_38_56_940
2014-09-26 03:38:57.259 | Traceback (most recent call last):
2014-09-26 03:38:57.259 | File "tempest/scenario/manager.py", line 142, in delete_wrapper
2014-09-26 03:38:57.259 | delete_thing(*args, **kwargs)
2014-09-26 03:38:57.259 | File "tempest/services/volume/json/volumes_client.py", line 108, in delete_volume
2014-09-26 03:38:57.259 | resp, body = self.delete("volumes/%s" % str(volume_id))
2014-09-26 03:38:57.259 | File "tempest/common/rest_client.py", line 225, in delete
2014-09-26 03:38:57.259 | return self.request('DELETE', url, extra_headers, headers, body)
2014-09-26 03:38:57.259 | File "tempest/common/rest_client.py", line 435, in request
2014-09-26 03:38:57.259 | resp, resp_body)
2014-09-26 03:38:57.259 | File "tempest/common/rest_client.py", line 484, in _error_checker
2014-09-26 03:38:57.259 | raise exceptions.BadRequest(resp_body)
2014-09-26 03:38:57.259 | BadRequest: Bad request
2014-09-26 03:38:57.260 | Details: {u'message': u'Invalid volume: Volume status must be available or error, but current status is: in-use', u'code': 400}
2014-09-26 03:38:57.260 | }}}
2014-09-26 03:38:57.260 |
2014-09-26 03:38:57.260 | traceback-2: {{{
2014-09-26 03:38:57.260 | Traceback (most recent call last):
2014-09-26 03:38:57.260 | File "tempest/common/rest_client.py", line 561, in wait_for_resource_deletion
2014-09-26 03:38:57.260 | raise exceptions.TimeoutException(message)
2014-09-26 03:38:57.260 | TimeoutException: Request timed out
2014-09-26 03:38:57.260 | Details: (TestEncryptedCinderVolumes:_run_cleanups) Failed to delete resource 704461b6-3421-4959-8113-a011e6410ede within the required time (196 s).
2014-09-26 03:38:57.260 | }}}
2014-09-26 03:38:57.260 |
2014-09-26 03:38:57.261 | traceback-3: {{{
2014-09-26 03:38:57.261 | Traceback (most recent call last):
2014-09-26 03:38:57.261 | File "tempest/services/volume/json/admin/volume_types_client.py", line 97, in delete_volume_type
2014-09-26 03:38:57.261 | resp, body = self.delete("types/%s" % str(volume_id))
2014-09-26 03:38:57.261 | File "tempest/common/rest_client.py", line 225, in delete
2014-09-26 03:38:57.261 | return self.request('DELETE', url, extra_headers, headers, body)
2014-09-26 03:38:57.261 | File "tempest/common/rest_client.py", line 435, in request
2014-09-26 03:38:57.261 | resp, resp_body)
2014-09-26 03:38:57.261 | File "tempest/common/rest_client.py", line 484, in _error_checker
2014-09-26 03:38:57.261 | raise exceptions.BadRequest(resp_body)
2014-09-26 03:38:57.261 | BadRequest: Bad request
2014-09-26 03:38:57.261 | Details: {u'message': u'Target volume type is still in use.', u'code': 400}
2014-09-26 03:38:57.262 | }}}
2014-09-26 03:38:57.262 |
2014-09-26 03:38:57.262 | Traceback (most recent call last):
2014-09-26 03:38:57.262 | File "tempest/test.py", line 142, in wrapper
2014-09-26 03:38:57.262 | return f(self, *func_args, **func_kwargs)
2014-09-26 03:38:57.262 | File "tempest/scenario/test_encrypted_cinder_volumes.py", line 56, in test_encrypted_cinder_volumes_luks
2014-09-26 03:38:57.262 | self.attach_detach_volume()
2014-09-26 03:38:57.262 | File "tempest/scenario/test_encrypted_cinder_volumes.py", line 49, in attach_detach_volume
2014-09-26 03:38:57.262 | self.nova_volume_detach()
2014-09-26 03:38:57.262 | File "tempest/scenario/manager.py", line 439, in nova_volume_detach
2014-09-26 03:38:57.262 | 'available')
2014-09-26 03:38:57.262 | File "tempest/services/volume/json/volumes_client.py", line 181, in wait_for_volume_status
2014-09-26 03:38:57.263 | raise exceptions.TimeoutException(message)
2014-09-26 03:38:57.263 | TimeoutException: Request timed out
2014-09-26 03:38:57.263 | Details: Volume 704461b6-3421-4959-8113-a011e6410ede failed to reach available status within the required time (196 s).
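  All of the tempest-side failures above chain from the same stuck detach: the volume never leaves in-use, so the DELETE returns 400 ("Volume status must be available or error"), the cleanup times out after 196 s, and the volume type cannot be deleted either. The pattern tempest relies on here is poll-until-status-then-act; a minimal, hypothetical sketch of that loop (names are illustrative, not tempest's actual API) looks like:

```python
import time


def wait_for_status(get_status, wanted, timeout=196, interval=1.0):
    """Poll get_status() until it returns `wanted`, or raise on timeout.

    get_status is a zero-argument callable returning the current volume
    status string (e.g. "in-use", "detaching", "available").
    """
    deadline = time.time() + timeout
    status = None
    while time.time() < deadline:
        status = get_status()
        if status == wanted:
            return status
        time.sleep(interval)
    raise TimeoutError(
        "volume stuck in %r, wanted %r within %s s" % (status, wanted, timeout))
```

  The race in this bug is exactly the case where the wait can never succeed: because the luksClose on the compute node failed, no amount of polling moves the volume from in-use to available, so both the status wait and the later delete retries are doomed.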
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsczogKFRlc3RFbmNyeXB0ZWRDaW5kZXJWb2x1bWVzOl9ydW5fY2xlYW51cHMpIEZhaWxlZCB0byBkZWxldGUgcmVzb3VyY2VcIiBBTkQgbWVzc2FnZTpcIndpdGhpbiB0aGUgcmVxdWlyZWQgdGltZVwiIEFORCB0YWdzOlwiY29uc29sZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDExNzM4OTc0MTMwfQ==
  130 hits in 7 days, across both the check and gate queues; all are failures.
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374458/+subscriptions