yahoo-eng-team mailing list archive: Message #07169
[Bug 1213215] Re: ServerRescueTest tearDownClass fails with volume status being in-use
** Also affects: cinder
Importance: Undecided
Status: New
** Changed in: cinder
Importance: Undecided => High
** Changed in: nova
Importance: Undecided => High
** Changed in: cinder
Status: New => Confirmed
** Changed in: nova
Status: New => Confirmed
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213215
Title:
ServerRescueTest tearDownClass fails with volume status being in-use
Status in Cinder:
Confirmed
Status in OpenStack Compute (Nova):
Confirmed
Status in Tempest:
Confirmed
Bug description:
Occasionally, running tempest in parallel fails several tests with
timeout errors. The only non-timeout failure is that
ServerRescueTest could not delete a volume because it was still
marked as in-use. My guess is that the leftover volume somehow
interferes with the other tests and causes them to time out, but I
haven't looked at the logs in detail, so this is just a wild guess.
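For context, the tearDownClass traceback below shows the cleanup
deleting the volume while Nova is still detaching it. A minimal
sketch of a guard that would avoid that race, polling until the
volume leaves in-use before deleting; the get_volume/delete_volume
names here are illustrative stand-ins for the tempest volume client
calls of the time, not the exact API:

    import time

    def delete_when_detached(client, volume_id, interval=3, timeout=300):
        # Cinder only accepts deletes when the volume status is
        # 'available' or 'error' (see the BadRequest in the
        # tearDownClass traceback below), so poll until the detach
        # completes before issuing the delete.
        deadline = time.time() + timeout
        status = None
        while time.time() < deadline:
            status = client.get_volume(volume_id)['status']
            if status in ('available', 'error'):
                return client.delete_volume(volume_id)
            time.sleep(interval)
        raise RuntimeError('volume %s still %r after %ss'
                           % (volume_id, status, timeout))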
2013-08-16 14:11:42.074 | ======================================================================
2013-08-16 14:11:42.075 | FAIL: tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_auto_disk_config[gate]
2013-08-16 14:11:42.075 | tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_auto_disk_config[gate]
2013-08-16 14:11:42.075 | ----------------------------------------------------------------------
2013-08-16 14:11:42.075 | _StringException: Empty attachments:
2013-08-16 14:11:42.075 | stderr
2013-08-16 14:11:42.076 | stdout
2013-08-16 14:11:42.076 |
2013-08-16 14:11:42.076 | Traceback (most recent call last):
2013-08-16 14:11:42.076 | File "tempest/api/compute/servers/test_disk_config.py", line 64, in test_rebuild_server_with_auto_disk_config
2013-08-16 14:11:42.076 | wait_until='ACTIVE')
2013-08-16 14:11:42.076 | File "tempest/api/compute/base.py", line 140, in create_server
2013-08-16 14:11:42.076 | server['id'], kwargs['wait_until'])
2013-08-16 14:11:42.077 | File "tempest/services/compute/json/servers_client.py", line 160, in wait_for_server_status
2013-08-16 14:11:42.077 | time.sleep(self.build_interval)
2013-08-16 14:11:42.077 | File "/usr/local/lib/python2.7/dist-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
2013-08-16 14:11:42.077 | raise TimeoutException()
2013-08-16 14:11:42.077 | TimeoutException
2013-08-16 14:11:42.077 |
2013-08-16 14:11:42.077 |
2013-08-16 14:11:42.078 | ======================================================================
2013-08-16 14:11:42.078 | FAIL: setUpClass (tempest.api.compute.images.test_image_metadata.ImagesMetadataTestXML)
2013-08-16 14:11:42.078 | setUpClass (tempest.api.compute.images.test_image_metadata.ImagesMetadataTestXML)
2013-08-16 14:11:42.078 | ----------------------------------------------------------------------
2013-08-16 14:11:42.078 | _StringException: Traceback (most recent call last):
2013-08-16 14:11:42.078 | File "tempest/api/compute/images/test_image_metadata.py", line 46, in setUpClass
2013-08-16 14:11:42.078 | cls.client.wait_for_image_status(cls.image_id, 'ACTIVE')
2013-08-16 14:11:42.079 | File "tempest/services/compute/xml/images_client.py", line 167, in wait_for_image_status
2013-08-16 14:11:42.079 | raise exceptions.TimeoutException
2013-08-16 14:11:42.079 | TimeoutException: Request timed out
2013-08-16 14:11:42.079 |
2013-08-16 14:11:42.079 |
2013-08-16 14:11:42.079 | ======================================================================
2013-08-16 14:11:42.079 | FAIL: tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_detach_volume[gate,negative]
2013-08-16 14:11:42.080 | tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_detach_volume[gate,negative]
2013-08-16 14:11:42.080 | ----------------------------------------------------------------------
2013-08-16 14:11:42.080 | _StringException: Empty attachments:
2013-08-16 14:11:42.080 | stderr
2013-08-16 14:11:42.080 | stdout
2013-08-16 14:11:42.080 |
2013-08-16 14:11:42.081 | Traceback (most recent call last):
2013-08-16 14:11:42.081 | File "tempest/api/compute/servers/test_server_rescue.py", line 184, in test_rescued_vm_detach_volume
2013-08-16 14:11:42.081 | self.servers_client.wait_for_server_status(self.server_id, 'RESCUE')
2013-08-16 14:11:42.081 | File "tempest/services/compute/json/servers_client.py", line 160, in wait_for_server_status
2013-08-16 14:11:42.081 | time.sleep(self.build_interval)
2013-08-16 14:11:42.081 | File "/usr/local/lib/python2.7/dist-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
2013-08-16 14:11:42.081 | raise TimeoutException()
2013-08-16 14:11:42.082 | TimeoutException
2013-08-16 14:11:42.082 |
2013-08-16 14:11:42.082 |
2013-08-16 14:11:42.082 | ======================================================================
2013-08-16 14:11:42.082 | FAIL: tearDownClass (tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON)
2013-08-16 14:11:42.082 | tearDownClass (tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON)
2013-08-16 14:11:42.082 | ----------------------------------------------------------------------
2013-08-16 14:11:42.083 | _StringException: Traceback (most recent call last):
2013-08-16 14:11:42.083 | File "tempest/api/compute/servers/test_server_rescue.py", line 92, in tearDownClass
2013-08-16 14:11:42.083 | client.delete_volume(str(cls.volume_to_detach['id']).strip())
2013-08-16 14:11:42.083 | File "tempest/services/compute/json/volumes_extensions_client.py", line 84, in delete_volume
2013-08-16 14:11:42.083 | return self.delete("os-volumes/%s" % str(volume_id))
2013-08-16 14:11:42.083 | File "tempest/common/rest_client.py", line 265, in delete
2013-08-16 14:11:42.083 | return self.request('DELETE', url, headers)
2013-08-16 14:11:42.084 | File "tempest/common/rest_client.py", line 387, in request
2013-08-16 14:11:42.084 | resp, resp_body)
2013-08-16 14:11:42.084 | File "tempest/common/rest_client.py", line 437, in _error_checker
2013-08-16 14:11:42.084 | raise exceptions.BadRequest(resp_body)
2013-08-16 14:11:42.084 | BadRequest: Bad request
2013-08-16 14:11:42.084 | Details: {u'badRequest': {u'message': u'Invalid input received: Invalid volume: Volume status must be available or error, but current status is: in-use', u'code': 400}}
2013-08-16 14:11:42.084 |
2013-08-16 14:11:42.085 |
2013-08-16 14:11:42.085 | ======================================================================
2013-08-16 14:11:42.085 | FAIL: process-returncode
2013-08-16 14:11:42.085 | process-returncode
2013-08-16 14:11:42.085 | ----------------------------------------------------------------------
2013-08-16 14:11:42.085 | _StringException: Binary content:
2013-08-16 14:11:42.085 | traceback (test/plain; charset="utf8")
2013-08-16 14:11:42.086 |
2013-08-16 14:11:42.086 |
2013-08-16 14:11:42.086 | ======================================================================
2013-08-16 14:11:42.086 | FAIL: process-returncode
2013-08-16 14:11:42.086 | process-returncode
2013-08-16 14:11:42.086 | ----------------------------------------------------------------------
2013-08-16 14:11:42.086 | _StringException: Binary content:
2013-08-16 14:11:42.087 | traceback (test/plain; charset="utf8")
2013-08-16 14:11:42.087 |
2013-08-16 14:11:42.087 |
2013-08-16 14:11:42.087 | ======================================================================
2013-08-16 14:11:42.087 | FAIL: process-returncode
2013-08-16 14:11:42.087 | process-returncode
2013-08-16 14:11:42.087 | ----------------------------------------------------------------------
2013-08-16 14:11:42.088 | _StringException: Binary content:
2013-08-16 14:11:42.088 | traceback (test/plain; charset="utf8")
2013-08-16 14:11:42.088 |
A set of failing logs can be found here:
http://logs.openstack.org/97/41397/5/gate/gate-tempest-devstack-vm-testr-full/a6ab053/
To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1213215/+subscriptions