yahoo-eng-team team mailing list archive
Message #03932
[Bug 1201418] Re: Volume "in-use" although VM doesn't exist
This is likely a problem in the cleanup on the BDM (block device
mapping) side, or in the caching. There are other related issues, for
example a failed attach never being cleaned up on the compute side.
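For illustration only, the kind of per-volume detach that such
compute-side cleanup would need to issue for each block device mapping
of a deleted instance could look like the sketch below. This is a
minimal sketch against the 2013-era python-cinderclient API; the
credentials and the cleanup_bdm_volumes() helper are placeholders of
mine, not nova's actual teardown code or a proposed fix.

  from cinderclient.v1 import client as cinder_client

  # Placeholder devstack-style credentials, not taken from the report.
  USERNAME, PASSWORD, TENANT = 'admin', 'secret', 'admin'
  AUTH_URL = 'http://127.0.0.1:5000/v2.0'
  cinder = cinder_client.Client(USERNAME, PASSWORD, TENANT, AUTH_URL)

  def cleanup_bdm_volumes(volume_ids):
      """Detach every volume still recorded against a deleted instance."""
      for vol_id in volume_ids:
          volume = cinder.volumes.get(vol_id)
          if volume.status == 'in-use':
              # Tell Cinder the attachment is gone so the volume goes
              # back to 'available' and can then be deleted.
              cinder.volumes.detach(volume)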
** Changed in: cinder
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1201418
Title:
Volume "in-use" although VM doesn't exist
Status in Cinder:
Invalid
Status in OpenStack Compute (Nova):
New
Bug description:
Setup:
devstack on master using default settings.
Steps:
1) Using tempest/stress with patch https://review.openstack.org/#/c/36652/:
cd /opt/stack/tempest/tempest/stress
./run_stress.py etc/volume-assign-delete-test.json -d 60
2) The test performs the following workflow (a rough Python equivalent is sketched after the list):
- create a volume
- create a VM
- attach volume to VM
- delete VM
- delete volume
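A rough Python equivalent of this workflow (a sketch, assuming the
python-novaclient and python-cinderclient APIs of that era; credentials,
image and flavor IDs are placeholders, and the waits between steps that
the real stress action performs are omitted):

  from cinderclient.v1 import client as cinder_client
  from novaclient.v1_1 import client as nova_client

  # Placeholder devstack-style credentials and IDs.
  USERNAME, PASSWORD, TENANT = 'admin', 'secret', 'admin'
  AUTH_URL = 'http://127.0.0.1:5000/v2.0'
  IMAGE_ID, FLAVOR_ID = '<image-uuid>', '1'

  nova = nova_client.Client(USERNAME, PASSWORD, TENANT, AUTH_URL)
  cinder = cinder_client.Client(USERNAME, PASSWORD, TENANT, AUTH_URL)

  volume = cinder.volumes.create(1)                   # create a 1 GB volume
  server = nova.servers.create('testvm', IMAGE_ID, FLAVOR_ID)  # create a VM
  nova.volumes.create_server_volume(server.id, volume.id,      # attach it
                                    '/dev/vdb')
  nova.servers.delete(server)                         # delete the VM
  cinder.volumes.delete(volume)  # fails: volume status is still 'in-use'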
Problem:
Deleting the volume fails because its status is still "in-use" even
though the VM has already been deleted:
2013-07-15 12:30:58,563 31273 tempest.stress : INFO creating volume: volume663095989
2013-07-15 12:30:59,992 31273 tempest.stress : INFO created volume: cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60
2013-07-15 12:30:59,993 31273 tempest.stress : INFO creating vm: instance331154488
2013-07-15 12:31:11,097 31273 tempest.stress : INFO created vm 4e20442b-8f72-482d-9e7c-59725748784b
2013-07-15 12:31:11,098 31273 tempest.stress : INFO attach volume (cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60) to vm 4e20442b-8f72-482d-9e7c-59725748784b
2013-07-15 12:31:11,265 31273 tempest.stress : INFO volume (cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60) attached to vm 4e20442b-8f72-482d-9e7c-59725748784b
2013-07-15 12:31:11,265 31273 tempest.stress : INFO deleting vm: instance331154488
2013-07-15 12:31:13,780 31273 tempest.stress : INFO deleted vm: 4e20442b-8f72-482d-9e7c-59725748784b
2013-07-15 12:31:13,781 31273 tempest.stress : INFO deleting volume: cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/opt/stack/tempest/tempest/stress/actions/volume_attach_delete.py", line 61, in create_delete
resp, _ = manager.volumes_client.delete_volume(volume['id'])
File "/opt/stack/tempest/tempest/services/volume/json/volumes_client.py", line 86, in delete_volume
return self.delete("volumes/%s" % str(volume_id))
File "/opt/stack/tempest/tempest/common/rest_client.py", line 264, in delete
return self.request('DELETE', url, headers)
File "/opt/stack/tempest/tempest/common/rest_client.py", line 386, in request
resp, resp_body)
File "/opt/stack/tempest/tempest/common/rest_client.py", line 436, in _error_checker
raise exceptions.BadRequest(resp_body)
BadRequest: Bad request
Details: {u'badRequest': {u'message': u'Invalid volume: Volume status must be available or error', u'code': 400}}
2013-07-15 12:31:58,622 31264 tempest.stress : INFO cleaning up
nova list:
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
cinder list:
+--------------------------------------+--------+-----------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Display Name    | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+-----------------+------+-------------+----------+--------------------------------------+
| cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60 | in-use | volume663095989 | 1    | None        | False    | 4e20442b-8f72-482d-9e7c-59725748784b |
+--------------------------------------+--------+-----------------+------+-------------+----------+--------------------------------------+
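One way to confirm that the remaining attachment is an orphan (a
sketch, reusing the nova and cinder clients constructed in the earlier
sketches; the volume ID is the one from the output above):

  volume = cinder.volumes.get('cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60')
  live_servers = set(s.id for s in nova.servers.list())
  for attachment in volume.attachments:
      if attachment['server_id'] not in live_servers:
          # The volume still records an attachment to a VM that nova
          # no longer knows about.
          print('stale attachment to deleted VM %s' % attachment['server_id'])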
To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1201418/+subscriptions