yahoo-eng-team mailing list archive
Message #16409
[Bug 1332198] [NEW] Volumes are not detached when a build fails
Public bug reported:
When a build fails in the driver spawn method, attached volumes are not
detached. If the instance goes to ERROR and is later deleted, everything
gets cleaned up appropriately. But if the instance is rescheduled, the
next compute host will fail with:
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] Traceback (most recent call last):
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] File "/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/compute/manager.py", line 1786, in _prep_block_device
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] self.driver, self._await_block_device_map_created)
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] File "/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/virt/block_device.py", line 368, in attach_block_devices
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] map(_log_and_attach, block_device_mapping)
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] File "/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/virt/block_device.py", line 366, in _log_and_attach
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] bdm.attach(*attach_args, **attach_kwargs)
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] File "/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/virt/block_device.py", line 45, in wrapped
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] ret_val = method(obj, context, *args, **kwargs)
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] File "/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/virt/block_device.py", line 218, in attach
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] volume_api.check_attach(context, volume, instance=instance)
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] File "/opt/rackstack/806.0/nova/lib/python2.6/site-packages/nova/volume/cinder.py", line 249, in check_attach
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] raise exception.InvalidVolume(reason=msg)
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] InvalidVolume: Invalid volume: status must be 'available'
2014-06-18 20:09:01.954 11008 TRACE nova.compute.manager [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad]
2014-06-18 20:09:02.002 11008 ERROR nova.compute.manager [req-e76e85f6-0520-4372-b47d-a80744c912a7 None] [instance: be78cd0e-c67f-439c-bf30-885fb135d9ad] Failure prepping block device
which stops the build on the new host and, correctly, prevents a further reschedule.
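For context, the check_attach call at the bottom of the trace is enforcing a simple invariant. A minimal sketch of that check, not the actual nova source (InvalidVolume here stands in for nova.exception.InvalidVolume):

```python
# Simplified sketch of the check in nova/volume/cinder.py check_attach
# that the traceback above hits: because the failed build left the
# volume marked 'in-use', the rescheduled attach is rejected.

class InvalidVolume(Exception):
    pass

def check_attach(volume):
    # A volume may only be attached while Cinder reports it 'available'.
    if volume['status'] != 'available':
        raise InvalidVolume("Invalid volume: status must be 'available'")
```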
Cinder volumes need to be detached on a build failure.
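In rough outline, the fix is to wrap the driver spawn in error handling that detaches any Cinder-backed block devices before re-raising, so the volumes return to 'available' before a reschedule. A minimal sketch, assuming hypothetical driver, volume_api, and bdm objects rather than the actual nova interfaces:

```python
# Hypothetical sketch of build-failure cleanup: if driver.spawn raises,
# detach every volume-backed block device, then re-raise so the
# existing reschedule path still runs. Names (volume_api.detach,
# bdm.volume_id) are illustrative, not the real nova API.

def build_with_volume_cleanup(driver, volume_api, context, instance, bdms):
    try:
        driver.spawn(context, instance, bdms)
    except Exception:
        for bdm in bdms:
            volume_id = getattr(bdm, 'volume_id', None)
            if volume_id is None:
                continue  # image/ephemeral-backed device, nothing to detach
            try:
                # Best-effort: one stuck detach must not mask the
                # original spawn failure or block cleanup of the rest.
                volume_api.detach(context, volume_id)
            except Exception:
                pass
        raise
```

Re-raising is what keeps the existing error-handling and reschedule logic intact; only the volume state is repaired along the way.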
** Affects: nova
Importance: High
Assignee: Andrew Laski (alaski)
Status: In Progress
** Changed in: nova
Importance: Undecided => High
** Changed in: nova
Status: New => In Progress
** Changed in: nova
Assignee: (unassigned) => Andrew Laski (alaski)
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332198
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1332198/+subscriptions