yahoo-eng-team mailing list archive — Message #08673
[Bug 1217972] Re: xenapi: VBD detach failure
** Changed in: nova
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1217972
Title:
xenapi: VBD detach failure
Status in OpenStack Compute (Nova):
Fix Released
Bug description:
This text appears in the compute log:
DEBUG nova.virt.xenapi.vm_utils [req-909291ef-dbf7-4ed7-a1f4-f4b613117ac9 demo demo] Plugging VBD OpaqueRef:8d37c8d2-5a00-1442-8f19-3c97c3d9a751 ... from (pid=26384) vdi_attached_here /opt/stack/nova/nova/virt/xenapi/vm_utils.py:1911
[req-e64ceb14-3cd9-4734-b1d1-9cb416034503 demo demo] ['INTERNAL_ERROR', 'File "xapi_xenops.ml", line 2088, characters 3-9: Assertion failed']
Traceback (most recent call last):
File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 352, in unplug_vbd
session.call_xenapi('VBD.unplug', vbd_ref)
File "/opt/stack/nova/nova/virt/xenapi/driver.py", line 719, in call_xenapi
return session.xenapi_request(method, args)
File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in xenapi_request
result = _parse_result(getattr(self, methodname)(*full_params))
File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 203, in _parse_result
raise Failure(result['ErrorDescription'])
Failure: ['INTERNAL_ERROR', 'File "xapi_xenops.ml", line 2088, characters 3-9: Assertion failed']
...
[req-e64ceb14-3cd9-4734-b1d1-9cb416034503 demo demo] ['OPERATION_NOT_ALLOWED', "VBD '0d03a77b-6ae3-2e16-1a63-d771e374f513' still attached to '98bbd5ba-dc99-6c96-32be-b170cf8c9dd6'"]
Traceback (most recent call last):
File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 379, in destroy_vbd
session.call_xenapi('VBD.destroy', vbd_ref)
File "/opt/stack/nova/nova/virt/xenapi/driver.py", line 719, in call_xenapi
return session.xenapi_request(method, args)
File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in xenapi_request
result = _parse_result(getattr(self, methodname)(*full_params))
File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 203, in _parse_result
raise Failure(result['ErrorDescription'])
Failure: ['OPERATION_NOT_ALLOWED', "VBD '0d03a77b-6ae3-2e16-1a63-d771e374f513' still attached to '98bbd5ba-dc99-6c96-32be-b170cf8c9dd6'"]
[req-e64ceb14-3cd9-4734-b1d1-9cb416034503 demo demo] Destroying VBD for VDI OpaqueRef:d93e1482-aba5-e919-7733-7db2f3d2ccd6 done. from (pid=26384) vdi_attached_here /opt/stack/nova/nova/virt/xenapi/vm_utils.py:1934
[req-e64ceb14-3cd9-4734-b1d1-9cb416034503 demo demo] [instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] Failed to spawn, rolling back
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] Traceback (most recent call last):
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 498, in spawn
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] kernel_file, ramdisk_file = create_kernel_ramdisk_step(undo_mgr)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 153, in inner
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] rv = f(*args, **kwargs)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 390, in create_kernel_ramdisk_step
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] instance, context, name_label)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 191, in _create_kernel_and_ramdisk
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] vm_utils.ImageType.KERNEL)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 1019, in create_kernel_image
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] image_id, image_type)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 1331, in _fetch_disk_image
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] session, image.stream_to, image_type, virtual_size, dev)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] self.gen.next()
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 1927, in vdi_attached_here
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] unplug_vbd(session, vbd_ref)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 367, in unplug_vbd
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] _('Unable to unplug VBD %s') % vbd_ref)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] StorageError: Unable to unplug VBD OpaqueRef:f0fafa75-763d-db2d-d022-6e16b9808e44
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8]
[req-e64ceb14-3cd9-4734-b1d1-9cb416034503 demo demo] [instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] Instance failed to spawn
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] Traceback (most recent call last):
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/compute/manager.py", line 1286, in _spawn
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] block_device_info)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/driver.py", line 180, in spawn
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] admin_password, network_info, block_device_info)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 514, in spawn
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] undo_mgr.rollback_and_reraise(msg=msg, instance=instance)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/utils.py", line 981, in rollback_and_reraise
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] self._rollback()
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 498, in spawn
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] kernel_file, ramdisk_file = create_kernel_ramdisk_step(undo_mgr)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 153, in inner
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] rv = f(*args, **kwargs)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 390, in create_kernel_ramdisk_step
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] instance, context, name_label)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 191, in _create_kernel_and_ramdisk
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] vm_utils.ImageType.KERNEL)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 1019, in create_kernel_image
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] image_id, image_type)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 1331, in _fetch_disk_image
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] session, image.stream_to, image_type, virtual_size, dev)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] self.gen.next()
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 1927, in vdi_attached_here
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] unplug_vbd(session, vbd_ref)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] File "/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 367, in unplug_vbd
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] _('Unable to unplug VBD %s') % vbd_ref)
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] StorageError: Unable to unplug VBD OpaqueRef:f0fafa75-763d-db2d-d022-6e16b9808e44
[instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8]
[req-e64ceb14-3cd9-4734-b1d1-9cb416034503 demo demo] [instance: f2b36c4f-0c67-4a26-954c-aef3d2b9f7d8] Aborting claim: [Claim: 512 MB memory, 1 GB disk, 1 VCPUS] from (pid=26384) abort /opt/stack/nova/nova/compute/claims.py:97
Looking at vm_utils.py:
def unplug_vbd(session, vbd_ref):
    """Unplug VBD from VM."""
    # Call VBD.unplug on the given VBD, with a retry if we get
    # DEVICE_DETACH_REJECTED. For reasons which we don't understand,
    # we're seeing the device still in use, even when all processes
    # using the device should be dead.
    max_attempts = CONF.xenapi_num_vbd_unplug_retries + 1
    for num_attempt in xrange(1, max_attempts + 1):
        try:
            session.call_xenapi('VBD.unplug', vbd_ref)
            return
        except session.XenAPI.Failure as exc:
            err = len(exc.details) > 0 and exc.details[0]
            if err == 'DEVICE_ALREADY_DETACHED':
                LOG.info(_('VBD %s already detached'), vbd_ref)
                return
            elif err == 'DEVICE_DETACH_REJECTED':
                LOG.info(_('VBD %(vbd_ref)s detach rejected, attempt'
                           ' %(num_attempt)d/%(max_attempts)d'),
                         {'vbd_ref': vbd_ref, 'num_attempt': num_attempt,
                          'max_attempts': max_attempts})
            else:
                LOG.exception(exc)
                raise volume_utils.StorageError(
                    _('Unable to unplug VBD %s') % vbd_ref)
        greenthread.sleep(1)
    raise volume_utils.StorageError(
        _('Reached maximum number of retries trying to unplug VBD %s')
        % vbd_ref)
The code is prepared to handle detach failures, but only for the DEVICE_ALREADY_DETACHED and DEVICE_DETACH_REJECTED error codes. In the log above, xapi returns INTERNAL_ERROR instead, so the call falls through to the final else branch and raises StorageError immediately, without ever retrying. Maybe xapi returns a different error code than the retry logic expects?
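One possible direction (a sketch only, not the actual nova fix for this bug) would be to also treat INTERNAL_ERROR as retryable, since the second log excerpt shows the detach eventually being rejected as "still attached" rather than permanently failed. The names below (XenAPIFailure, call_xenapi, unplug_with_retry) are stand-ins for illustration, not real nova or XenAPI identifiers:

```python
# Sketch: retry VBD.unplug on INTERNAL_ERROR as well as
# DEVICE_DETACH_REJECTED. All names here are hypothetical stand-ins.

class XenAPIFailure(Exception):
    """Stand-in for session.XenAPI.Failure; details[0] is the error code."""
    def __init__(self, details):
        super().__init__(details)
        self.details = details

# Error codes we assume may be transient and worth retrying.
RETRYABLE = ('DEVICE_DETACH_REJECTED', 'INTERNAL_ERROR')

def unplug_with_retry(call_xenapi, vbd_ref, max_attempts=3):
    """Call VBD.unplug, retrying while xapi reports a retryable error.

    Returns the attempt number that succeeded. A real implementation
    would sleep between attempts (as nova's version does).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            call_xenapi('VBD.unplug', vbd_ref)
            return attempt
        except XenAPIFailure as exc:
            err = exc.details[0] if exc.details else None
            if err == 'DEVICE_ALREADY_DETACHED':
                return attempt  # nothing left to do
            if err not in RETRYABLE or attempt == max_attempts:
                raise
```

Whether INTERNAL_ERROR is actually safe to retry on XenServer 6.2 would need confirming against xapi's behaviour; the sketch only shows where such a change would slot into the existing retry structure.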
My XenServer is 6.2:
[root@megadodo ~]# cat /etc/xensource-inventory
BUILD_NUMBER='70446c'
PLATFORM_VERSION='1.8.0'
DOM0_VCPUS='4'
INSTALLATION_UUID='4d944fd3-8b48-47a1-9b13-155ae6923a37'
MANAGEMENT_ADDRESS_TYPE='IPv4'
PRODUCT_VERSION_TEXT_SHORT='6.2'
BRAND_CONSOLE='XenCenter'
PRIMARY_DISK='/dev/disk/by-id/scsi-SATA_SAMSUNG_HE253GJ_S2B5J90ZC12322'
PRODUCT_BRAND='XenServer'
INSTALLATION_DATE='2013-08-21 08:05:18.227510'
PLATFORM_NAME='XCP'
COMPANY_NAME_SHORT='Citrix'
PRODUCT_VERSION_TEXT='6.2'
BACKUP_PARTITION='/dev/disk/by-id/scsi-SATA_SAMSUNG_HE253GJ_S2B5J90ZC12322-part2'
PRODUCT_VERSION='6.2.0'
XEN_VERSION='4.1.5'
KERNEL_VERSION='2.6.32.43-0.4.1.xs1.8.0.835.170778xen'
MANAGEMENT_INTERFACE='xenbr0'
DOM0_MEM='752'
COMPANY_NAME='Citrix Systems, Inc.'
PRODUCT_NAME='xenenterprise'
CONTROL_DOMAIN_UUID='4e04643c-0506-408c-9a84-e45fe055ce90'
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1217972/+subscriptions