
[Bug 1609984] Re: volume-detach fails for shelved instance

 

Reviewed:  https://review.openstack.org/257853
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=33510d4be990417d2a3a428106f6f745db5af6ed
Submitter: Jenkins
Branch:    master

commit 33510d4be990417d2a3a428106f6f745db5af6ed
Author: Andrea Rosa <andrea.rosa@xxxxxx>
Date:   Tue Dec 15 11:46:10 2015 +0000

    Use stashed volume connector in _local_cleanup_bdm_volumes
    
    When we perform a local delete in the compute API during the volume
    cleanup, Nova calls volume_api.terminate_connection passing a fake
    volume connector. That call is useless and has no real effect on the
    volume server side.
    
    With a686185fc02ec421fd27270a343c19f668b95da6 in Mitaka we started
    stashing the connector in bdm.connection_info, so this change uses
    the stashed connector when it is available. It will not be available
    for attachments made before that change, or for volumes attached to
    an instance in shelved_offloaded state that was never unshelved
    (because the actual volume attach for those instances happens in the
    compute manager after the instance is unshelved).
    
    If we can't find a stashed connector, or we find one whose host does
    not match instance.host, we skip the terminate_connection call
    entirely, since calling it with a fake connector can make some Cinder
    volume backends fail and leave the volume orphaned.
    
    Closes-Bug: #1609984
    
    Co-Authored-By: Matt Riedemann <mriedem@xxxxxxxxxx>
    
    Change-Id: I9f9ead51238e27fa45084c8e3d3edee76a8b0218
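
For readers following along in the code, here is a minimal, hedged sketch of
the fallback logic the commit describes; the helper name and the exact
attribute access are illustrative, not necessarily the actual Nova code:

    import json

    def _get_stashed_connector(bdm, instance):
        """Return the connector stashed in bdm.connection_info, or None.

        A None return means terminate_connection should be skipped, as the
        commit message above explains.
        """
        connector = None
        if bdm.connection_info:
            connector = json.loads(bdm.connection_info).get('connector')
        if connector and connector.get('host') != instance.host:
            # The stashed connector was created on a different host than
            # the one the instance is currently on; using it (like using a
            # fake one) could orphan the volume on some backends, so punt.
            connector = None
        return connector

    # Usage during local cleanup (sketch):
    # connector = _get_stashed_connector(bdm, instance)
    # if connector:
    #     volume_api.terminate_connection(context, bdm.volume_id, connector)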


** Changed in: nova
       Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1609984

Title:
  volume-detach fails for shelved instance

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Confirmed

Bug description:
  nova/compute/api.py::_local_cleanup_bdm_volumes passes a fake
  connector to Cinder to ask it to terminate a connection to a volume.
  Many Cinder volume drivers need a valid connector that has a real
  'host' value in order to terminate the connection on the array.

  The connector being passed in is:
  'connector': {u'ip': u'127.0.0.1', u'initiator': u'iqn.fake'}
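
  A hypothetical, minimal illustration of the failure mode (this is not the
  actual driver code): many drivers index connector['host'] inside
  terminate_connection, and the fake connector has no 'host' key, so the
  lookup raises KeyError, as the log below shows:

      fake_connector = {u'ip': u'127.0.0.1', u'initiator': u'iqn.fake'}

      def terminate_connection(volume, connector):
          # e.g. the 3PAR iSCSI driver does the equivalent of:
          hostname = connector['host']  # KeyError: 'host' with the fake connector
          return hostname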

  
  2016-08-04 13:56:41.672 DEBUG cinder.volume.drivers.hpe.hpe_3par_iscsi [req-6a382dfe-d1a5-47e7-99bc-e2a383124cd8 aa5ab308cd5b47eb9b2798ec9e2abb32 4b136f9898994fec81393c3b8210980b] ==> terminate_connection: call {'volume': <cinder.db.sqlalchemy.models.Volume object at 0x7f1f2f130d10>, 'connector': {u'ip': u'127.0.0.1', u'initiator': u'iqn.fake'}, 'self': <cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver object at 0x7f1ee0858950>, 'kwargs': {'force': False}} from (pid=45144) trace_logging_wrapper /opt/stack/cinder/cinder/utils.py:843
  2016-08-04 13:56:41.705 DEBUG cinder.volume.drivers.hpe.hpe_3par_common [req-6a382dfe-d1a5-47e7-99bc-e2a383124cd8 aa5ab308cd5b47eb9b2798ec9e2abb32 4b136f9898994fec81393c3b8210980b] Connecting to 3PAR from (pid=45144) client_login /opt/stack/cinder/cinder/volume/drivers/hpe/hpe_3par_common.py:350
  2016-08-04 13:56:42.164 DEBUG cinder.volume.drivers.hpe.hpe_3par_common [req-6a382dfe-d1a5-47e7-99bc-e2a383124cd8 aa5ab308cd5b47eb9b2798ec9e2abb32 4b136f9898994fec81393c3b8210980b] Disconnect from 3PAR REST and SSH 1278aedb-8579-4776-8d85-c46ec93a0551 from (pid=45144) client_logout /opt/stack/cinder/cinder/volume/drivers/hpe/hpe_3par_common.py:374
  2016-08-04 13:56:42.187 DEBUG cinder.volume.drivers.hpe.hpe_3par_iscsi [req-6a382dfe-d1a5-47e7-99bc-e2a383124cd8 aa5ab308cd5b47eb9b2798ec9e2abb32 4b136f9898994fec81393c3b8210980b] <== terminate_connection: exception (513ms) KeyError('host',) from (pid=45144) trace_logging_wrapper /opt/stack/cinder/cinder/utils.py:853
  2016-08-04 13:56:42.188 ERROR cinder.volume.manager [req-6a382dfe-d1a5-47e7-99bc-e2a383124cd8 aa5ab308cd5b47eb9b2798ec9e2abb32 4b136f9898994fec81393c3b8210980b] Terminate volume connection failed: 'host'
  2016-08-04 13:56:42.188 TRACE cinder.volume.manager Traceback (most recent call last):
  2016-08-04 13:56:42.188 TRACE cinder.volume.manager   File "/opt/stack/cinder/cinder/volume/manager.py", line 1457, in terminate_connection
  2016-08-04 13:56:42.188 TRACE cinder.volume.manager     force=force)
  2016-08-04 13:56:42.188 TRACE cinder.volume.manager   File "/opt/stack/cinder/cinder/utils.py", line 847, in trace_logging_wrapper
  2016-08-04 13:56:42.188 TRACE cinder.volume.manager     result = f(*args, **kwargs)
  2016-08-04 13:56:42.188 TRACE cinder.volume.manager   File "/opt/stack/cinder/cinder/volume/drivers/hpe/hpe_3par_iscsi.py", line 478, in terminate_connection
  2016-08-04 13:56:42.188 TRACE cinder.volume.manager     hostname = common._safe_hostname(connector['host'])
  2016-08-04 13:56:42.188 TRACE cinder.volume.manager KeyError: 'host'
  2016-08-04 13:56:42.188 TRACE cinder.volume.manager 
  2016-08-04 13:56:42.193 ERROR oslo_messaging.rpc.server [req-6a382dfe-d1a5-47e7-99bc-e2a383124cd8 aa5ab308cd5b47eb9b2798ec9e2abb32 4b136f9898994fec81393c3b8210980b] Exception during message handling
  2016-08-04 13:56:42.193 TRACE oslo_messaging.rpc.server Traceback (most recent call last):
  2016-08-04 13:56:42.193 TRACE oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 133, in _process_incoming
  2016-08-04 13:56:42.193 TRACE oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
  2016-08-04 13:56:42.193 TRACE oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 150, in dispatch
  2016-08-04 13:56:42.193 TRACE oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
  2016-08-04 13:56:42.193 TRACE oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 121, in _do_dispatch
  2016-08-04 13:56:42.193 TRACE oslo_messaging.rpc.server     result = func(ctxt, **new_args)
  2016-08-04 13:56:42.193 TRACE oslo_messaging.rpc.server   File "/opt/stack/cinder/cinder/volume/manager.py", line 1462, in terminate_connection
  2016-08-04 13:56:42.193 TRACE oslo_messaging.rpc.server     raise exception.VolumeBackendAPIException(data=err_msg)
  2016-08-04 13:56:42.193 TRACE oslo_messaging.rpc.server VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Terminate volume connection failed: 'host'
  2016-08-04 13:56:42.193 TRACE oslo_messaging.rpc.server

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1609984/+subscriptions

