yahoo-eng-team team mailing list archive
Message #50370
[Bug 1579667] [NEW] Deleting a shelved_offloaded server causes a failure in Cinder
Public bug reported:
When deleting a VM instance in the shelved_offloaded state with a volume
attached, Nova passes the connector dictionary

    connector = {'ip': '127.0.0.1', 'initiator': 'iqn.fake'}

to Cinder for terminate_connection, which causes a KeyError in the Cinder
driver code:
https://github.com/openstack/nova/blame/master/nova/compute/api.py#L1803
1803 def _local_cleanup_bdm_volumes(self, bdms, instance, context):
1804     """The method deletes the bdm records and, if a bdm is a volume, call
1805     the terminate connection and the detach volume via the Volume API.
1806     Note that at this point we do not have the information about the
1807     correct connector so we pass a fake one.
1808     """
1809     elevated = context.elevated()
1810     for bdm in bdms:
1811         if bdm.is_volume:
1812             # NOTE(vish): We don't have access to correct volume
1813             #             connector info, so just pass a fake
1814             #             connector. This can be improved when we
1815             #             expose get_volume_connector to rpc.
1816             connector = {'ip': '127.0.0.1', 'initiator': 'iqn.fake'}
1817             try:
1818                 self.volume_api.terminate_connection(context,
1819                                                      bdm.volume_id,
1820                                                      connector)
1821                 self.volume_api.detach(elevated, bdm.volume_id,
1822                                        instance.uuid)
1823                 if bdm.delete_on_termination:
1824                     self.volume_api.delete(context, bdm.volume_id)
1825             except Exception as exc:
1826                 err_str = _LW("Ignoring volume cleanup failure due to %s")
1827                 LOG.warn(err_str % exc, instance=instance)
1828         bdm.destroy()
1829
https://github.com/openstack/nova/blame/master/nova/compute/api.py#L1828
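For illustration only: any Cinder backend driver that expects more keys than
'ip' and 'initiator' in the connector (for example a Fibre Channel driver
reading 'wwpns', or a driver that looks up the attaching 'host') will hit a
KeyError when handed the fake dictionary above. A minimal, hypothetical sketch
of the failure mode, not taken from any real driver:

    # Hypothetical driver-side code; the connector below is the fake one that
    # Nova sends while locally cleaning up a shelved_offloaded instance.
    fake_connector = {'ip': '127.0.0.1', 'initiator': 'iqn.fake'}

    def terminate_connection(volume, connector, **kwargs):
        # A driver that needs host/FC details has nothing to look up:
        host = connector['host']        # KeyError: 'host'
        wwpns = connector['wwpns']      # would likewise raise KeyError: 'wwpns'

    terminate_connection(None, fake_connector)   # raises KeyError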
According to my debugging, the connector information needed for
terminate_connection is already available (in the BDM object), so Nova should
build the correct connector for terminate_connection instead of passing a
fake one.
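A rough sketch of that suggestion, assuming (hypothetically) that the
connector used at attach time was preserved somewhere on the BDM record, e.g.
inside its connection_info; the 'connector' key and the _connector_from_bdm()
helper are illustrative names, not the actual Nova data model or an agreed fix:

    # Sketch only: recover a connector from the BDM when possible and fall
    # back to the historical fake one otherwise.
    from oslo_serialization import jsonutils

    FAKE_CONNECTOR = {'ip': '127.0.0.1', 'initiator': 'iqn.fake'}

    def _connector_from_bdm(bdm):
        """Return a connector dict recovered from the BDM, or the fake one."""
        try:
            conn_info = jsonutils.loads(bdm.connection_info or '{}')
        except (TypeError, ValueError):
            return FAKE_CONNECTOR
        # 'connector' is an assumed key, used here purely for illustration.
        return conn_info.get('connector') or FAKE_CONNECTOR

_local_cleanup_bdm_volumes() could then pass _connector_from_bdm(bdm) to
volume_api.terminate_connection() instead of the hard-coded fake dictionary.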
====== Steps to reproduce (a scripted equivalent is sketched after the list)
1. Create a server with a volume attached: nova boot ----
2. Shelve the server: nova shelve <server_id>
3. Delete the server: nova delete <server_id>
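The same steps, roughly, via python-novaclient; this is only a sketch: the
client constructor and credentials are placeholders that depend on the
novaclient version and auth setup, and the image, flavor and volume IDs are
assumed to exist already:

    # Reproduction sketch using python-novaclient (placeholder IDs/credentials).
    from novaclient import client

    nova = client.Client('2', 'user', 'password', 'project',
                         'http://keystone:5000/v2.0')

    server = nova.servers.create(name='repro', image='<image_id>',
                                 flavor='<flavor_id>')
    nova.volumes.create_server_volume(server.id, '<volume_id>')  # attach a volume
    nova.servers.shelve(server)   # wait until the server is SHELVED_OFFLOADED
    nova.servers.delete(server)   # triggers _local_cleanup_bdm_volumes in nova-api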
Thanks
Peter
** Affects: nova
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1579667
Title:
Deleting a shelved_offloaded server causes a failure in Cinder
Status in OpenStack Compute (nova):
New