
yahoo-eng-team team mailing list archive

[Bug 1288039] Re: live-migration cinder boot volume target_lun id incorrect

 

** Also affects: nova/kilo
   Importance: Undecided
       Status: New

** Changed in: nova/kilo
       Status: New => Fix Committed

** Changed in: nova/kilo
    Milestone: None => 2015.1.2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288039

Title:
  live-migration cinder boot volume target_lun id incorrect

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  When nova runs the _post_live_migration cleanup on the source host, the
  block_device_mapping contains incorrect data.

  I can reproduce this 100% of the time with a Cinder iSCSI backend such as
  3PAR.

  This is a fresh install on 2 new servers with no attached Cinder storage and no VMs.
  I create a Cinder volume from an image.
  I create a VM booted from that Cinder volume.  That VM shows up on host1 with a LUN id of 0.
  I live-migrate that VM.  The VM moves to host2 and has a LUN id of 0.  The LUN on host1 is now gone.

  I create another Cinder volume from an image.
  I create another VM booted from the second Cinder volume.  The VM shows up on host1 with a LUN id of 0.
  I live-migrate that VM.  The VM moves to host2 and has a LUN id of 1.
  _post_live_migration is called on host1 to clean up and fails, because it asks Cinder to detach the volume
  on host1 with a target_lun id of 1, which doesn't exist there.  It should be asking Cinder to detach LUN 0.
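
  To make the failure concrete, here is a minimal standalone Python sketch of the two connection_info dicts involved (illustration only, not nova source; disconnect_volume and both dicts are invented for this example):

    # Illustration only -- models the two connection_info dicts described above.
    source_connection_info = {
        'driver_volume_type': 'iscsi',
        'data': {'target_iqn': 'iqn.2000-05.com.3pardata:20810002ac00383d',
                 'target_portal': '10.10.120.253:3260',
                 'target_lun': 0}}   # what host1 originally attached with

    dest_connection_info = {
        'driver_volume_type': 'iscsi',
        'data': dict(source_connection_info['data'], target_lun=1)}  # what host2 was given

    def disconnect_volume(connection_info, host):
        # Hypothetical stand-in for the virt driver's iSCSI logout/detach.
        lun = connection_info['data']['target_lun']
        print('%s: logging out of LUN %d' % (host, lun))

    # What the bug does: host1's _post_live_migration is handed the mapping
    # that now carries host2's connection_info, so host1 tries to tear down
    # LUN 1, which it never had.
    disconnect_volume(dest_connection_info, 'host1')    # wrong -- no LUN 1 on host1

    # What should happen: host1 cleans up with the info it attached with.
    disconnect_volume(source_connection_info, 'host1')  # correct -- LUN 0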

  First Migration
  HOST2
  2014-03-04 19:02:07.870 WARNING nova.compute.manager [req-24521cb1-8719-4bc5-b488-73a4980d7110 admin admin] pre_live_migrate: {'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 'connection_info': {u'driver_volume_type': u'iscsi', 'serial': u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': {u'target_discovered': True, u'qos_specs': None, u'target_iqn': u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': u'10.10.120.253:3260', u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}
  HOST1
  2014-03-04 19:02:16.775 WARNING nova.compute.manager [-] _post_live_migration: block_device_info {'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 'connection_info': {u'driver_volume_type': u'iscsi', u'serial': u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': {u'target_discovered': True, u'qos_specs': None, u'target_iqn': u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': u'10.10.120.253:3260', u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}
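
  If it helps anyone comparing these dumps, the target_lun values can be pulled straight out of the logged dicts (a quick debugging helper, not part of nova):

    import ast

    def target_luns(log_line):
        # The block_device_info dump starts at the first '{' in the log line;
        # ast.literal_eval copes with the u'...' literals in these dumps.
        info = ast.literal_eval(log_line[log_line.index('{'):])
        return [bdm['connection_info']['data']['target_lun']
                for bdm in info['block_device_mapping']]

    # First migration: the pre_live_migrate dump on host2 and the
    # _post_live_migration dump on host1 both return [0], as expected.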



  Second Migration
  This is in _post_live_migration on host1.  It calls libvirt's driver.py post_live_migration with the volume information returned for the new volume on host2, hence the target_lun = 1.  It should be calling libvirt's driver.py to clean up the original volume on the source host, which has a target_lun = 0.
  2014-03-04 19:24:51.626 WARNING nova.compute.manager [-] _post_live_migration: block_device_info {'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 'connection_info': {u'driver_volume_type': u'iscsi', u'serial': u'f0087595-804d-4bdb-9bad-0da2166313ea', u'data': {u'target_discovered': True, u'qos_specs': None, u'target_iqn': u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': u'10.10.120.253:3260', u'target_lun': 1, u'access_mode': u'rw'}}, 'disk_bus': u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}
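
  That also points at the shape of a fix: the source host has to clean up with the block_device_info it recorded before the migration, not with the mapping that pre_live_migration on the destination has since rewritten. A rough sketch of that idea (invented names, not the patch that actually landed):

    import copy

    class FakeDriver(object):
        # Stand-in for the libvirt driver cleanup; prints instead of detaching.
        def post_live_migration(self, block_device_info):
            for bdm in block_device_info['block_device_mapping']:
                lun = bdm['connection_info']['data']['target_lun']
                print('source host: detaching LUN %d' % lun)

    # Stand-in for the stored BDM record of the boot volume.
    bdm_records = {'block_device_mapping': [
        {'mount_device': 'vda',
         'connection_info': {'driver_volume_type': 'iscsi',
                             'data': {'target_lun': 0}}}]}

    # Snapshot taken *before* the migration rewrites the record.
    source_block_device_info = copy.deepcopy(bdm_records)

    # ... migration runs; pre_live_migration on host2 stores its own LUN ...
    bdm_records['block_device_mapping'][0]['connection_info']['data']['target_lun'] = 1

    FakeDriver().post_live_migration(source_block_device_info)  # detaches LUN 0
    # Passing bdm_records here instead reproduces the bug (LUN 1 on host1).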

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288039/+subscriptions

