
yahoo-eng-team team mailing list archive

[Bug 1430364] Re: instance live migrate with iSER connected volume fails


Per comment 1, live migration with Mellanox wasn't supported in the version
of the code tested here. If this is still an issue with cold migration then
please open a new bug - but make sure it's tested against the latest
Liberty code.

** Changed in: nova
       Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430364

Title:
  instance live migrate with iSER connected volume fails

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Instance migration with an iSER-connected volume fails, probably because
  the attached iSER volume has a different path on each host due to the
  differing PCI IDs of the InfiniBand controllers (a short illustration
  follows the trace below):

  trace:

  Mar 10 14:23:37 compute2 nova-compute: Traceback (most recent call last):
  Mar 10 14:23:37 compute2 nova-compute: File "/usr/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 115, in wait
  Mar 10 14:23:37 compute2 nova-compute: listener.cb(fileno)
  Mar 10 14:23:37 compute2 nova-compute: File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
  Mar 10 14:23:37 compute2 nova-compute: result = function(*args, **kwargs)
  Mar 10 14:23:37 compute2 nova-compute: File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5415, in _live_migration
  Mar 10 14:23:37 compute2 nova-compute: recover_method(context, instance, dest, block_migration)
  Mar 10 14:23:37 compute2 nova-compute: File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
  Mar 10 14:23:37 compute2 nova-compute: six.reraise(self.type_, self.value, self.tb)
  Mar 10 14:23:37 compute2 nova-compute: File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5382, in _live_migration
  Mar 10 14:23:37 compute2 nova-compute: CONF.libvirt.live_migration_bandwidth)
  Mar 10 14:23:37 compute2 nova-compute: File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
  Mar 10 14:23:37 compute2 nova-compute: result = proxy_call(self._autowrap, f, *args, **kwargs)
  Mar 10 14:23:37 compute2 nova-compute: File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
  Mar 10 14:23:37 compute2 nova-compute: rv = execute(f, *args, **kwargs)
  Mar 10 14:23:37 compute2 nova-compute: File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
  Mar 10 14:23:37 compute2 nova-compute: six.reraise(c, e, tb)
  Mar 10 14:23:37 compute2 nova-compute: File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
  Mar 10 14:23:37 compute2 nova-compute: rv = meth(*args, **kwargs)
  Mar 10 14:23:37 compute2 nova-compute: File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1264, in migrateToURI2
  Mar 10 14:23:37 compute2 nova-compute: if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self)
  Mar 10 14:23:37 compute2 nova-compute: libvirtError: Failed to open file '/dev/disk/by-path/pci-0000:08:00.0-ip-10.2.11.12:3260-iscsi-iqn.2010-10.org.iser.openstack:volume-cf40905c-0418-4a36-8f41-7bfe5a3767b7-lun-1': No such file or directory
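
  The path in the final frame is the by-path name generated on the source
  host. As an illustration only (this snippet is not Nova code), the failing
  lookup reduces to an existence check for the source host's device name:

  import os

  src_path = ('/dev/disk/by-path/pci-0000:08:00.0-ip-10.2.11.12:3260-iscsi-'
              'iqn.2010-10.org.iser.openstack:volume-'
              'cf40905c-0418-4a36-8f41-7bfe5a3767b7-lun-1')

  # On the source node (compute2) this symlink exists; on the destination
  # (compute1) the InfiniBand controller sits at a different PCI address,
  # so the 08:00.0-prefixed name is absent and opening it fails with ENOENT.
  print(os.path.exists(src_path))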

  The volume is correctly connected to the destination compute node, but it
  has a different path:

  migration source node:

  pci-0000:08:00.0-ip-10.2.11.12:3260-iscsi-iqn.2010-10.org.iser.openstack:volume-cf40905c-0418-4a36-8f41-7bfe5a3767b7-lun-1 -> ../../sde

  migration destination node:

  pci-0000:05:00.0-ip-10.2.11.12:3260-iscsi-iqn.2010-10.org.iser.openstack:volume-cf40905c-0418-4a36-8f41-7bfe5a3767b7-lun-1 -> ../../sdf

  
  The two nodes are not identical and have different PCI trees, which
  affects the /dev/disk/by-path naming scheme:

  source node:
  [root@compute2 by-path]# lspci | grep Mellanox
  08:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] (rev b0)

  dest node:
  [root@compute1 by-path]# lspci  | grep Mellanox
  05:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] (rev a0)

  The PCI address of the InfiniBand controller corresponds to the
  /dev/disk/by-path naming scheme.
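
  For illustration only (a hypothetical helper, not part of Nova), a short
  Python sketch that locates the volume's local block device by matching just
  the IQN and LUN portion of the by-path name, which is identical on both
  hosts, instead of the full name with its host-specific PCI prefix:

  import glob
  import os

  def find_volume_device(iqn, lun):
      # Match any by-path entry for this target and LUN, ignoring the
      # host-specific "pci-0000:XX:00.0-ip-..." prefix that differs per node.
      pattern = '/dev/disk/by-path/*-iscsi-%s-lun-%d' % (iqn, lun)
      matches = glob.glob(pattern)
      if not matches:
          return None
      # Resolve the symlink to the underlying block device.
      return os.path.realpath(matches[0])

  print(find_volume_device(
      'iqn.2010-10.org.iser.openstack:volume-'
      'cf40905c-0418-4a36-8f41-7bfe5a3767b7', 1))
  # -> /dev/sde on the source node, /dev/sdf on the destination node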

  
  The same situation occurs with an iSER volume-backed instance.

  
  Running CentOS 7 with stable Juno.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430364/+subscriptions

