
yahoo-eng-team team mailing list archive

[Bug 1392773] Re: Live migration of volume backed instances broken after upgrade to Juno

 

This is marked as Fix Released but is still broken in stable/juno. I got
an email saying the change had been abandoned?

** Changed in: nova
       Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392773

Title:
  Live migration of volume backed instances broken after upgrade to Juno

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  I'm running nova in a virtualenv with a checkout of stable/juno:

  root@compute1:/opt/openstack/src/nova# git branch
    stable/icehouse
  * stable/juno
  root@compute1:/opt/openstack/src/nova# git rev-list stable/juno | head -n 1
  54330ce33ee31bbd84162f0af3a6c74003d57329

  Since upgrading from Icehouse, our iSCSI-backed instances are no
  longer able to live migrate, throwing exceptions like:

  Traceback (most recent call last):
    File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
      incoming.message))
    File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
      return self._do_dispatch(endpoint, method, ctxt, args)
    File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
      result = getattr(endpoint, method)(ctxt, **new_args)
    File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
      payload)
    File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
      return f(self, context, *args, **kw)
    File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 326, in decorated_function
      kwargs['instance'], e, sys.exc_info())
    File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 314, in decorated_function
      return function(self, context, *args, **kwargs)
    File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 4882, in check_can_live_migrate_source
      dest_check_data)
    File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5040, in check_can_live_migrate_source
      raise exception.InvalidSharedStorage(reason=reason, path=source)
  InvalidSharedStorage: compute2 is not on shared storage: Live migration can not be used without shared storage.

  Looking back through the code, given dest_check_data like this:

  {u'disk_over_commit': False, u'disk_available_mb': None,
  u'image_type': u'default', u'filename': u'tmpyrUUg1',
  u'block_migration': False, 'is_volume_backed': True}

  In Icehouse the code to validate the request skipped this[0]:
       elif not shared and (not is_volume_backed or has_local_disks):
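  As a rough check, the Icehouse condition can be evaluated by hand with
  the values from the dest_check_data above (here `shared` and
  `has_local_disks` are my assumptions: no shared instance storage
  between the hosts, and a boot-from-volume instance with no local
  disks):

  ```python
  # Hypothetical evaluation of the Icehouse guard. Values for `shared`
  # and `has_local_disks` are assumptions, not taken from the logs.
  shared = False            # assumed: no shared instance storage
  is_volume_backed = True   # from dest_check_data above
  has_local_disks = False   # assumed: boot-from-volume, no ephemeral disk

  # The Icehouse branch that raises InvalidSharedStorage:
  raises = not shared and (not is_volume_backed or has_local_disks)
  print(raises)  # False -> the raise is skipped, migration proceeds
  ```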

  In Juno, it matches this[1]:

   if (dest_check_data.get('is_volume_backed') and
           not bool(jsonutils.loads(
               self.get_instance_disk_info(instance['name'])))):

  In Juno at least, get_instance_disk_info returns something like this:

  [{u'disk_size': 10737418240, u'type': u'raw',
    u'virt_disk_size': 10737418240,
    u'path': u'/dev/disk/by-path/ip-10.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-10f2302c-26b6-44e0-a3ea-7033d1091470-lun-1',
    u'backing_file': u'', u'over_committed_disk_size': 0}]
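  Plugging that return value into the Juno condition shows why the
  volume-backed branch is never taken (a minimal sketch; I'm using the
  stdlib json module in place of jsonutils, and the disk info is
  trimmed from the report above):

  ```python
  import json

  # What get_instance_disk_info returned for this instance (from the
  # bug report above) -- a non-empty list, because the attached iSCSI
  # volume is now counted as a disk.
  disk_info_json = json.dumps([{
      "disk_size": 10737418240,
      "type": "raw",
      "virt_disk_size": 10737418240,
      "path": "/dev/disk/by-path/ip-10.0.0.1:3260-iscsi-"
              "iqn.2010-10.org.openstack:volume-"
              "10f2302c-26b6-44e0-a3ea-7033d1091470-lun-1",
      "backing_file": "",
      "over_committed_disk_size": 0,
  }])

  is_volume_backed = True  # from dest_check_data

  # The Juno condition that would treat the instance as purely
  # volume-backed and allow the migration:
  allowed = is_volume_backed and not bool(json.loads(disk_info_json))
  print(allowed)  # False -> falls through to the shared-storage
                  # check, which raises InvalidSharedStorage
  ```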

  I wonder whether get_instance_disk_info returned an empty list in
  Icehouse. I'm unable to test that right now, but if it returned the
  same result there, I'm not sure how this ever worked before.

  This is a lab environment; the volume storage is an LVM+iSCSI cinder
  service. nova.conf and cinder.conf are here[2].

  [0]: https://github.com/openstack/nova/blob/stable/icehouse/nova/virt/libvirt/driver.py#L4299
  [1]: https://github.com/openstack/nova/blob/stable/juno/nova/virt/libvirt/driver.py#L5073
  [2]: https://gist.github.com/DazWorrall/b1b1e906a6dc2338f6c1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392773/+subscriptions
