yahoo-eng-team team mailing list archive
Message #47492
[Bug 1445021] Re: nova-compute does not start after upgrade from juno->kilo if there are boot from volume servers running
*** This bug is a duplicate of bug 1416132 ***
https://bugs.launchpad.net/bugs/1416132
** This bug has been marked a duplicate of bug 1416132
_get_instance_disk_info fails to read files from NFS due to permissions
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445021
Title:
nova-compute does not start after upgrade from juno->kilo if there are
boot from volume servers running
Status in OpenStack Compute (nova):
In Progress
Bug description:
Running: nova master in grenade tests
Relevant job that triggers this:
http://logs.openstack.org/91/173791/11/check/check-grenade-
dsvm/fc725f5/
This patch attempted to test the survivability of a "boot from volume"
server over the course of the upgrade; however, when we tried to do
this, a lot of tests failed.
It turns out that libvirt's device scan fails in this situation after
boot:
http://logs.openstack.org/91/173791/11/check/check-grenade-
dsvm/fc725f5/logs/new/screen-n-cpu.txt.gz#_2015-04-16_11_39_05_009
2015-04-16 11:39:05.009 ERROR nova.openstack.common.threadgroup [req-b09699d4-5d28-4eeb-a09c-412f48da3d68 None None] Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf blockdev --getsize64 /dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-f1015aa4-1998-47c1-8ce6-625ca0fa2b8c-lun-1
Exit code: 1
Stdout: u''
Stderr: u'blockdev: cannot open /dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-f1015aa4-1998-47c1-8ce6-625ca0fa2b8c-lun-1: No such device or address\n'
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/openstack/common/threadgroup.py", line 145, in wait
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup x.wait()
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/openstack/common/threadgroup.py", line 47, in wait
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup return self.thread.wait()
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 175, in wait
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup return self._exit_event.wait()
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup return hubs.get_hub().switch()
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in switch
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup return self.greenlet.switch()
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup result = function(*args, **kwargs)
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/openstack/common/service.py", line 497, in run_service
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup service.start()
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/service.py", line 183, in start
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup self.manager.pre_start_hook()
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/compute/manager.py", line 1288, in pre_start_hook
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup self.update_available_resource(nova.context.get_admin_context())
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/compute/manager.py", line 6237, in update_available_resource
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup rt.update_available_resource(context)
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 376, in update_available_resource
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup resources = self.driver.get_available_resource(self.nodename)
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4908, in get_available_resource
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup disk_over_committed = self._get_disk_over_committed_size_total()
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 6109, in _get_disk_over_committed_size_total
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup self._get_instance_disk_info(dom.name(), xml))
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 6062, in _get_instance_disk_info
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup dk_size = lvm.get_volume_size(path)
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/virt/libvirt/lvm.py", line 172, in get_volume_size
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup run_as_root=True)
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/virt/libvirt/utils.py", line 55, in execute
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup return utils.execute(*args, **kwargs)
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/opt/stack/new/nova/nova/utils.py", line 206, in execute
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup return processutils.execute(*cmd, **kwargs)
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 233, in execute
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup cmd=sanitized_cmd)
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup ProcessExecutionError: Unexpected error while running command.
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup Command: sudo nova-rootwrap /etc/nova/rootwrap.conf blockdev --getsize64 /dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-f1015aa4-1998-47c1-8ce6-625ca0fa2b8c-lun-1
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup Exit code: 1
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup Stdout: u''
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup Stderr: u'blockdev: cannot open /dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-f1015aa4-1998-47c1-8ce6-625ca0fa2b8c-lun-1: No such device or address\n'
2015-04-16 11:39:05.009 13951 TRACE nova.openstack.common.threadgroup
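For context, the traceback shows the failure happening inside nova's resource audit: _get_disk_over_committed_size_total() walks every domain, and _get_instance_disk_info() ends up shelling out to `blockdev --getsize64` via rootwrap (lvm.get_volume_size), so one stale volume path aborts the whole audit and kills the service thread at startup. The sketch below is illustrative only, not nova's actual code or fix: it replaces the rootwrap call with a plain lseek, and the log-and-skip fallback is an assumption about one possible mitigation.

```python
import errno
import os


def get_volume_size(path):
    """Return the size of a block device (or file) in bytes.

    Nova shells out to `blockdev --getsize64` via rootwrap here; this
    sketch seeks to the end of the device instead. A stale iSCSI node
    raises ENXIO ("No such device or address") on open, which we map
    to None instead of letting the exception propagate.
    """
    try:
        fd = os.open(path, os.O_RDONLY)
    except OSError as e:
        if e.errno in (errno.ENXIO, errno.ENOENT, errno.EACCES):
            return None  # device gone, missing, or unreadable
        raise
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)


def disk_over_committed_size_total(disk_paths):
    """Sum sizes over all instance disks, skipping unreadable ones.

    Mirrors the shape of _get_disk_over_committed_size_total(): one
    bad volume does not abort the whole resource audit.
    """
    total = 0
    for path in disk_paths:
        size = get_volume_size(path)
        if size is None:
            continue  # log-and-skip rather than crash nova-compute
        total += size
    return total
```

With this shape, the ProcessExecutionError above would never reach the threadgroup and nova-compute could still start.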
However, the device path itself looks fine on the host:
devstack> ls -al /dev/disk/by-path/ip-10.42.0.70:3260-iscsi-iqn.2010-10.org.openstack:volume-38709eda-da3a-46fa-9607-d2992d7ed1fa-lun-1
lrwxrwxrwx 1 root root 9 Apr 16 13:29 /dev/disk/by-path/ip-10.42.0.70:3260-iscsi-iqn.2010-10.org.openstack:volume-38709eda-da3a-46fa-9607-d2992d7ed1fa-lun-1 -> ../../sdb
> ls -l /dev/sdb
brw-rw---- 1 libvirt-qemu kvm 8, 16 Apr 16 13:31 /dev/sdb
> sudo blockdev --info /dev/sdb
blockdev: cannot open /dev/sdb: No such device or address
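The interesting part of the session above is that the by-path symlink resolves and /dev/sdb exists with sane-looking permissions, yet even root cannot open it: "No such device or address" is ENXIO, meaning the node is stale because the backing iSCSI LUN is gone, so permissions are not the issue here. A small hedged probe (the label names are made up for illustration) that distinguishes the three cases:

```python
import errno
import os


def probe_device(path):
    """Classify why a device path is (un)usable.

    Distinguishes the case seen above: the symlink and node exist and
    permissions look fine, yet open() fails with ENXIO because the
    backing device is gone. Return values are illustrative labels.
    """
    if not os.path.exists(path):
        return "missing"          # dangling symlink or no node at all
    try:
        fd = os.open(path, os.O_RDONLY)
    except OSError as e:
        if e.errno == errno.ENXIO:
            return "stale"        # node exists, backing device gone
        if e.errno == errno.EACCES:
            return "no-access"    # the NFS-permissions case of bug 1416132
        raise
    os.close(fd)
    return "ok"
```

Running this against /dev/sdb on the host above would presumably report "stale", matching the blockdev error.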
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445021/+subscriptions