
yahoo-eng-team team mailing list archive

[Bug 1375108] [NEW] Failed to reboot instance successfully with EC2

 

Public bug reported:


The failure occurs in tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_reboot_terminate_instance.

pythonlogging:'': {{{
2014-09-28 09:05:33,105 31828 INFO     [tempest.thirdparty.boto.utils.wait] State transition "pending" ==> "running" 3 second
2014-09-28 09:05:33,256 31828 DEBUG    [tempest.thirdparty.boto.test_ec2_instance_run] Instance rebooted - state: running
2014-09-28 09:05:35,003 31828 INFO     [tempest.thirdparty.boto.utils.wait] State transition "running" ==> "error" 1 second
}}}
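
A minimal boto sketch of the flow this test exercises (run, wait, reboot,
terminate) against a Nova EC2 endpoint. This is only an illustration assuming
the boto 2.x API the tempest thirdparty tests use; the endpoint, credentials
and AMI id below are placeholders, not values from this bug:

{{{
import time

import boto
from boto.ec2.regioninfo import RegionInfo

# Placeholder devstack-style EC2 endpoint and credentials.
conn = boto.connect_ec2(
    aws_access_key_id='EC2_ACCESS_KEY',
    aws_secret_access_key='EC2_SECRET_KEY',
    is_secure=False,
    region=RegionInfo(name='nova', endpoint='127.0.0.1'),
    port=8773,
    path='/services/Cloud')

reservation = conn.run_instances('ami-00000001')   # placeholder AMI id
instance = reservation.instances[0]

# Wait for the instance to leave "pending" before asking for a reboot.
while instance.update() == 'pending':
    time.sleep(1)

instance.reboot()      # the step that ends in the "error" state below
instance.terminate()
}}}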


http://logs.openstack.org/14/124014/3/check/check-tempest-dsvm-postgres-full/96934ea/logs/testr_results.html.gz


nova-compute (n-cpu) log:

2014-09-28 09:05:34.741 ERROR nova.compute.manager [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] [instance: b28b2844-26a9-46ff-bcde-023a7604a06e] Cannot reboot instance: [Errno 2] No such file or directory: '/opt/stack/data/nova/instances/b28b2844-26a9-46ff-bcde-023a7604a06e/kernel.part'
2014-09-28 09:05:34.935 DEBUG nova.openstack.common.lockutils [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Created new semaphore "compute_resources" internal_lock /opt/stack/new/nova/nova/openstack/common/lockutils.py:206
2014-09-28 09:05:34.935 DEBUG nova.openstack.common.lockutils [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Acquired semaphore "compute_resources" lock /opt/stack/new/nova/nova/openstack/common/lockutils.py:229
2014-09-28 09:05:34.935 DEBUG nova.openstack.common.lockutils [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Got semaphore / lock "update_usage" inner /opt/stack/new/nova/nova/openstack/common/lockutils.py:271
2014-09-28 09:05:34.970 INFO nova.scheduler.client.report [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Compute_service record updated for ('devstack-trusty-rax-dfw-2448356.slave.openstack.org', 'devstack-trusty-rax-dfw-2448356.slave.openstack.org')
2014-09-28 09:05:34.971 DEBUG nova.openstack.common.lockutils [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Releasing semaphore "compute_resources" lock /opt/stack/new/nova/nova/openstack/common/lockutils.py:238
2014-09-28 09:05:34.971 DEBUG nova.openstack.common.lockutils [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Semaphore / lock released "update_usage" inner /opt/stack/new/nova/nova/openstack/common/lockutils.py:275
2014-09-28 09:05:34.980 ERROR oslo.messaging.rpc.dispatcher [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Exception during message handling: [Errno 2] No such file or directory: '/opt/stack/data/nova/instances/b28b2844-26a9-46ff-bcde-023a7604a06e/kernel.part'
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     payload)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 298, in decorated_function
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     pass
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 284, in decorated_function
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 348, in decorated_function
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 326, in decorated_function
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 314, in decorated_function
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 2941, in reboot_instance
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     self._set_instance_obj_error_state(context, instance)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 2922, in reboot_instance
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     bad_volumes_callback=bad_volumes_callback)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2269, in reboot
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     block_device_info)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2366, in _hard_reboot
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     disk_info_json)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5559, in _create_images_and_backing
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     self._fetch_instance_kernel_ramdisk(context, instance)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5393, in _fetch_instance_kernel_ramdisk
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     instance['project_id'])
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/virt/libvirt/utils.py", line 452, in fetch_image
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     max_size=max_size)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/virt/images.py", line 79, in fetch_to_raw
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     max_size=max_size)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/virt/images.py", line 73, in fetch
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     IMAGE_API.download(context, image_href, dest_path=path)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/image/api.py", line 178, in download
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     dst_path=dest_path)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/image/glance.py", line 363, in download
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     data = open(dst_path, 'wb')
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher IOError: [Errno 2] No such file or directory: '/opt/stack/data/nova/instances/b28b2844-26a9-46ff-bcde-023a7604a06e/kernel.part'
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher 
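
The IOError at the bottom of the trace is raised when nova/image/glance.py
opens the kernel.part destination for writing: open() with mode 'wb' creates
the file, but fails with ENOENT if the parent instance directory itself is
missing. A minimal sketch of that failure mode and the kind of defensive check
that avoids it (placeholder path, not Nova code):

{{{
import errno
import os

# Placeholder path standing in for <instances_path>/<uuid>/kernel.part.
dst_path = '/tmp/missing-instance-dir/kernel.part'

try:
    open(dst_path, 'wb')
except IOError as e:
    # Same failure as in the trace: the parent directory does not exist.
    print(e.errno == errno.ENOENT)   # True -> "[Errno 2] No such file or directory"

# Defensive variant: make sure the instance directory exists before
# downloading the kernel/ramdisk into it.
parent = os.path.dirname(dst_path)
if not os.path.isdir(parent):
    os.makedirs(parent)
with open(dst_path, 'wb') as f:
    f.write(b'...')
}}}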

From the logs it looks like the instance reboot failed. However, as shown in
the logs above, the instance state changed to 'running' even though the reboot
failed.

https://bugs.launchpad.net/nova/+bug/1188343 fixed the issue of the instance
state being changed to active (running) before the reboot operation had
succeeded. Is this still an issue?
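
As a client-side check, the state reported immediately after the reboot
request cannot be trusted on its own; polling for a short settle period
catches the later 'running' ==> 'error' transition seen above. A rough sketch,
assuming a boto 2.x instance object as in the earlier example (timeouts are
arbitrary):

{{{
import time


def state_after_reboot(instance, settle_seconds=30, interval=2):
    """Poll for a while after instance.reboot(): the API may report
    'running' before the hypervisor-side reboot has actually finished,
    and the instance can still flip to 'error' afterwards."""
    deadline = time.time() + settle_seconds
    state = instance.update()
    while time.time() < deadline and state != 'error':
        time.sleep(interval)
        state = instance.update()
    return state
}}}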

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375108

Title:
  Failed to reboot instance successfully with EC2

Status in OpenStack Compute (Nova):
  New
