
yahoo-eng-team team mailing list archive

[Bug 1372670] [NEW] libvirtError: operation failed: cannot read cputime for domain

 

Public bug reported:

2014-09-22 15:09:59.534 ERROR nova.compute.manager [req-74866bd8-5382-4354-89ca-7683a013d99c ServerDiskConfigTestJSON-218625483 ServerDiskConfigTestJSON-1404511620] [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738] Setting instance vm_state to ERROR
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738] Traceback (most recent call last):
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]   File "/opt/stack/new/nova/nova/compute/manager.py", line 6054, in _error_out_instance_on_exception
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]     yield
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]   File "/opt/stack/new/nova/nova/compute/manager.py", line 3740, in resize_instance
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]     timeout, retry_interval)
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5838, in migrate_disk_and_power_off
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]     self.power_off(instance, timeout, retry_interval)
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2474, in power_off
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]     self._clean_shutdown(instance, timeout, retry_interval)
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2434, in _clean_shutdown
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]     (state, _max_mem, _mem, _cpus, _t) = dom.info()
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]     rv = execute(f, *args, **kwargs)
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]     six.reraise(c, e, tb)
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]     rv = meth(*args, **kwargs)
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1068, in info
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738]     if ret is None: raise libvirtError ('virDomainGetInfo() failed', dom=self)
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738] libvirtError: operation failed: cannot read cputime for domain
2014-09-22 15:09:59.534 25333 TRACE nova.compute.manager [instance: c09099c1-5dde-4ba9-8a8e-94ff75309738] 


http://logs.openstack.org/71/123071/1/gate/gate-tempest-dsvm-postgres-full/7369ae8/logs/screen-n-cpu.txt.gz?level=TRACE#_2014-09-22_15_09_59_534

I am seeing this stacktrace in some nova-compute logs. It looks like it
is showing up in passing jobs as well, but I think that is only because
tempest doesn't always fail if a 'nova delete' fails. A rough sketch of
the failure mode follows below.
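For reference, a minimal sketch (assumed illustration, not the actual Nova driver code) of the race the traceback points at: _clean_shutdown polls dom.info() while the domain is being torn down, and libvirt can raise libvirtError ("operation failed: cannot read cputime for domain" or "Domain not found") instead of returning a SHUTOFF state, so the caller would need to catch that and treat the domain as already gone. The function name, parameters, and the specific error-code check here are illustrative assumptions only.

import time

import libvirt


def wait_for_clean_shutdown(conn, name, timeout=60, retry_interval=2):
    """Hypothetical helper: ask a domain to shut down, poll until off or gone."""
    try:
        dom = conn.lookupByName(name)
        dom.shutdown()
    except libvirt.libvirtError as e:
        if e.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
            return True  # domain already deleted underneath us
        raise

    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            # dom.info() returns (state, maxMem, memory, nrVirtCpu, cpuTime);
            # this is the call that raises libvirtError in the traceback above.
            state = dom.info()[0]
        except libvirt.libvirtError as e:
            if e.get_error_code() in (libvirt.VIR_ERR_NO_DOMAIN,
                                      libvirt.VIR_ERR_OPERATION_FAILED):
                return True  # domain vanished mid-poll; treat as shut down
            raise
        if state == libvirt.VIR_DOMAIN_SHUTOFF:
            return True
        time.sleep(retry_interval)
    return False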

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372670

Title:
  libvirtError: operation failed: cannot read cputime for domain

Status in OpenStack Compute (Nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1372670/+subscriptions

