
yahoo-eng-team team mailing list archive

[Bug 1484586] [NEW] file injection fails when using fallback method

 

Public bug reported:

Trying to perform file injection without libguestfs, i.e. falling back
to using nbd.

2015-08-13 13:21:21 43295 DEBUG oslo_concurrency.processutils [req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 c942c664c4024ce4b5fe2bf8c3a21a3c] Running cmd (subprocess): sudo nova-rootwrap /opt/stack/service/nova-compute/etc/nova/rootwrap.conf qemu-nbd -c /dev/nbd8 //var/lib/nova/instances/e8cb4369-adf8-4e97-ad75-9d181d3c9dac/disk execute /opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/oslo_concurrency/processutils.py:199
2015-08-13 13:21:21 43295 DEBUG oslo_concurrency.processutils [req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 c942c664c4024ce4b5fe2bf8c3a21a3c] CMD "sudo nova-rootwrap /opt/stack/service/nova-compute/etc/nova/rootwrap.conf qemu-nbd -c /dev/nbd8 //var/lib/nova/instances/e8cb4369-adf8-4e97-ad75-9d181d3c9dac/disk" returned: 0 in 0.096s execute /opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/oslo_concurrency/processutils.py:225
2015-08-13 13:21:21 43295 DEBUG oslo_concurrency.lockutils [req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 c942c664c4024ce4b5fe2bf8c3a21a3c] Lock "nbd-allocation-lock" released by "_inner_get_dev" :: held 0.099s inner /opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:456
2015-08-13 13:21:21 43295 DEBUG nova.virt.disk.mount.api [req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 c942c664c4024ce4b5fe2bf8c3a21a3c] Map dev /dev/nbd8 map_dev /opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/nova/virt/disk/mount/api.py:140
2015-08-13 13:21:21 43295 DEBUG oslo_concurrency.processutils [req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 c942c664c4024ce4b5fe2bf8c3a21a3c] Running cmd (subprocess): sudo nova-rootwrap /opt/stack/service/nova-compute/etc/nova/rootwrap.conf kpartx -a /dev/nbd8 execute /opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/oslo_concurrency/processutils.py:199
2015-08-13 13:21:21 43295 DEBUG oslo_concurrency.processutils [req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 c942c664c4024ce4b5fe2bf8c3a21a3c] CMD "sudo nova-rootwrap /opt/stack/service/nova-compute/etc/nova/rootwrap.conf kpartx -a /dev/nbd8" returned: 0 in 0.093s execute /opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/oslo_concurrency/processutils.py:225

2015-08-13 13:21:21 43295 DEBUG nova.virt.disk.mount.api [req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 c942c664c4024ce4b5fe2bf8c3a21a3c] Fail to mount, tearing back down do_mount /opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/nova/virt/disk/mount/api.py:223
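
For context, the sequence the fallback path runs can be replayed by
hand. The sketch below is not nova code: the image path, /dev/nbd8 and
the /dev/mapper name are placeholders, and it simply measures how long
the partition node takes to appear after kpartx returns:

    # Hypothetical reproduction script, not part of nova. Needs root,
    # qemu-nbd and kpartx, and a partitioned qcow2 image; adjust paths.
    import os
    import subprocess
    import time

    IMAGE = '/var/lib/nova/instances/<instance-uuid>/disk'  # placeholder
    NBD_DEV = '/dev/nbd8'
    PART_DEV = '/dev/mapper/nbd8p1'  # assuming partition 1 is mapped here

    subprocess.check_call(['qemu-nbd', '-c', NBD_DEV, IMAGE])
    subprocess.check_call(['kpartx', '-a', NBD_DEV])

    # kpartx has already returned 0 at this point, but the /dev/mapper
    # node is created asynchronously, so it may not exist immediately.
    start = time.time()
    while not os.path.exists(PART_DEV) and time.time() - start < 10:
        time.sleep(0.1)
    print('partition node present after %.2fs: %s'
          % (time.time() - start, os.path.exists(PART_DEV)))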

Although the kpartx command succeeds, the subsequent check for the
mapped partition's device path fails, generating an error.
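
In other words, kpartx returns before the /dev/mapper node is visible,
so a one-shot existence check runs too early. A bounded retry would
tolerate the delay; the helper below is only an illustrative sketch
(its name, timeout and interval are assumptions, not nova's actual
implementation):

    import os
    import time

    def wait_for_device(path, timeout=5.0, interval=0.2):
        # Poll until the device node shows up or the timeout expires.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if os.path.exists(path):
                return True
            time.sleep(interval)
        return os.path.exists(path)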

Inserting a short sleep before checking for the path seems to work
around it. The issue is clearly timing-related: I do not encounter it
when running devstack on a libvirt host, yet it occurs very reliably on
some of the baremetal hypervisors in our lab.
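
Roughly, the two approaches compared (illustration only, not a proposed
patch; 'mapped_device' and the delays are assumptions):

    import os
    import time

    mapped_device = '/dev/mapper/nbd8p1'  # placeholder

    # Reporter's workaround: a short fixed sleep before the check. It
    # works, but the right length is guess-work and varies per host.
    time.sleep(1)
    found = os.path.exists(mapped_device)

    # Timing-independent alternative: poll with a bounded timeout, as
    # in the wait_for_device sketch above.
    deadline = time.time() + 5.0
    while not os.path.exists(mapped_device) and time.time() < deadline:
        time.sleep(0.2)
    found = os.path.exists(mapped_device)
    print(found)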

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484586

Title:
  file injection fails when using fallback method

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484586/+subscriptions

