
[Bug 1098458] Re: lxc instance deletion error

 

** Changed in: nova
       Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1098458

Title:
  lxc instance deletion error

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I installed OpenStack Folsom via the Ubuntu Cloud Archive; the APT
  sources are below:

  cat /etc/apt/sources.list
  ......
  deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/folsom main
  deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main
  ......

  Now I boot 3 instances (i01, i02, i03); i01 uses /dev/nbd15. When I
  delete i01, nova-compute.log and the syslog report the following
  errors:

  nova-compute.log
  #######################################################################################
  2013-01-11 15:05:40 INFO nova.virt.libvirt.driver [-] [instance: cade906c-dc8c-4a7b-870d-a0b224fd802a] Instance destroyed successfully.
  2013-01-11 15:05:40 DEBUG nova.virt.libvirt.driver [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] [instance: cade906c-dc8c-4a7b-870d-a0b224fd802a] Error from libvirt during undefineFlags. Retrying with undefine from (pid=2721) _cleanup /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:510
  2013-01-11 15:05:40 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Got semaphore "iptables" for method "_apply"... from (pid=2721) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
  2013-01-11 15:05:40 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Attempting to grab file lock "iptables" for method "_apply"... from (pid=2721) inner /usr/lib/python2.7/dist-packages/nova/utils.py:717
  2013-01-11 15:05:40 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Got file lock "iptables" for method "_apply"... from (pid=2721) inner /usr/lib/python2.7/dist-packages/nova/utils.py:743
  2013-01-11 15:05:40 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c -t filter from (pid=2721) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
  2013-01-11 15:05:40 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Result was 0 from (pid=2721) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
  2013-01-11 15:05:40 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c from (pid=2721) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
  2013-01-11 15:05:40 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Result was 0 from (pid=2721) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
  2013-01-11 15:05:40 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c -t nat from (pid=2721) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
  2013-01-11 15:05:40 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Result was 0 from (pid=2721) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
  2013-01-11 15:05:40 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c from (pid=2721) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
  2013-01-11 15:05:40 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Result was 0 from (pid=2721) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
  2013-01-11 15:05:40 DEBUG nova.network.linux_net [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] IPTablesManager.apply completed with success from (pid=2721) _apply /usr/lib/python2.7/dist-packages/nova/network/linux_net.py:372
  2013-01-11 15:05:41 INFO nova.virt.libvirt.driver [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] [instance: cade906c-dc8c-4a7b-870d-a0b224fd802a] Deleting instance files /var/lib/nova/instances/instance-000003dc
  2013-01-11 15:05:41 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf umount /dev/nbd15 from (pid=2721) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
  2013-01-11 15:05:41 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Result was 0 from (pid=2721) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
  2013-01-11 15:05:41 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf qemu-nbd -d /dev/nbd15 from (pid=2721) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
  2013-01-11 15:05:41 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Result was 0 from (pid=2721) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
  2013-01-11 15:05:41 ERROR nova.virt.libvirt.driver [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Failed to cleanup directory /var/lib/nova/instances/instance-000003dc: [Errno 16] Device or resource busy: '/var/lib/nova/instances/instance-000003dc/rootfs'
  2013-01-11 15:05:41 DEBUG nova.utils [req-365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67 8281ba5ad0a0458d9bf0bf93505267b3] Got semaphore "compute_resources" for method "update_usage"... from (pid=2721) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
  #######################################################################################

  syslog
  #######################################################################################
  Jan 11 16:19:49 superstack kernel: [ 2645.636002] end_request: I/O error, dev nbd14, sector 1314448
  Jan 11 16:19:49 superstack kernel: [ 2645.638289] Buffer I/O error on device nbd14, logical block 164306
  Jan 11 16:19:49 superstack kernel: [ 2645.640835] EXT4-fs warning (device nbd14): ext4_end_bio:251: I/O error writing to inode 30906 (offset 81920 size 4096 starting block 164307)
  Jan 11 16:19:49 superstack kernel: [ 2645.644283] end_request: I/O error, dev nbd14, sector 1053904
  Jan 11 16:19:49 superstack kernel: [ 2645.647358] Aborting journal on device nbd14-8.
  Jan 11 16:19:49 superstack kernel: [ 2645.661814] JBD2: I/O error detected when updating journal superblock for nbd14-8.
  Jan 11 16:19:49 superstack kernel: [ 2645.670324] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.065643] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.704395] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.727581] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.731948] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.736537] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.749808] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.766891] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.786215] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.793269] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.799673] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.800918] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.802294] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.808959] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.810195] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.815396] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.819412] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.820643] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.821941] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.843460] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.852605] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.853936] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.863322] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.864574] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.865813] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.875391] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.876637] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.877880] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.887446] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.888878] journal commit I/O error
  Jan 11 16:19:49 superstack kernel: [ 2645.890178] journal commit I/O error
  #######################################################################################

  As we can see, the error "ERROR nova.virt.libvirt.driver [req-
  365bfafb-2758-4f63-a714-4aa1af37d828 d19e6e6f9f7d4d13883870afbb6cbb67
  8281ba5ad0a0458d9bf0bf93505267b3] Failed to cleanup directory
  /var/lib/nova/instances/instance-000003dc: [Errno 16] Device or
  resource busy: '/var/lib/nova/instances/instance-000003dc/rootfs'"
  appears in nova-compute.log, and "journal commit I/O error" appears in
  the syslog.
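
  For reference, here is a minimal manual-cleanup sketch. It reuses the
  device and directory names from the log above and simply repeats the
  umount / qemu-nbd steps nova-compute already ran; treat it as a
  workaround sketch, not as the released fix:
  ######################################################################################
  # Check whether the deleted instance's rootfs is still mounted
  # (names taken from the nova-compute.log excerpt above; adjust for your host).
  grep instance-000003dc /proc/mounts

  # If the rootfs is still listed, unmount it and detach the nbd device by
  # hand, mirroring the commands nova-compute issued during cleanup. If the
  # umount itself reports "busy", fuser -vm can show which processes still
  # hold the mount.
  sudo umount /var/lib/nova/instances/instance-000003dc/rootfs
  sudo qemu-nbd -d /dev/nbd15

  # The leftover instance directory can then be removed.
  sudo rm -rf /var/lib/nova/instances/instance-000003dc
  ######################################################################################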

  OK, now we boot another instance (i04). The boot process hangs at
  "readlink -nm /tmp/openstack-disk-mount-tmpwdoX2V/root/.ssh" (you can
  see it in nova-compute.log), and that readlink drives one CPU to 100%
  usage. i04 stays in the BUILDING state indefinitely, and I have to
  hard-reboot the host.
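
  To confirm that i04 is stuck rather than just slow, a rough check
  (these commands are my own suggestion, not part of the original
  report) is:
  ######################################################################################
  # i04 stays in BUILD instead of reaching ACTIVE.
  nova list

  # Find the runaway readlink spawned during file injection; in the case
  # above it pins a CPU at 100%.
  ps -eo pid,pcpu,etime,args | grep '[r]eadlink -nm /tmp/openstack-disk-mount'
  ######################################################################################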

  Everything is OK if I delete i03 first, then i02, and i04 last.

  You can reproduce this bug with the following script:
  ######################################################################################
  i01=`nova boot --flavor 1 --image  e7e31cbd-14d3-4ac9-a9fe-fc6f49e33655 --key_name key01 i01`
  sleep 20
  i02=`nova boot --flavor 1 --image  e7e31cbd-14d3-4ac9-a9fe-fc6f49e33655 --key_name key01 i02`
  sleep 20
  i03=`nova boot --flavor 1 --image  e7e31cbd-14d3-4ac9-a9fe-fc6f49e33655 --key_name key01 i03`
  sleep 20
  nova delete i01
  sleep 20
  i04=`nova boot --flavor 1 --image  e7e31cbd-14d3-4ac9-a9fe-fc6f49e33655 --key_name key01 i04`
  sleep 20
  nova list

  #####################################################################################
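
  After the script finishes, a quick way to confirm the failure (a
  sketch; the mount check assumes the instance directories live under
  /var/lib/nova/instances as in the log above):
  ######################################################################################
  # i04 should be stuck in BUILD rather than ACTIVE.
  nova list

  # In the reported case the deleted i01 leaves a busy rootfs mount
  # behind, which is what blocks the instance directory cleanup.
  grep /var/lib/nova/instances /proc/mounts
  ######################################################################################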

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1098458/+subscriptions