yahoo-eng-team team mailing list archive - Message #14502
[Bug 956589] Re: Device is busy error on lxc instance shutdown
I hit the same error with havana-stable nova while booting a KVM instance with the libvirt driver:
2014-05-20 14:44:13.179 24237 DEBUG nova.virt.disk.mount.api [req-63dccfa5-ed06-4daa-8759-ee22c1259edf 830d8718e9e4454a886dee12ce3e8b8e dfbb396e096e4f2f95f2d7b6a6713e8c] Umount /dev/mapper/nbd9p1 unmnt_dev /usr/lib/python2.7/dist-packages/nova/virt/disk/mount/api.py:208
2014-05-20 14:44:13.180 24237 DEBUG nova.openstack.common.processutils [req-63dccfa5-ed06-4daa-8759-ee22c1259edf 830d8718e9e4454a886dee12ce3e8b8e dfbb396e096e4f2f95f2d7b6a6713e8c] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf umount /dev/mapper/nbd9p1 execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:147
2014-05-20 14:44:13.249 24237 DEBUG nova.virt.disk.mount.api [req-63dccfa5-ed06-4daa-8759-ee22c1259edf 830d8718e9e4454a886dee12ce3e8b8e dfbb396e096e4f2f95f2d7b6a6713e8c] Unmap dev /dev/nbd9 unmap_dev /usr/lib/python2.7/dist-packages/nova/virt/disk/mount/api.py:184
2014-05-20 14:44:13.250 24237 DEBUG nova.openstack.common.processutils [req-63dccfa5-ed06-4daa-8759-ee22c1259edf 830d8718e9e4454a886dee12ce3e8b8e dfbb396e096e4f2f95f2d7b6a6713e8c] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf kpartx -d /dev/nbd9 execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:147
2014-05-20 14:44:13.324 24237 DEBUG nova.openstack.common.processutils [req-63dccfa5-ed06-4daa-8759-ee22c1259edf 830d8718e9e4454a886dee12ce3e8b8e dfbb396e096e4f2f95f2d7b6a6713e8c] Result was 1 execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:172
2014-05-20 14:44:13.325 24237 DEBUG nova.virt.disk.vfs.localfs [req-63dccfa5-ed06-4daa-8759-ee22c1259edf 830d8718e9e4454a886dee12ce3e8b8e dfbb396e096e4f2f95f2d7b6a6713e8c] Failed to unmount /tmp/openstack-vfs-localfsUz8SQH: Unexpected error while running command.
2014-05-20 14:44:13.326 24237 DEBUG nova.virt.disk.vfs.localfs [req-63dccfa5-ed06-4daa-8759-ee22c1259edf 830d8718e9e4454a886dee12ce3e8b8e dfbb396e096e4f2f95f2d7b6a6713e8c] Failed to remove /tmp/openstack-vfs-localfsUz8SQH: [Errno 16] Device or resource busy: '/tmp/openstack-vfs-localfsUz8SQH' teardown /usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py:98
I then performed the file-injection steps manually, as below:
1) qemu-nbd -c /dev/nbd16 $INSTANCE_PATH/$UUID/disk
2) kpartx -a /dev/nbd16
3) mount /dev/mapper/nbd16p1 /tmp/openstack-localfsxxxxx
and then tried to release those resources (with no file operations under /tmp/openstack-localfsxxxxx in between):
1) umount /dev/mapper/nbd16p1 (this returns with no error/warning output, and I can no longer see any files under /tmp/openstack-localfsxxxxx, so I think it succeeds)
2) kpartx -d /dev/nbd16 (this fails with 'device-mapper: remove ioctl on nbd16p1 failed: Device or resource busy')
3) retrying the same operation, it eventually succeeds after a few minutes (roughly 1 to 3 in my case).
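The manual teardown above shows the busy state is transient, which is exactly what an 'attempts'-style retry addresses. Here is a minimal, self-contained sketch of such a retry; the helper name and the injectable 'run' callable are mine, for illustration only:

```python
import subprocess
import time

def retry_until_ok(cmd, attempts=3, delay=1.0, run=subprocess.check_call):
    """Retry a flaky command a few times before giving up.

    'run' is injectable so the sketch can be exercised without real
    devices; in practice it would just be subprocess.check_call.
    Returns the attempt number that succeeded.
    """
    for attempt in range(1, attempts + 1):
        try:
            run(cmd)
            return attempt
        except subprocess.CalledProcessError:
            if attempt == attempts:
                raise
            time.sleep(delay)

# e.g. retry_until_ok(['kpartx', '-d', '/dev/nbd16'])
```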
I think this is the real issue to fix here: we should pass an 'attempts' parameter when running the 'kpartx -d' operation,
the same change as unget_dev in loop.py:
def unget_dev(self):
    if not self.linked:
        return
    # NOTE(mikal): On some kernels, losetup -d will intermittently fail,
    # thus leaking a loop device unless the losetup --detach is retried:
    # https://lkml.org/lkml/2012/9/28/62
    LOG.debug(_("Release loop device %s"), self.device)
    utils.execute('losetup', '--detach', self.device, run_as_root=True,
                  attempts=3)
    self.linked = False
    self.device = None
** Changed in: nova
Status: Invalid => Confirmed
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/956589
Title:
Device is busy error on lxc instance shutdown
Status in OpenStack Compute (Nova):
Confirmed
Bug description:
Sometimes I get this error when shutting down an instance:
2012-03-15 19:49:33 ERROR nova.virt.disk.api [-] Failed to remove container: Unexpected error while running command.
Command: sudo nova-rootwrap kpartx -d /dev/nbd9
Exit code: 1
Stdout: ''
Stderr: 'device-mapper: remove ioctl failed: Device or resource busy\n'
(nova.virt.disk.api): TRACE: Traceback (most recent call last):
(nova.virt.disk.api): TRACE: File "/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py", line 288, in destroy_container
(nova.virt.disk.api): TRACE: img.umount()
(nova.virt.disk.api): TRACE: File "/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py", line 216, in umount
(nova.virt.disk.api): TRACE: self._mounter.do_umount()
(nova.virt.disk.api): TRACE: File "/usr/lib/python2.7/dist-packages/nova/virt/disk/mount.py", line 125, in do_umount
(nova.virt.disk.api): TRACE: self.unmap_dev()
(nova.virt.disk.api): TRACE: File "/usr/lib/python2.7/dist-packages/nova/virt/disk/mount.py", line 92, in unmap_dev
(nova.virt.disk.api): TRACE: utils.execute('kpartx', '-d', self.device, run_as_root=True)
(nova.virt.disk.api): TRACE: File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 240, in execute
(nova.virt.disk.api): TRACE: cmd=' '.join(cmd))
(nova.virt.disk.api): TRACE: ProcessExecutionError: Unexpected error while running command.
(nova.virt.disk.api): TRACE: Command: sudo nova-rootwrap kpartx -d /dev/nbd9
(nova.virt.disk.api): TRACE: Exit code: 1
(nova.virt.disk.api): TRACE: Stdout: ''
(nova.virt.disk.api): TRACE: Stderr: 'device-mapper: remove ioctl failed: Device or resource busy\n'
(nova.virt.disk.api): TRACE:
It happens when I have a daemon running inside the LXC instance.
I'm using the current trunk and this is my nova.conf file:
--auth_strategy=keystone
--sql_connection=postgresql://nova:password@localhost:5432/nova
--allow_admin_api
--allow_ec2_admin_api
--img_handlers=nbd
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--force_dhcp_release
--iscsi_helper=tgtadm
--libvirt_use_virtio_for_bridges
--connection_type=libvirt
--root_helper=sudo nova-rootwrap
--verbose
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/956589/+subscriptions