yahoo-eng-team team mailing list archive
Message #29643
[Bug 1433378] [NEW] Detaching a volume can delete volume devices attached to other VMs
Public bug reported:
When I detached a volume from my virtual machine, the nova-compute process deleted volume devices that were attached to other virtual machines.
As expected, nova-compute deletes a volume's devices when the volume is detached.
However, it also deleted volume devices belonging to other virtual machines,
and the file systems of those volumes became read-only.
My environment is as follows:
- OpenStack Juno 2014.2.1
- Hypervisor : KVM on Ubuntu 12.04.2
- Storage : Netapp iSCSI
I examined the device-deletion logic and found a critical flaw in it.
The deletion logic is:
nova/virt/libvirt/volume.py

    def _delete_mpath(self, iscsi_properties, multipath_device, ips_iqns):
        entries = self._get_iscsi_devices()
        # Loop through ips_iqns to construct all paths
        iqn_luns = []
        for ip, iqn in ips_iqns:
            iqn_lun = '%s-lun-%s' % (iqn,
                                     iscsi_properties.get('target_lun', 0))
            iqn_luns.append(iqn_lun)

        for dev in ['/dev/disk/by-path/%s' % dev for dev in entries]:
            for iqn_lun in iqn_luns:
                if iqn_lun in dev:  # <- this substring check is the problem
                    self._delete_device(dev)
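To make the over-match concrete, here is a minimal, self-contained demonstration (the device paths are illustrative placeholders copied from this report, not real targets):

```python
# The string nova builds for the detached volume (LUN 1).
iqn_lun = 'iqn.xxx.netapp:sn.xxx:vs-lun-1'

# by-path entries present on the hypervisor.
devs = [
    '/dev/disk/by-path/ip-x.x.x.x:3260-iscsi-iqn.xxx.netapp:sn.xxx:vs-lun-1',
    '/dev/disk/by-path/ip-x.x.x.x:3260-iscsi-iqn.xxx.netapp:sn.xxx:vs-lun-10',
    '/dev/disk/by-path/ip-x.x.x.x:3260-iscsi-iqn.xxx.netapp:sn.xxx:vs-lun-2',
]

# The `in` operator is a substring test, so '...vs-lun-1' also
# matches '...vs-lun-10' -- two devices get selected for deletion.
matched = [d for d in devs if iqn_lun in d]
print(matched)
```

Running this selects both the lun-1 and lun-10 paths, which is exactly the over-deletion described below.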
The failure scenario in my case:
1. Volumes with LUN IDs 1 through 19 are attached to virtual machines on the same hypervisor.
2. I detach the volume with LUN ID 1.
3. nova-compute also deletes the devices of the volumes with LUN IDs 10 through 19, via the "_delete_mpath" function in nova.
More information:
1. Problem logic: if iqn_lun in dev:
2. "dev" iterates over every iSCSI disk on the hypervisor:
/dev/disk/by-path/ip-x.x.x.x:3260-iscsi-iqn.xxx.netapp:sn.xxx:vs-lun-1
/dev/disk/by-path/ip-x.x.x.x:3260-iscsi-iqn.xxx.netapp:sn.xxx:vs-lun-10
/dev/disk/by-path/ip-x.x.x.x:3260-iscsi-iqn.xxx.netapp:sn.xxx:vs-lun-11
/dev/disk/by-path/ip-x.x.x.x:3260-iscsi-iqn.xxx.netapp:sn.xxx:vs-lun-12
/dev/disk/by-path/ip-x.x.x.x:3260-iscsi-iqn.xxx.netapp:sn.xxx:vs-lun-2
3. iqn_lun is "iqn.xxx.netapp:sn.xxx:vs-lun-1", built by the logic below:
iqn_lun = '%s-lun-%s' % (iqn, iscsi_properties.get('target_lun', 0))
4. Because of the substring test, nova-compute tries to delete every device whose path contains "iqn.xxx.netapp:sn.xxx:vs-lun-1":
/dev/disk/by-path/ip-x.x.x.x:3260-iscsi-iqn.xxx.netapp:sn.xxx:vs-lun-1 <- deleted
/dev/disk/by-path/ip-x.x.x.x:3260-iscsi-iqn.xxx.netapp:sn.xxx:vs-lun-10 <- deleted
/dev/disk/by-path/ip-x.x.x.x:3260-iscsi-iqn.xxx.netapp:sn.xxx:vs-lun-11 <- deleted
/dev/disk/by-path/ip-x.x.x.x:3260-iscsi-iqn.xxx.netapp:sn.xxx:vs-lun-12 <- deleted
This bug is critical because nova-compute can delete devices that belong to other VMs,
after which the file systems of those volumes turn read-only.
This actually happened in my environment, and a customer's file system was switched to read-only.
I think nova-compute should delete a device only when its LUN ID matches exactly.
Please fix this bug ASAP.
Thank you.
** Affects: nova
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1433378
Title:
  Detaching a volume can delete volume devices attached to other VMs
Status in OpenStack Compute (Nova):
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1433378/+subscriptions