[Bug 1762687] [NEW] Concurrent requests to attach the same non-multiattach volume to multiple instances can succeed
Public bug reported:
Description
===========
Discovered this by chance yesterday. At first glance this appears to be
due to a lack of locking within c-api (cinder-api) when we initially
create the attachments, and to no additional validation when we later
update the attachments with the connector during the attach flow.
Reporting this against both nova and cinder for now.
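For illustration only, here is a minimal sketch of the suspected check-then-act race; the in-memory volume record and helper name are hypothetical and are not the actual cinder-api code. Both concurrent requests read the volume as 'available' before either one writes 'reserved', so both attachments are created:

# Hypothetical sketch of the race described above; not actual Cinder code.
import threading
import time

# Stand-in for the Cinder volume DB row.
volume = {'status': 'available', 'multiattach': False, 'attachments': []}

def attachment_create(instance_uuid):
    # 1. Check: with no lock, both requests can pass this test before
    #    either of them has updated the status.
    if volume['status'] != 'available' and not volume['multiattach']:
        raise Exception('volume is not available')
    time.sleep(0.1)  # window in which the other request runs the same check
    # 2. Act: the status transition is a separate, unguarded write.
    volume['status'] = 'reserved'
    volume['attachments'].append(instance_uuid)

threads = [threading.Thread(target=attachment_create, args=(uuid,))
           for uuid in ('77c092c2-9664-42b2-bf71-277b1bfad707',
                        '91a7c490-5a9a-4048-a109-b1159b7f0e79')]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(volume['attachments'])  # both instance UUIDs, mirroring the output below

The transcript below shows the resulting behaviour.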
$ nova volume-attach 77c092c2-9664-42b2-bf71-277b1bfad707 b4240f39-da7a-4372-b4ca-15a0c6121ac8 & nova volume-attach 91a7c490-5a9a-4048-a109-b1159b7f0e79 b4240f39-da7a-4372-b4ca-15a0c6121ac8
[1] 24949
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
| serverId | 91a7c490-5a9a-4048-a109-b1159b7f0e79 |
| volumeId | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
+----------+--------------------------------------+
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
| serverId | 77c092c2-9664-42b2-bf71-277b1bfad707 |
| volumeId | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
+----------+--------------------------------------+
$ cinder show b4240f39-da7a-4372-b4ca-15a0c6121ac8
+--------------------------------+----------------------------------------------------------------------------------+
| Property | Value |
+--------------------------------+----------------------------------------------------------------------------------+
| attached_servers | ['91a7c490-5a9a-4048-a109-b1159b7f0e79', '77c092c2-9664-42b2-bf71-277b1bfad707'] |
| attachment_ids | ['31b8c16f-07d0-4f0c-95d8-c56797a270dc', 'a7eb9cb1-b7be-44e3-a176-3c6989459aaa'] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2018-04-09T19:02:46.000000 |
| description | None |
| encrypted | False |
| id | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
| metadata | attached_mode : rw |
| migration_status | None |
| multiattach | False |
| name | None |
| os-vol-host-attr:host | test.example.com@lvmdriver-1#lvmdriver-1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | fe3128ecf4704369ae3f7ede03f6bc29 |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | in-use |
| updated_at | 2018-04-09T19:04:24.000000 |
| user_id | 57293b0839da449580ce7008c8734c1c |
| volume_type | lvmdriver-1 |
+--------------------------------+----------------------------------------------------------------------------------+
$ ll /dev/disk/by-path/ip-192.168.122.79:3260-iscsi-iqn.2010-10.org.openstack:volume-b4240f39-da7a-4372-b4ca-15a0c6121ac8-lun-0
lrwxrwxrwx. 1 root root 9 Apr 9 15:04 /dev/disk/by-path/ip-192.168.122.79:3260-iscsi-iqn.2010-10.org.openstack:volume-b4240f39-da7a-4372-b4ca-15a0c6121ac8-lun-0 -> ../../sdc
$ sudo virsh domblklist 77c092c2-9664-42b2-bf71-277b1bfad707
Target Source
------------------------------------------------
vda /opt/stack/data/nova/instances/77c092c2-9664-42b2-bf71-277b1bfad707/disk
vdb /dev/sdc
$ sudo virsh domblklist 91a7c490-5a9a-4048-a109-b1159b7f0e79
Target Source
------------------------------------------------
vda /opt/stack/data/nova/instances/91a7c490-5a9a-4048-a109-b1159b7f0e79/disk
vdb /dev/sdc
Steps to reproduce
==================
$ nova volume-attach $instance_1_uuid $volume_uuid & nova volume-attach $instance_2_uuid $volume_uuid
Expected result
===============
Only one of the requests succeeds.
Actual result
=============
Both requests succeed.
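As an aside on why only one of the requests should succeed: if the status transition were performed as an atomic conditional update (a compare-and-swap on the volume status) rather than a separate read and write, the second request would match zero rows and be rejected. A minimal sketch of that idea, using an in-memory SQLite table purely for illustration; the table and helper name are hypothetical and this is not the actual Cinder schema or fix:

# Hypothetical illustration: a conditional UPDATE makes the check and the
# write a single atomic step, so the database serialises concurrent callers.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE volumes (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO volumes VALUES ('b4240f39', 'available')")
conn.commit()

def attachment_create(instance_uuid):
    # Only one caller can match status = 'available'; any later caller
    # updates zero rows and is rejected.
    cur = conn.execute(
        "UPDATE volumes SET status = 'reserved' "
        "WHERE id = 'b4240f39' AND status = 'available'")
    conn.commit()
    return 'attached' if cur.rowcount else 'rejected'

print(attachment_create('77c092c2-9664-42b2-bf71-277b1bfad707'))  # attached
print(attachment_create('91a7c490-5a9a-4048-a109-b1159b7f0e79'))  # rejected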
Environment
===========
1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/
$ cd ../nova
$ git describe --tags
17.0.0.0rc1-656-gebb4817ce3
$ cd ../cinder/
$ git describe --tags
12.0.0.0rc1-365-ga8a9dda30
2. Which hypervisor did you use?
(For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
What's the version of that?
Libvirt + KVM
3. Which storage type did you use?
(For example: Ceph, LVM, GPFS, ...)
What's the version of that?
LVM + iSCSI
4. Which networking type did you use?
(For example: nova-network, Neutron with OpenVSwitch, ...)
N/A
** Affects: cinder
Importance: Undecided
Status: New
** Affects: nova
Importance: Undecided
Assignee: Lee Yarwood (lyarwood)
Status: New
** Tags: cinder volumes
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1762687
Title:
Concurrent requests to attach the same non-multiattach volume to
multiple instances can succeed
Status in Cinder:
New
Status in OpenStack Compute (nova):
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1762687/+subscriptions