yahoo-eng-team team mailing list archive
Message #11549
[Bug 1291835] [NEW] Repeated volume attach can cause u'message': u'The supplied device path (/dev/vdc) is in use.'
You have been subscribed to a public bug:
If you attach and detach the same volume to the same server in a loop,
the nova API (n-api) may report that the device name is already in use.
I used the stress test from https://review.openstack.org/#/c/77196/ with
the configuration below.
[{"action": "tempest.stress.actions.volume_attach_verify.VolumeVerifyStress",
  "threads": 1,
  "use_admin": false,
  "use_isolated_tenants": false,
  "kwargs": {"vm_extra_args": {},
             "new_volume": false,
             "new_server": false,
             "ssh_test_before_attach": false,
             "enable_ssh_verify": false}
 }
]
The issue occurs with every combination of config options, but this one
reproduces it fastest.
The issue can occur even after the device's disappearance has been
confirmed via ssh, i.e. it is no longer listed in /proc/partitions.
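The symptom is consistent with the device-name reservation outliving the actual detach. The sketch below is a toy model (hypothetical, not nova's actual code; the class and method names are invented for illustration) of how a re-attach can fail when the record that reserves the device name is cleaned up asynchronously, after the guest has already stopped listing the device:

```python
# Toy model of a device-name reservation table: the hypervisor-side detach
# finishes first, while the bookkeeping row that holds the name is removed
# later. A prompt re-attach then still sees the name as taken.

class DeviceNameReservations:
    def __init__(self):
        self._reserved = {}  # device path -> volume id

    def reserve(self, device, volume_id):
        if device in self._reserved:
            raise ValueError(
                "The supplied device path (%s) is in use." % device)
        self._reserved[device] = volume_id

    def release(self, device):
        self._reserved.pop(device, None)


res = DeviceNameReservations()
res.reserve("/dev/vdc", "vol-1")

# Detach: the guest no longer shows /dev/vdc in /proc/partitions, but the
# reservation has not been released yet (the lagging cleanup). A quick
# re-attach therefore fails even though the device looks free:
try:
    res.reserve("/dev/vdc", "vol-1")
except ValueError as exc:
    print(exc)  # The supplied device path (/dev/vdc) is in use.

# Once the cleanup finally runs, the same attach succeeds:
res.release("/dev/vdc")
res.reserve("/dev/vdc", "vol-1")
```

With multiple API/conductor workers this window is easy to hit repeatedly, which matches the stress-test result above.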
I used a devstack setup similar to the one the gate uses, with multiple
nova api/conductor workers.
NOTE: libvirt/qemu/Linux disregard the requested device name.
To reproduce the issue:
1. Add tempest to the enabled devstack services.
2. Apply https://review.openstack.org/#/c/77196 locally.
3. Change the logging options in tempest.conf: [DEFAULT] log_config_append=etc/logging.conf.sample
4. Run ./tempest/stress/run_stress.py -t tempest/stress/etc/volume-attach-verify.json -n 128 -S
If 128 attempts are not enough, you can increase the number of threads
(in the JSON config) or the number of attempts (as a CLI option).
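For scripted runs it can be handy to bump the thread count programmatically. A minimal sketch, assuming the JSON layout shown above (the helper function name is invented for illustration):

```python
# Rewrite the "threads" value in a tempest stress-test JSON config such as
# tempest/stress/etc/volume-attach-verify.json. The config is a list of
# action dicts, each with a "threads" key, as in the example above.
import json


def bump_threads(config_text, threads):
    config = json.loads(config_text)
    for action in config:
        action["threads"] = threads
    return json.dumps(config, indent=2)


sample = """\
[{"action": "tempest.stress.actions.volume_attach_verify.VolumeVerifyStress",
  "threads": 1,
  "use_admin": false,
  "use_isolated_tenants": false,
  "kwargs": {"new_volume": false}}]"""

print(bump_threads(sample, 4))
```

Write the result back over the config file, then rerun run_stress.py with a higher -n value.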
** Affects: nova
Importance: Undecided
Status: New
--
Repeated volume attach can cause u'message': u'The supplied device path (/dev/vdc) is in use.'
https://bugs.launchpad.net/bugs/1291835
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).