
yahoo-eng-team team mailing list archive

[Bug 1471271] [NEW] Volume detach leaves volume attached to instance on start/rebuild/reboot


Public bug reported:

When starting, restarting, or rebuilding an instance, a volume-detach
request may arrive right in the middle of a volume attach in the
driver. In this case the hypervisor (e.g. libvirt) raises a
DiskNotFound exception in the driver.detach_volume() call, but the
volume still gets attached eventually, once the instance starts.

This leaves the instance in a state where the volume is de facto
attached to it (i.e. it is shown in the `virsh dumpxml $instance`
output for libvirt), while both Nova and Cinder believe the volume is
*not* in-use.

Steps to reproduce:

1. Create an instance and attach a volume to it.
2. Stop the instance.
3. Start the instance and send several volume-detach requests in a row, like:

   nova start demo && nova volume-detach demo $volume_id || nova volume-detach demo $volume_id || nova volume-detach demo $volume_id || nova volume-detach demo $volume_id

4. Check the output of cinder list, nova show $inst, and virsh dumpxml $inst.

Expected result:

Both cinder list and nova show report that the volume is no longer
in-use. There are no volume-related elements in the virsh dumpxml
output.

Actual result:

Both cinder list and nova show report that the volume is no longer
in-use, but virsh dumpxml shows that the volume is still attached to
the instance.
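The race described above can be sketched with a minimal toy model (the
DiskNotFound, FakeHypervisor, and reproduce_race names below are
illustrative stand-ins, not Nova's actual code): a detach request lands
while the attach is still in flight, the hypervisor reports the disk as
not found, the caller treats that as "already detached" and clears the
bookkeeping, and then the attach completes anyway.

```python
import threading
import time


class DiskNotFound(Exception):
    """Stand-in for the hypervisor's 'disk not found' error (cf. libvirt)."""


class FakeHypervisor:
    """Toy hypervisor model; not Nova's real driver API."""

    def __init__(self):
        self.attached = set()
        self.lock = threading.Lock()

    def attach_volume(self, volume_id, delay):
        time.sleep(delay)  # the attach is still in flight for `delay` seconds
        with self.lock:
            self.attached.add(volume_id)

    def detach_volume(self, volume_id):
        with self.lock:
            if volume_id not in self.attached:
                raise DiskNotFound(volume_id)
            self.attached.remove(volume_id)


def reproduce_race():
    hv = FakeHypervisor()
    bookkeeping = {"vol-1": "in-use"}  # what Nova/Cinder believe

    attacher = threading.Thread(target=hv.attach_volume, args=("vol-1", 0.2))
    attacher.start()

    try:
        # The detach arrives mid-attach: the disk is not there *yet*.
        hv.detach_volume("vol-1")
    except DiskNotFound:
        # Treating "not found" as "already detached" clears the records...
        bookkeeping["vol-1"] = "available"

    attacher.join()  # ...but the attach then completes anyway.
    # Databases say the volume is free; the hypervisor still has it attached.
    return bookkeeping["vol-1"], "vol-1" in hv.attached
```

Under this model, reproduce_race() returns bookkeeping that says the
volume is available while the fake hypervisor still lists it as
attached, which mirrors the cinder list / virsh dumpxml disagreement
observed above.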

** Affects: nova
     Importance: Undecided
     Assignee: Roman Podoliaka (rpodolyaka)
         Status: New

** Changed in: nova
     Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471271

Title:
  Volume detach leaves volume attached to instance on
  start/rebuild/reboot

Status in OpenStack Compute (Nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471271/+subscriptions

