
yahoo-eng-team team mailing list archive

[Bug 1631692] [NEW] After unrescuing an instance, deleting a detached RBD volume fails

 

Public bug reported:

It is not the rescue image volume that is affected, but a volume (a Ceph RBD device, e.g. vdb) that was already attached to the instance before the rescue.
After unrescuing the instance, detaching that volume and then trying to delete it fails.
Specifically:
1. The Ceph RBD image still has a watcher registered. You can confirm this with the cmd-line: rbd status volumes/volume-uuid (see the example after this list).
   While a watcher is present, the image cannot be removed with rbd rm.
2. The Cinder database is updated as if the detach succeeded: status -> available, attach_status -> detached.
3. The instance can still see and use the RBD device: you can format, mount, and read/write the disk. However, if you reboot the instance, the disk disappears.
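
For reference, the watcher check and the resulting removal failure look roughly like this (the image name, address, and IDs are placeholders; output abbreviated):

  $ rbd status volumes/volume-<uuid>
  Watchers:
          watcher=<compute-host>:0/3021416872 client.4135 cookie=18446462598732840961
  $ rbd rm volumes/volume-<uuid>
  Removing image: 0% complete...failed.
  rbd: error: image still has watchers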

The Tempest test case and the resulting error:
tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume
ERROR cinder.volume.manager [req-xxx] Unable to delete busy volume
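
With a configured Tempest environment, that single test can be run like this (a sketch; the deployment and Tempest config are assumed to be set up already):

  $ tempest run --regex tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume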

Steps to reproduce:
1. nova volume-attach instance-uuid volume-uuid
2. nova rescue --password admin --image image-uuid instance-uuid
3. nova unrescue instance-uuid
4. nova volume-detach instance-uuid volume-uuid
   Then run rbd status volumes/volume-uuid to check whether the RBD image still has a watcher.
5. If the watcher is still present, any attempt to delete the volume will fail (see the session sketch below).
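
Put together, a minimal reproduction session might look like this (a sketch; all UUIDs are placeholders):

  $ nova volume-attach <instance-uuid> <volume-uuid>
  $ nova rescue --password admin --image <image-uuid> <instance-uuid>
  $ nova unrescue <instance-uuid>
  $ nova volume-detach <instance-uuid> <volume-uuid>
  # the watcher from the compute host is still listed:
  $ rbd status volumes/volume-<volume-uuid>
  # and deleting the volume now fails on the Cinder side:
  $ cinder delete <volume-uuid>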

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1631692

Title:
  After unrescuing an instance, deleting a detached RBD volume fails

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1631692/+subscriptions