
yahoo-eng-team team mailing list archive

[Bug 1672624] [NEW] Ceph volumes attached to local deleted instance could not be correctly handled

 

Public bug reported:

How to reproduce:
1. Launch an instance.
2. Create a volume with a Ceph backend.
3. Attach the volume created in step 2 to the instance.
4. Kill nova-compute.
5. Delete the instance; because nova-compute is down, the deletion takes the local_delete path.
6. Check the volume's status with "cinder list": the volume shows as "available".
7. Try to delete the volume; the deletion fails:
2017-03-14 11:40:41.050 DEBUG oslo_messaging._drivers.amqpdriver received message with unique_id: 061b4f9b52aa425d97811c066133b170 from (pid=480) __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:215
2017-03-14 11:40:41.056 DEBUG cinder.coordination req-774b4680-d861-4e16-bad4-a032ff0b3579 None Lock "7c7d03d9-3244-4923-b72e-459677ee48aa-delete_volume" acquired by "delete_volume" :: waited 0.000s from (pid=480) _synchronized /opt/stack/cinder/cinder/coordination.py:300
2017-03-14 11:40:41.155 DEBUG cinder.volume.drivers.rbd req-774b4680-d861-4e16-bad4-a032ff0b3579 admin None connecting to ceph (timeout=-1). from (pid=480) _connect_to_rados /opt/stack/cinder/cinder/volume/drivers/rbd.py:299
2017-03-14 11:40:42.376 DEBUG cinder.volume.drivers.rbd req-774b4680-d861-4e16-bad4-a032ff0b3579 None volume has no backup snaps from (pid=480) _delete_backup_snaps /opt/stack/cinder/cinder/volume/drivers/rbd.py:660
2017-03-14 11:40:42.377 DEBUG cinder.volume.drivers.rbd req-774b4680-d861-4e16-bad4-a032ff0b3579 admin None Volume volume-7c7d03d9-3244-4923-b72e-459677ee48aa is not a clone. from (pid=480) _get_clone_info /opt/stack/cinder/cinder/volume/drivers/rbd.py:683
2017-03-14 11:40:42.382 DEBUG cinder.volume.drivers.rbd req-774b4680-d861-4e16-bad4-a032ff0b3579 None deleting rbd volume volume-7c7d03d9-3244-4923-b72e-459677ee48aa from (pid=480) delete_volume /opt/stack/cinder/cinder/volume/drivers/rbd.py:781
2017-03-14 11:40:42.570 DEBUG cinder.utils req-774b4680-d861-4e16-bad4-a032ff0b3579 admin None Failed attempt 1 from (pid=480) _print_stop /opt/stack/cinder/cinder/utils.py:780
...
2017-03-14 11:41:12.950 WARNING cinder.volume.drivers.rbd req-774b4680-d861-4e16-bad4-a032ff0b3579 admin None ImageBusy error raised while deleting rbd volume. This may have been caused by a connection from a client that has crashed and, if so, may be resolved by retrying the delete after 30 seconds has elapsed.
2017-03-14 11:41:12.955 ERROR cinder.volume.manager req-774b4680-d861-4e16-bad4-a032ff0b3579 admin None Unable to delete busy volume.
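The steps above can be run from the command line roughly as follows. This is a sketch, not a verified transcript: the instance and volume names, the flavor/image, the "ceph" volume type, and the "volumes" Ceph pool name are placeholders assumed for illustration (the pool name matches the devstack default), using the nova/cinder CLIs of the same era as this report:

```shell
# Steps 1-3: boot an instance, create a Ceph-backed volume, attach it.
nova boot --flavor m1.small --image cirros test-vm      # placeholder names
cinder create --volume-type ceph --name test-vol 1      # assumes a "ceph" volume type
nova volume-attach test-vm <volume-uuid>

# Step 4: kill nova-compute on the instance's host.
sudo pkill -9 -f nova-compute

# Step 5: delete the instance. With the compute service down, the API
# takes the local_delete path and the volume is never cleanly detached.
nova delete test-vm

# Step 6: the volume is reported "available", even though the RBD image
# may still be held open by the dead qemu/librbd client.
cinder list

# Step 7: deleting the volume fails with ImageBusy. The stale client
# can be confirmed by listing the watchers on the RBD image
# ("volumes" pool name is an assumption here).
cinder delete <volume-uuid>
rbd status volumes/volume-<volume-uuid>
```

If `rbd status` still lists a watcher for the image, the delete cannot succeed until that stale client session expires, which matches the driver's suggestion in the warning above to retry after about 30 seconds.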

** Affects: cinder
     Importance: Undecided
         Status: New

** Affects: nova
     Importance: Undecided
         Status: New

** Summary changed:

- Volumes attached to local deleted ceph volume could not be correctly handled
+ Ceph volumes attached to local deleted instance could not be correctly handled


** Also affects: cinder
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1672624

Title:
  Ceph volumes attached to local deleted instance could not be correctly
  handled

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1672624/+subscriptions

