
yahoo-eng-team team mailing list archive

[Bug 1463856] Re: Cinder volume isn't available after instance soft-deleted timer expired while volume is still attached

 

[Looks like a Nova-related issue, since Nova is not requesting a volume
detach from Cinder -- so moving this bug to the Nova component.]

** Project changed: cinder => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463856

Title:
  Cinder volume isn't available after instance soft-deleted timer
  expired while volume is still attached

Status in OpenStack Compute (nova):
  New

Bug description:
  Description of problem:
  There is a feature in Nova that allows you to restore a soft-deleted instance (nova restore). When an instance is deleted, there is a window of time (defined in nova.conf by reclaim_instance_interval) during which the instance can be restored, including its volume and floating IP attachments. Once this timer expires, the instance goes to DELETED status and should release all of its resources, including the attached volume.
  In this case the volume remains attached to an instance in DELETED state and is not usable by a non-admin user.
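
  For reference, a minimal sketch of the relevant configuration and the restore call (the 60-second value matches the reproduction steps below; the instance id is a placeholder):

  # /etc/nova/nova.conf
  [DEFAULT]
  # Seconds an instance stays SOFT_DELETED before the periodic reclaim task purges it.
  # The default, 0, disables soft delete reclaim.
  reclaim_instance_interval = 60

  # Restore a soft-deleted instance before the interval expires:
  nova restore <instance-id>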

  Version-Release number of selected component (if applicable):
  # rpm -qa | grep -i cinder
  openstack-cinder-2015.1.0-2.el7ost.noarch
  python-cinder-2015.1.0-2.el7ost.noarch
  python-cinderclient-1.1.1-1.el7ost.noarch

  # rpm -qa | grep -i nova
  openstack-nova-cert-2015.1.0-4.el7ost.noarch
  openstack-nova-compute-2015.1.0-4.el7ost.noarch
  openstack-nova-common-2015.1.0-4.el7ost.noarch
  python-nova-2015.1.0-4.el7ost.noarch
  openstack-nova-conductor-2015.1.0-4.el7ost.noarch
  openstack-nova-scheduler-2015.1.0-4.el7ost.noarch
  openstack-nova-console-2015.1.0-4.el7ost.noarch
  python-novaclient-2.23.0-1.el7ost.noarch
  openstack-nova-api-2015.1.0-4.el7ost.noarch
  openstack-nova-novncproxy-2015.1.0-4.el7ost.noarch

  
  How reproducible:
  100%

  Steps to Reproduce:
  1. Edit /etc/nova/nova.conf and change reclaim_instance_interval=60
  2. Restart the nova services
  3. Create a volume and attach it to an instance
  4. Delete the instance - make sure it is in the "SOFT_DELETED" state
  5. Wait for the timer to expire and make sure the instance is in the "DELETED" state
  6. The volume is still shown as attached in the CLI, and in Horizon it is shown as attached to "None" (see the command sketch below)
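
  A hedged CLI sketch of the steps above (names and ids are placeholders; the volume name and size are assumptions):

  # 1-2. Set reclaim_instance_interval = 60 in /etc/nova/nova.conf, then restart the nova services.
  # 3. Create a volume and attach it to a running instance:
  cinder create --display-name test-vol 1
  nova volume-attach <instance-id> <volume-id>
  # 4. Soft-delete the instance and verify its state:
  nova delete <instance-id>
  nova show <instance-id>
  # 5-6. After the timer expires the instance is purged, but the volume still reports an attachment:
  cinder show <volume-id>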

  Actual results:
  Volume is not usable

  Expected results:
  Volume should be released and usable

  Additional info:
  Attaching the cinder and nova log directories

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463856/+subscriptions