[Bug 1937084] Re: Nova thinks deleted volume is still attached

 

Reviewed:  https://review.opendev.org/c/openstack/nova/+/812127
Committed: https://opendev.org/openstack/nova/commit/067cd93424ea1e62c77744986a5479d1b99b0ffe
Submitter: "Zuul (22348)"
Branch:    master

commit 067cd93424ea1e62c77744986a5479d1b99b0ffe
Author: Lee Yarwood <lyarwood@xxxxxxxxxx>
Date:   Fri Oct 1 12:21:57 2021 +0100

    block_device: Ignore VolumeAttachmentNotFound during detach
    
    Bug #1937084 details a race condition within Cinder where requests to
    delete an attachment and later delete the underlying volume can race
    leading to the initial request returning a 404 if the volume delete
    completes first.
    
    This change attempts to handle this within Nova during a detach as we
    ultimately don't care that the volume and/or volume attachment are no
    longer available within Cinder. This allows Nova to complete its own
    cleanup of the BlockDeviceMapping record resulting in the volume no
    longer appearing attached in Nova's APIs.
    
    Closes-Bug: #1937084
    
    Change-Id: I191552652d8ff5206abad7558c99bce27979dc84
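
A minimal sketch of the shape of this change (the commit touches Nova's
block_device module; the helper and object names below are simplified
stand-ins, not the verbatim code):

    class VolumeAttachmentNotFound(Exception):
        """Stand-in for nova.exception.VolumeAttachmentNotFound."""

    def detach(context, cinder_api, bdm):
        try:
            # Synchronous attachment delete; Cinder may return a 404 if
            # the volume itself was deleted while this request was still
            # in flight.
            cinder_api.attachment_delete(context, bdm.attachment_id)
        except VolumeAttachmentNotFound:
            # The attachment and/or volume are already gone in Cinder,
            # and Nova ultimately does not care: fall through to its own
            # cleanup.
            pass
        # Destroy the BlockDeviceMapping record so the volume no longer
        # appears attached in Nova's API.
        bdm.destroy()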


** Changed in: nova
       Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1937084

Title:
  Nova thinks deleted volume is still attached

Status in Cinder:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There are cases where a Cinder volume no longer exists, yet Nova still
  thinks it is attached to an instance, and we cannot detach it anymore.

  This has been observed when running cinder-csi, which issues a volume
  delete request as soon as the volume's status says it is available.

  This is a Cinder race condition and, like most race conditions, it is
  not simple to explain.

  Some context on the issue:

  - Cinder API uses the volume "status" field as a locking mechanism to prevent concurrent request processing on the same volume (see the sketch after this list).
  - Most cinder operations are asynchronous, so the API returns before the operation has been completed by the cinder-volume service, but the attachment operations such as creating/updating/deleting an attachment are synchronous, so the API only returns to the caller after the cinder-volume service has completed the operation.
  - Our current code **incorrectly** modifies the status of the volume both on the cinder-volume and the cinder-api services on the attachment delete operation.
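
  The "status as lock" mechanism in the first bullet can be illustrated
  with an atomic conditional UPDATE; this is a minimal sketch against a
  hypothetical schema, not Cinder's actual code:

      import sqlite3

      # Toy volume table standing in for Cinder's DB.
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE volumes (id TEXT PRIMARY KEY, status TEXT)")
      conn.execute("INSERT INTO volumes VALUES ('vol-1', 'in-use')")

      def try_transition(conn, volume_id, expected, new):
          # Conditional UPDATE: it only succeeds if the status still
          # matches, so two concurrent requests cannot both "acquire"
          # the volume.
          cur = conn.execute(
              "UPDATE volumes SET status = ? WHERE id = ? AND status = ?",
              (new, volume_id, expected))
          conn.commit()
          return cur.rowcount == 1

      # The first caller takes the "lock" by moving in-use -> detaching;
      # a racing second caller attempting the same transition loses.
      assert try_transition(conn, 'vol-1', 'in-use', 'detaching')
      assert not try_transition(conn, 'vol-1', 'in-use', 'detaching')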

  The actual sequence of events that leads to the issue reported in this
  bug is:

  [Cinder-CSI]
  - Requests Nova to detach volume (Request R1)

  [Nova]
  - R1: Asks cinder-api to delete the attachment and **waits**

  [Cinder-API]
  - R1: Checks the status of the volume
  - R1: Sends terminate connection request (R1) to cinder-volume and **waits**

  [Cinder-Volume]
  - R1: Asks the driver to terminate the connection
  - R1: The driver asks the backend to unmap and unexport the volume
  - R1: The status of the volume is changed in the DB to "available"

  [Cinder-CSI]
  - Asks Cinder to delete the volume (Request R2)

  [Cinder-API]
  - R2: Checks that the volume's status is valid. It's available, so it can be deleted.
  - R2: Tells cinder-volume to delete the volume and returns immediately.

  [Cinder-Volume]
  - R2: Volume is deleted and DB entry is deleted
  - R1: Finish the termination of the connection

  [Cinder-API]
  - R1: Now that cinder-volume has finished the termination, the code continues
  - R1: Tries to modify the volume in the DB
  - R1: DB layer raises VolumeNotFound since the volume has been deleted from the DB
  - R1: VolumeNotFound is converted to HTTP 404 status code which is returned to Nova

  [Nova]
  - R1: Cinder responds with 404 on the attachment delete request
  - R1: Nova leaves the volume as attached, since the attachment delete failed
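
  The interleaving above can be reproduced in miniature with two threads;
  everything here is a toy stand-in (hypothetical names, a dict for the
  DB), just to show why R1's final DB update raises VolumeNotFound:

      import threading

      volumes = {'vol-1': {'status': 'detaching'}}
      db_lock = threading.Lock()
      r1_terminated = threading.Event()   # R1's backend terminate finished
      r2_deleted = threading.Event()      # R2 deleted the volume row

      class VolumeNotFound(Exception):
          pass

      def r1_attachment_delete():
          # cinder-volume: driver unmaps/unexports, then flips the status
          with db_lock:
              volumes['vol-1']['status'] = 'available'
          r1_terminated.set()
          r2_deleted.wait()               # force the racy interleaving
          # cinder-api: resumes after the synchronous terminate and tries
          # to touch the volume row
          with db_lock:
              if 'vol-1' not in volumes:
                  raise VolumeNotFound()  # surfaces to Nova as HTTP 404

      def r2_volume_delete():
          r1_terminated.wait()            # status now reads 'available'
          with db_lock:
              if volumes['vol-1']['status'] == 'available':
                  del volumes['vol-1']    # the delete wins the race
          r2_deleted.set()

      t2 = threading.Thread(target=r2_volume_delete)
      t2.start()
      try:
          r1_attachment_delete()
      except VolumeNotFound:
          print("R1: 404 returned to Nova; BDM left dangling")
      t2.join()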

  At this point the Cinder and Nova DBs are out of sync, because Nova
  thinks that the attachment is connected and Cinder has detached the
  volume and even deleted it.

  **This is caused by a Cinder bug**, but there is some robustification
  work that could be done in Nova: since the os-brick call succeeded, the
  volume could be left in a "detached from instance" state, and a second
  detach request could skip the os-brick call entirely and, once it sees
  that the volume or attachment no longer exists in Cinder, proceed to
  remove the device from the instance's XML (as sketched below).
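
  A hedged sketch of that retry path (all names are hypothetical, not
  Nova's actual driver API):

      class NotFound(Exception):
          """Stand-in for a Cinder 404 on an attachment lookup."""

      def retry_detach(context, cinder_api, virt_driver, instance, bdm):
          try:
              cinder_api.attachment_get(context, bdm.attachment_id)
          except NotFound:
              # Cinder already cleaned up its side, and os-brick already
              # disconnected the volume on the first attempt, so skip
              # both; just remove the device from the guest definition
              # (e.g. the libvirt domain XML) and delete Nova's BDM.
              virt_driver.detach_volume(context, instance, bdm.device_name)
              bdm.destroy()
              return
          # Otherwise run the normal, full detach path.
          ...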

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1937084/+subscriptions