yahoo-eng-team team mailing list archive

[Bug 1998740] [NEW] refresh attachment error when bdm.attachment_id lost

 

Public bug reported:

Description
===========
I cherry-picked the Xena patch https://blueprints.launchpad.net/nova/+spec/nova-manage-refresh-connection-info for use on Wallaby, and an unexpected error occurred.
My OpenStack deployment is Wallaby, upgraded from Newton, and I needed to refresh the volume_attachment records of instances that were created on Newton. The bdm table field attachment_id was only added in Pike, so these old instances have no attachment_id set, and running the command against such an instance on Wallaby failed with an error that the volume attachment could not be found by the attachment id stored in the bdm.
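To illustrate where this breaks, here is a simplified sketch (not the actual nova-manage code; the class and function names are hypothetical): the refresh flow needs the attachment id stored on the bdm before it can find and delete the old Cinder attachment, and that id simply does not exist for pre-Pike attachments.

class Bdm(object):
    """Minimal stand-in for a block_device_mapping row (hypothetical)."""
    def __init__(self, attachment_id=None):
        self.attachment_id = attachment_id

def refresh_volume_attachment(bdm, volume_id, connector):
    """Simplified sketch; not the real nova-manage implementation."""
    if bdm.attachment_id is None:
        # Pre-Pike attachments hit this path: there is no attachment id
        # to look up, so the old Cinder attachment cannot be found or
        # deleted from here (this is where the reported failure shows up).
        raise RuntimeError(
            'bdm for volume %s has no attachment_id' % volume_id)
    # With a valid id, the old attachment would be deleted here before a
    # new one is created from the supplied connector and stored back on
    # the bdm.

# A Newton-era instance (attachment_id never populated) fails immediately:
try:
    refresh_volume_attachment(
        Bdm(None), 'fb4e520c-92d3-4db0-8d5c-e325639f2ab3', connector={})
except RuntimeError as exc:
    print('refresh failed: %s' % exc)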

Steps to reproduce
==================
nova-manage volume_attachment refresh instance_uuid volume_id connector.json
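The connector.json argument describes the host the volume should be reconnected from. One way such a file could be generated is with os-brick; this is an assumption on my part rather than something required by the command, and the IP and root helper are example values (the IP is taken from the connector recorded in the logs below).

# Sketch: dump this host's connector properties to connector.json.
# Assumes os-brick is installed; run on the host the volume is
# attached from.
import json

from os_brick.initiator import connector

props = connector.get_connector_properties(
    root_helper='sudo',      # privilege escalation helper
    my_ip='10.232.0.100',    # this host's IP address (example value)
    multipath=False,
    enforce_multipath=False)

with open('connector.json', 'w') as f:
    json.dump(props, f, indent=2)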

Expected result
===============
The old volume attachment is deleted and a new one is created successfully.

Actual result
=============
A new volume_attachment was created, but the old one was not deleted.

Environment
===========
Wallaby upgraded from Newton, hypervisor: Libvirt+KVM

Logs & Configs
==============
Querying the cinder database shows two active volume attachments for the same instance:

MariaDB [(none)]> use cinder
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [cinder]> select * from volume_attachment where instance_uuid='a30253a2-6ca9-4e7c-9516-e92be85e08b5' and deleted=0\G;
*************************** 1. row ***************************
     created_at: 2022-08-30 05:50:56
     updated_at: 2022-08-30 05:50:56
     deleted_at: NULL
        deleted: 0
             id: 984dea0a-dd46-4552-b1ae-c10622a148db
      volume_id: fb4e520c-92d3-4db0-8d5c-e325639f2ab3
  attached_host: NULL
  instance_uuid: a30253a2-6ca9-4e7c-9516-e92be85e08b5
     mountpoint: /dev/vda
    attach_time: 2022-08-30 05:50:56
    detach_time: NULL
    attach_mode: rw
  attach_status: attached
connection_info: NULL
      connector: NULL
*************************** 2. row ***************************
     created_at: 2022-12-05 05:29:24
     updated_at: 2022-12-05 05:29:26
     deleted_at: NULL
        deleted: 0
             id: a81580cd-639a-40e3-8fc6-8e45060abd6e
      volume_id: fb4e520c-92d3-4db0-8d5c-e325639f2ab3
  attached_host: nova-maintenance-84965684b-sh4w9
  instance_uuid: a30253a2-6ca9-4e7c-9516-e92be85e08b5
     mountpoint: na
    attach_time: 2022-12-05 05:29:26
    detach_time: NULL
    attach_mode: ro
  attach_status: attached
connection_info: {"name": "volumes/fb4e520c-92d3-4db0-8d5c-e325639f2ab3", "hosts": ["192.66.30.2", "192.66.30.3", "192.66.30.4"], "ports": ["6789", "6789", "6789"], "cluster_name": "ceph", "auth_enabled": true, "auth_username": "cinder", "secret_type": "ceph", "secret_uuid": null, "volume_id": "fb4e520c-92d3-4db0-8d5c-e325639f2ab3", "discard": true, "qos_specs": null, "access_mode": "rw", "encrypted": false, "cacheable": false, "driver_volume_type": "rbd", "attachment_id": "a81580cd-639a-40e3-8fc6-8e45060abd6e"}
      connector: {"platform": "x86_64", "os_type": "linux", "ip": "10.232.0.100", "host": "nova-maintenance-84965684b-sh4w9", "multipath": false, "initiator": "iqn.1994-05.com.redhat:28dc54b72a45", "do_local_attach": false, "system uuid": "970f5018-4873-04b8-e611-b6cf9ef92b91", "mode": "ro"}
2 rows in set (0.01 sec)

ERROR: No query specified

MariaDB [cinder]>
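For completeness, one possible manual cleanup of the leftover row is sketched below. This is only an assumption on my part: it requires admin credentials, Cinder API microversion 3.27 or later, and you must first confirm which of the two attachments shown above is actually stale before deleting anything.

# Sketch of a manual cleanup using python-cinderclient.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from cinderclient import client

# Placeholder credentials; replace with real values for your cloud.
auth = v3.Password(auth_url='http://keystone:5000/v3',
                   username='admin', password='secret',
                   project_name='admin',
                   user_domain_name='Default',
                   project_domain_name='Default')
sess = session.Session(auth=auth)
cinder = client.Client('3.27', session=sess)  # attachments API needs >= 3.27

# Example id taken from the first row above -- verify it really is the
# stale attachment for this instance before deleting it.
cinder.attachments.delete('984dea0a-dd46-4552-b1ae-c10622a148db')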

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1998740

Title:
  refresh attachment error when bdm.attachment_id lost

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1998740/+subscriptions


