
yahoo-eng-team team mailing list archive

[Bug 2007864] [NEW] Error nova.exception.VolumeNotFound appears when trying to extend attached volume

 

Public bug reported:

OpenStack version: Xena. The same behaviour likely exists in master, as
the code looks the same.

The error appears when we try to extend a volume after it has been
migrated to a Ceph backend. The migration is performed with "change
volume type" (in Horizon) or "cinder retype" (in the CLI) on a Cinder
volume attached to a running VM.

The main condition is that os-vol-mig-status-attr:name_id is set for the
volume. This happens automatically during migration to a Ceph backend
(or to an NFS backend, at least with the standard NFS driver,
cinder.volume.drivers.nfs.NfsDriver).
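
For reference, the retype and the subsequent check can also be done
programmatically; this is only a minimal sketch with python-cinderclient,
assuming admin credentials (the endpoint, user and the 'ceph-ssd' volume
type below are placeholders, not values from this report):

# Sketch only: credentials, endpoint and the 'ceph-ssd' volume type are
# placeholders, not values from this report.
from keystoneauth1 import loading, session
from cinderclient import client as cinder_client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://keystone:5000/v3', username='admin', password='secret',
    project_name='admin', user_domain_name='Default',
    project_domain_name='Default')
cinder = cinder_client.Client('3', session=session.Session(auth=auth))

vol = cinder.volumes.get('5a11fb14-6bc5-4c3f-914c-567012c74b01')
# Equivalent of "cinder retype": migrate the attached volume to the
# destination Ceph backend's volume type.
cinder.volumes.retype(vol, 'ceph-ssd', 'on-demand')

# After the migration finishes (in practice, poll until it completes), the
# admin-only attribute holds the ID of the renamed backing volume
# (584eba48-... in this report).
vol = cinder.volumes.get(vol.id)
print(getattr(vol, 'os-vol-mig-status-attr:name_id', None))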


Steps to reproduce:

1. Create a VM.
2. Create an additional Cinder volume on the source backend.
(I tried NFS, but I think it could be any backend different from the destination Ceph backend. It could probably even be another backend on the same Ceph cluster, say another pool with a different disk type: e.g. if the destination Ceph backend is on an SSD pool, the source backend could be on the same cluster but on an HDD pool. The main point is that after migration to the destination Ceph backend the os-vol-mig-status-attr:name_id property is set.)
3. Attach the volume created in step 2 to the VM.
4. Issue "change volume type" (in Horizon) or "cinder retype" (in the CLI) and provide the volume type of the destination Ceph backend.
5. After the migration has finished, check that os-vol-mig-status-attr:name_id is set on the volume.
6. Try to extend the volume created in step 2 using the "Extend Volume" context menu in Horizon. The volume will be extended, but the following error message will appear in nova-compute.log on the hypervisor hosting the VM from step 1:

2023-02-20 10:26:27.581 6 INFO nova.compute.manager [req-8eb07f8e-2d68-4521-93cc-94f40007f134 884a8cceeadc4bb1b47455fc9d05f6ac 3e3e26b5910e482a97d188c840114d8f - default default] [instance: 169ab8a2-ad38-49ab-b8f4-14217b709fbb] Cinder extended volume 5a11fb14-6bc5-4c3f-914c-567012c74b01; extending it to detect new size
2023-02-20 10:26:27.633 6 WARNING nova.compute.manager [req-8eb07f8e-2d68-4521-93cc-94f40007f134 884a8cceeadc4bb1b47455fc9d05f6ac 3e3e26b5910e482a97d188c840114d8f - default default] [instance: 169ab8a2-ad38-49ab-b8f4-14217b709fbb] Extend volume failed, volume_id=5a11fb14-6bc5-4c3f-914c-567012c74b01, reason: Volume 584eba48-0baf-4a80-8b8d-ddd6ac6266e6 could not be found.: nova.exception.VolumeNotFound: Volume 584eba48-0baf-4a80-8b8d-ddd6ac6266e6 could not be found.
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server [req-8eb07f8e-2d68-4521-93cc-94f40007f134 884a8cceeadc4bb1b47455fc9d05f6ac 3e3e26b5910e482a97d188c840114d8f - default default] Exception during message handling: nova.exception.VolumeNotFound: Volume 584eba48-0baf-4a80-8b8d-ddd6ac6266e6 could not be found.
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/nova/exception_wrapper.py", line 71, in wrapped
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     self.force_reraise()
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     raise self.value
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/nova/exception_wrapper.py", line 63, in wrapped
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/nova/compute/manager.py", line 10581, in external_instance_event
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     self.extend_volume(context, instance, event.tag)
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/nova/compute/utils.py", line 1433, in decorated_function
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/nova/compute/manager.py", line 211, in decorated_function
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     self.force_reraise()
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     raise self.value
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/nova/compute/manager.py", line 200, in decorated_function
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/nova/compute/manager.py", line 10438, in extend_volume
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     self.driver.extend_volume(context, connection_info, instance,
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.8/site-packages/nova/virt/libvirt/driver.py", line 2705, in extend_volume
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server     raise exception.VolumeNotFound(volume_id=volume_id)
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server nova.exception.VolumeNotFound: Volume 584eba48-0baf-4a80-8b8d-ddd6ac6266e6 could not be found.
2023-02-20 10:26:27.654 6 ERROR oslo_messaging.rpc.server

Looks like the error happens here:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2822
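
For illustration, the lookup at that line behaves roughly like the
following self-contained sketch (paraphrased, with hypothetical helper
names, not a verbatim copy of the nova code):

# Hypothetical, simplified stand-in for the disk lookup in
# LibvirtDriver.extend_volume(); GuestDisk and find_disk are illustrative
# names, not nova's.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GuestDisk:
    serial: str        # the <serial> element of the libvirt disk definition
    target_dev: str

def find_disk(disks: List[GuestDisk], volume_id: str) -> Optional[GuestDisk]:
    # nova matches the guest disk whose serial equals the volume_id derived
    # from connection_info, and raises VolumeNotFound when nothing matches.
    return next((d for d in disks if d.serial == volume_id), None)

# The guest still reports the original serial, while connection_info now
# carries the migrated name_id, so the lookup returns None:
disks = [GuestDisk(serial='5a11fb14-6bc5-4c3f-914c-567012c74b01',
                   target_dev='vdb')]
print(find_disk(disks, '584eba48-0baf-4a80-8b8d-ddd6ac6266e6'))  # -> None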

The migrated volume has the following connection_info:

{'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-584eba48-0baf-4a80-8b8d-ddd6ac6266e6', 
 'hosts': ['x.x.x.x', 'x.x.x.x', 'x.x.x.x'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'cinder', 
 'secret_type': 'ceph', 'secret_uuid': 'xxx', 'volume_id': '584eba48-0baf-4a80-8b8d-ddd6ac6266e6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 
 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '169ab8a2-ad38-49ab-b8f4-14217b709fbb', 'attached_at': '', 'detached_at': '', 'volume_id': '584eba48-0baf-4a80-8b8d-ddd6ac6266e6', 
 'serial': '5a11fb14-6bc5-4c3f-914c-567012c74b01'}

The volume serial number is the same as it was before the migration (it
is also unchanged in the libvirt XML), but the volume_id in
connection_info is a new one, equal to the ID in the volume's
os-vol-mig-status-attr:name_id field.

So the serial number of the volume returned by guest.get_all_disks()
doesn't match volume_id, which causes
exception.VolumeNotFound(volume_id=volume_id) to be raised, because the
disk variable ends up set to None.
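
Purely as an illustration of one possible direction (not a proposed
patch): the guest disk could be matched against every ID present in
connection_info, including the 'serial' field, which still holds the
original volume ID:

# Illustrative sketch only, reusing the connection_info layout shown above.
from typing import Set

def candidate_ids(connection_info: dict) -> Set[str]:
    # Collect every ID that could appear as the guest disk serial.
    data = connection_info.get('data', {})
    ids = {connection_info.get('serial'),      # original, user-facing volume ID
           connection_info.get('volume_id'),   # may already be the name_id
           data.get('volume_id')}
    return {i for i in ids if i}

def find_disk(disks, connection_info):
    wanted = candidate_ids(connection_info)
    return next((d for d in disks if d.serial in wanted), None)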

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2007864

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2007864/+subscriptions


