yahoo-eng-team team mailing list archive
Message #96589
[Bug 2126799] Re: VM evacuation causes corruption on VM staying on the host
** No longer affects: charm-cinder-purestorage
** No longer affects: charm-nova-compute
** Also affects: cinder
Importance: Undecided
Status: New
** No longer affects: cinder (Ubuntu)
** Also affects: nova
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2126799
Title:
VM evacuation causes corruption on VM staying on the host
Status in Cinder:
New
Status in OpenStack Compute (nova):
New
Bug description:
Hi,
If we force the nova-compute service down and evacuate a VM, the first
VM (lowest instance ID) gets corrupted, and the nova-compute service
breaks and requires a local service restart.
Steps to reproduce (a consolidated command sketch follows the list):
1. openstack compute service set --down --disable host1.openstack.example.com nova-compute
2. openstack server evacuate vm-1
3. openstack compute service set --enable --up host1.openstack.example.com nova-compute (will fail)
4.1 Log in to the compute node.
4.2 Restart nova-compute (systemctl restart nova-compute).
5. openstack compute service set --up host1.openstack.example.com nova-compute (works)
6. The first VM is broken; the nova-compute logs show it trying to remove the mpath device and failing.
7. Rebooting the VM and/or running fsck on the root volume normally fixes the issue.
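For convenience, the sequence above roughly condenses to the following commands (the hostname and VM name are the example placeholders from the steps; ssh/sudo access to the compute node is assumed):
  # Force the service down and evacuate the VM to another host
  openstack compute service set --down --disable host1.openstack.example.com nova-compute
  openstack server evacuate vm-1
  # Re-enabling the service fails until nova-compute is restarted locally
  openstack compute service set --enable --up host1.openstack.example.com nova-compute
  ssh host1.openstack.example.com sudo systemctl restart nova-compute
  openstack compute service set --up host1.openstack.example.com nova-compute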
Environment info:
Charmed OpenStack version 2024.1/stable
Cinder Pure Storage driver charm
OS: Jammy
The Pure Storage array is set in safe mode.
Errors in systemctl status nova-compute:
Oct 03 17:41:27 host1.openstac.example.com nova-compute[3608823]: 2025-10-03 17:41:27.782 3608823 WARNING nova.compute.manager [req-6c74a25e-1a65-45bd-b130-cbcd4955e551 req-a89ba327-0f1a-4684-9f56-e152ef8f6360 a477680e8a034b2d8fbe072e53fc871b ef5d0d02364e412c8213b6b7a114c494>
Oct 03 17:55:39 host1.openstac.example.com nova-compute[3608823]: 2025-10-03 17:55:39.468 3608823 WARNING nova.compute.manager [req-58736a0d-ebef-461d-a7d6-d7ae41362fd2 req-9e14566f-06c7-4ad3-ac0e-7e530f8de97c a477680e8a034b2d8fbe072e53fc871b ef5d0d02364e412c8213b6b7a114c494>
Oct 03 17:55:39 host1.openstac.example.com nova-compute[3608823]: 2025-10-03 17:55:39.855 3608823 WARNING nova.compute.manager [req-ee3365a7-c989-4ce4-a705-7a3d85411dbc req-50378032-4806-43f5-9941-f4473b3d3207 a477680e8a034b2d8fbe072e53fc871b ef5d0d02364e412c8213b6b7a114c494>
Oct 03 17:55:41 host1.openstac.example.com nova-compute[3608823]: 2025-10-03 17:55:41.508 3608823 WARNING nova.compute.manager [req-6aa42f02-8e0a-4c7e-b8ba-9a94aef12acb req-9428e314-6483-432e-a861-373804094828 a477680e8a034b2d8fbe072e53fc871b ef5d0d02364e412c8213b6b7a114c494>
Oct 03 17:55:43 host1.openstac.example.com nova-compute[3608823]: 2025-10-03 17:55:43.125 3608823 WARNING nova.compute.manager [req-6aa42f02-8e0a-4c7e-b8ba-9a94aef12acb req-9428e314-6483-432e-a861-373804094828 a477680e8a034b2d8fbe072e53fc871b ef5d0d02364e412c8213b6b7a114c494>
Oct 03 17:55:46 host1.openstac.example.com nova-compute[3608823]: 2025-10-03 17:55:46.376 3608823 WARNING nova.compute.manager [req-9457a653-aeff-439b-a6ae-72417bf1269f req-0ec9a58b-c951-4b5c-befe-92f83136092e a477680e8a034b2d8fbe072e53fc871b ef5d0d02364e412c8213b6b7a114c494>
Oct 03 17:56:24 host1.openstac.example.com nova-compute[3608823]: 2025-10-03 17:56:24.218 3608823 WARNING nova.compute.resource_tracker [None req-c53963de-a5bd-4512-91e8-c7a0eab6f5b1 - - - - - -] Instance 65069b01-cdda-4e7b-8c71-a3131573acce has been moved to another host vr>
Oct 03 17:57:25 host1.openstac.example.com nova-compute[3608823]: 2025-10-03 17:57:25.220 3608823 WARNING nova.compute.resource_tracker [None req-c53963de-a5bd-4512-91e8-c7a0eab6f5b1 - - - - - -] Instance 65069b01-cdda-4e7b-8c71-a3131573acce has been moved to another host vr>
Oct 03 17:57:25 host1.openstac.example.com nova-compute[3608823]: 2025-10-03 17:57:25.306 3608823 ERROR nova.scheduler.client.report [None req-c53963de-a5bd-4512-91e8-c7a0eab6f5b1 - - - - - -] [req-9a6ad235-3d19-4051-8490-81c06f394322] Failed to update traits to [COMPUTE_NET>
Oct 03 17:58:27 host1.openstac.example.com nova-compute[3608823]: 2025-10-03 17:58:27.217 3608823 WARNING nova.compute.resource_tracker [None req-c53963de-a5bd-4512-91e8-c7a0eab6f5b1 - - - - - -] Instance 65069b01-cdda-4e7b-8c71-a3131573acce has been moved to another host vr>
Errors in nova-compute.log:
2025-10-03 17:34:40.177 3659388 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
2025-10-03 17:57:25.306 3608823 ERROR nova.scheduler.client.report [None req-c53963de-a5bd-4512-91e8-c7a0eab6f5b1 - - - - - -] [req-9a6ad235-3d19-4051-8490-81c06f394322] Failed to update traits to [COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_ACCELERATORS,COMPUTE_RES
CUE_BFV,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_QXL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE,CO
MPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_VMVGA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AMD_SVM,HW_CPU_HYPERTHREADING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E10
00,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,
COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,
HW_CPU_X86_SSE4A,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SVM,HW_CPU_X86_SHA,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH
] for resource provider with UUID 01681782-536a-4c13-98e0-ee1fd4366806. Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n Resource provider's generation already changed. Please update the gen
eration and try again. ", "request_id": "req-9a6ad235-3d19-4051-8490-81c06f394322"}]}
2025-10-03 18:00:09.641 762056 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
2025-10-03 18:01:10.175 761876 WARNING os_brick.exception [None req-d07b6c0b-4bca-44f9-a91d-93bee1730942 - - - - - -] Flushing mpathk failed: oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
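For reference, the stale multipath device mentioned in step 6 can be inspected on the source host with the standard multipath tools; this is only a diagnostic sketch (mpathk is the map name taken from the log line above), not a confirmed workaround:
  # List the remaining multipath maps and their paths on host1
  sudo multipath -ll
  # Show which device-mapper devices still hold the map open
  sudo dmsetup ls --tree
  # Manually attempt the same flush that os-brick fails on
  sudo multipath -f mpathk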
To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/2126799/+subscriptions