[Bug 1994002] Re: [SRU] migration was active, but no RAM info was set
This bug was fixed in the package qemu - 1:6.2+dfsg-2ubuntu6.7
---------------
qemu (1:6.2+dfsg-2ubuntu6.7) jammy; urgency=medium
[ Brett Milford ]
* d/p/u/lp1994002-migration-Read-state-once.patch: Fix for libvirt
error 'migration was active, but no RAM info was set' (LP: #1994002)
[ Mauricio Faria de Oliveira ]
* d/p/u/lp2009048-vfio_map_dma_einval_amd_iommu_1tb.patch: Add hint
to VFIO_MAP_DMA error on AMD IOMMU for VMs with ~1TB+ RAM (LP: #2009048)
* d/rules: move "Disable LTO on non-amd64" before buildflags.mk on Jammy.
[ Michal Maloszewski ]
* d/rules: Disable LTO on non-amd64 architectures to prevent QEMU
coroutines from failing (LP: #1921664)
-- Mauricio Faria de Oliveira <mfo@xxxxxxxxxxxxx> Mon, 06 Mar 2023 17:00:46 -0300
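For context on the lp1994002 patch above ("migration-Read-state-once"):
the underlying problem is a query function that reads the migration
state more than once while the migration thread may be changing it, so
the reported status and the populated RAM statistics can come from
different moments. A minimal, self-contained sketch of the pattern,
using simplified stand-in names rather than QEMU's real types and
functions:

~~~
#include <stdatomic.h>
#include <stdbool.h>

/* Toy stand-ins for QEMU's migration state/info structures. */
enum { STATUS_SETUP, STATUS_ACTIVE };

typedef struct {
    _Atomic int state;   /* advanced by the migration thread */
} MigrationState;

typedef struct {
    int  status;         /* what the query reports */
    bool has_ram;        /* were RAM statistics attached? */
} MigrationInfo;

/* Racy: the state is read twice.  If it moves from SETUP to ACTIVE
 * between the reads, the caller sees status == ACTIVE with no RAM
 * info -- exactly "migration was active, but no RAM info was set". */
void fill_info_racy(MigrationState *s, MigrationInfo *info)
{
    if (atomic_load(&s->state) == STATUS_ACTIVE) {  /* first read */
        info->has_ram = true;  /* RAM stats only attached if active */
    }
    info->status = atomic_load(&s->state);          /* second read */
}

/* Fixed, in the spirit of "read state once": a single snapshot
 * drives both the RAM-info decision and the reported status. */
void fill_info_fixed(MigrationState *s, MigrationInfo *info)
{
    int state = atomic_load(&s->state);             /* read once */
    if (state == STATUS_ACTIVE) {
        info->has_ram = true;
    }
    info->status = state;
}
~~~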
** Changed in: qemu (Ubuntu Jammy)
Status: Fix Committed => Fix Released
** Changed in: qemu (Ubuntu Focal)
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of SE
("STS") Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1994002
Title:
[SRU] migration was active, but no RAM info was set
Status in Ubuntu Cloud Archive:
New
Status in Ubuntu Cloud Archive ussuri series:
New
Status in qemu package in Ubuntu:
Fix Released
Status in qemu source package in Bionic:
Fix Released
Status in qemu source package in Focal:
Fix Released
Status in qemu source package in Jammy:
Fix Released
Status in qemu source package in Kinetic:
Fix Released
Bug description:
[Impact]
* While live-migrating many instances concurrently, libvirt sometimes
returns `internal error: migration was active, but no RAM info was
set:`
* Effects of this bug are mostly observed in large-scale clusters
with a lot of live-migration activity.
* Has second-order effects for consumers of the migration monitor,
such as libvirt and OpenStack (see the sketch below).
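This is the failure mode those consumers hit: the monitor reports an
active migration without RAM statistics attached, and the caller
treats that combination as an internal error. A hedged sketch of such
a consumer-side check, reusing the toy types from the sketch above
(libvirt's real check lives in its QEMU monitor code; the function
name and return convention here are illustrative):

~~~
#include <stdio.h>

/* Consumer-side validation, as a libvirt-like client might do it.
 * Uses the toy MigrationInfo from the earlier sketch; this is a
 * stand-in, not libvirt's actual API. */
int check_migration_stats(const MigrationInfo *info)
{
    if (info->status == STATUS_ACTIVE && !info->has_ram) {
        fprintf(stderr, "internal error: migration was active, "
                        "but no RAM info was set\n");
        return -1;  /* caller fails/aborts the migration job */
    }
    return 0;
}
~~~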
[Test Case]
A synthetic reproducer with GDB is in comment #21.
Steps to Reproduce:
1. live evacuate a compute
2. live migration of one or more instances fails with the above error
N.B. Due to the nature of this bug, it is difficult to reproduce consistently.
In an environment where it has been observed, it is estimated to occur in approximately 1 in 1000 migrations.
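The GDB reproducer in comment #21 presumably works by pausing QEMU
between the two state reads, widening a window that is otherwise
tiny; that tininess is consistent with the roughly 1-in-1000 rate
above. A toy driver over the racy filler from the earlier sketch
(stand-in code, not QEMU; build with -pthread) makes the window
observable:

~~~
#include <pthread.h>
#include <stdio.h>

/* Flip the toy migration state forever, standing in for the
 * migration thread making progress during the query. */
static void *mutator(void *arg)
{
    MigrationState *s = arg;
    for (;;) {
        atomic_store(&s->state, STATUS_SETUP);
        atomic_store(&s->state, STATUS_ACTIVE);
    }
    return NULL;
}

int main(void)
{
    MigrationState s = { .state = STATUS_SETUP };
    pthread_t t;
    pthread_create(&t, NULL, mutator, &s);

    long hits = 0, iters = 1000000;
    for (long i = 0; i < iters; i++) {
        MigrationInfo info = { 0 };
        fill_info_racy(&s, &info);
        /* The bug's signature: "active" with no RAM info attached. */
        if (info.status == STATUS_ACTIVE && !info.has_ram) {
            hits++;
        }
    }
    printf("inconsistent snapshots: %ld of %ld queries\n", hits, iters);
    return 0;
}
~~~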
[Where problems could occur]
* In the event of a regression, the migration monitor may report an inconsistent state.
[Original Bug Description]
While live-migrating many instances concurrently, libvirt sometimes returns internal error: migration was active, but no RAM info was set:
~~~
2022-03-30 06:08:37.197 7 WARNING nova.virt.libvirt.driver [req-5c3296cf-88ee-4af6-ae6a-ddba99935e23 - - - - -] [instance: af339c99-1182-4489-b15c-21e52f50f724] Error monitoring migration: internal error: migration was active, but no RAM info was set: libvirt.libvirtError: internal error: migration was active, but no RAM info was set
~~~
From upstream bug: https://bugzilla.redhat.com/show_bug.cgi?id=2074205
[Other Information]
Related bug: https://bugs.launchpad.net/nova/+bug/1982284
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1994002/+subscriptions