kernel-packages team mailing list archive

[Bug 1542941] Re: Regression: problems migrating recent wily/vivid Xen VMs due to memory hotplug fix


** Also affects: linux (Ubuntu Wily)
   Importance: Undecided
       Status: New

** Also affects: linux (Ubuntu Vivid)
   Importance: Undecided
       Status: New

** Changed in: linux (Ubuntu Vivid)
       Status: New => Fix Committed

** Changed in: linux (Ubuntu Wily)
       Status: New => Fix Committed

You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.

  Regression: problems migrating recent wily/vivid Xen VMs due to memory
  hotplug fix

Status in linux package in Ubuntu:
Status in linux source package in Vivid:
  Fix Committed
Status in linux source package in Wily:
  Fix Committed

Bug description:
  Commit 633d6f17cd91ad5bf2370265946f716e42d388c6 (aka
  38d30afb12140c0e3a446fe779dc9cd29548f313 in vivid) in Xen domU causes
  high resource requirements in the underlying target dom0 (migrating a
  64-bit domU involves a 1GB malloc in dom0, as well as a lot of
  unnecessary work).

  In my specific case, that 1GB malloc fails as my dom0s aren't big
  enough, causing all migrations to fail with the migrating VM.

  This is fixed in:

  "x86/xen/p2m: hint at the last populated P2M entry"

  With commit 633d6f17cd91ad5bf2370265946f716e42d388c6 (x86/xen: prepare
  p2m list for memory hotplug) the P2M may be sized to accommodate a
  much larger amount of memory than the domain currently has.

  When saving a domain, the toolstack must scan the entire P2M looking
  for populated pages.  This results in a performance regression due to
  the unnecessary scanning.

  Instead of reporting (via shared_info) the maximum possible size of
  the P2M, hint at the last PFN which might be populated.  This hint is
  increased as new leaves are added to the P2M (in the expectation that
  they will be used for populated entries).

  Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
  Cc: <stable@xxxxxxxxxxxxxxx> # 4.0+

To manage notifications about this bug go to: