
yahoo-eng-team team mailing list archive

[Bug 1869050] Re: migration of anti-affinity server fails due to stale scheduler instance info

 

The problem cannot be reproduced on stable/queens. The rocky patch [1]
changed the logic of the affinity filter to use the host_state. As the
host_state can be stale, this bug has existed since rocky. On queens,
however, the filter queries instance.host from the database, and that
information is up to date after the migration, so the bug is not
reproducible there.


[1] https://review.opendev.org/#/c/571166/27/nova/scheduler/filters/affinity_filter.py@101
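
For illustration, here is a minimal, self-contained Python sketch of why
the two approaches can disagree right after a confirmed migration. The
classes and names below are toy stand-ins, not nova's actual HostState or
filter code:

    # Toy stand-ins, not nova's real classes.
    class Instance(object):
        def __init__(self, uuid, host):
            self.uuid = uuid
            self.host = host          # authoritative location, kept in the DB

    class HostState(object):
        def __init__(self, host, cached_instance_uuids):
            self.host = host
            # in-memory view pushed by the compute node; can lag behind the DB
            self.instances = set(cached_instance_uuids)

    def host_passes_queens_style(host_state, group_members, db_instances):
        # queens: read instance.host from the DB, which is already updated
        # when the migration is confirmed
        hosts_in_group = set(inst.host for inst in db_instances
                             if inst.uuid in group_members)
        return host_state.host not in hosts_in_group

    def host_passes_rocky_style(host_state, group_members):
        # rocky and later: trust the cached host_state, which is only
        # refreshed by explicit updates or the periodic sync and may be stale
        return not (group_members & host_state.instances)

    # server2 was migrated off compute-2, but compute-2's cached info still
    # lists it, so only the rocky-style check rejects the now-free host.
    group = set(['server2-uuid'])
    db = [Instance('server2-uuid', 'compute-3')]
    stale = HostState('compute-2', ['server2-uuid'])
    print(host_passes_queens_style(stale, group, db))  # True  -> host accepted
    print(host_passes_rocky_style(stale, group))       # False -> NoValidHost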

** Changed in: nova/queens
       Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1869050

Title:
  migration of anti-affinity server fails due to stale scheduler
  instance info

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Triaged
Status in OpenStack Compute (nova) queens series:
  Invalid
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in OpenStack Compute (nova) train series:
  Fix Committed
Status in OpenStack Compute (nova) ussuri series:
  Fix Committed

Bug description:
  Description
  ===========

  
  Steps to reproduce
  ==================
  Have a deployment with 3 compute nodes

  * make sure that the deployment is configured with tracks_instance_changes=True (True is the default)
  * create a server group with anti-affinity policy
  * boot server1 into the group
  * boot server2 into the group
  * migrate server2
  * confirm the migration
  * boot server3

  Make sure that between the last two steps no run of the periodic
  _sync_scheduler_instance_info task happens on the compute node that
  hosted server2 before the migration. This can be achieved by performing
  the last two steps right after each other without waiting too long, as
  the interval of that periodic task (scheduler_instance_sync_interval)
  defaults to 120 seconds.
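
  To make the timing constraint concrete, here is a small, self-contained
  Python model of that window. The names are hypothetical; only the 120
  second default is taken from this report:

    import time

    SYNC_INTERVAL = 120  # scheduler_instance_sync_interval default (seconds)

    class FakeScheduler(object):
        def __init__(self):
            self.instance_info = {}          # host -> set of instance uuids

    class FakeCompute(object):
        def __init__(self, host, scheduler):
            self.host = host
            self.scheduler = scheduler
            self.instances = set()
            self.last_sync = time.monotonic()

        def maybe_sync(self):
            # stand-in for the periodic _sync_scheduler_instance_info task:
            # it only pushes the full instance list once per interval, so
            # changes made in between stay invisible to the scheduler
            now = time.monotonic()
            if now - self.last_sync >= SYNC_INTERVAL:
                self.scheduler.instance_info[self.host] = set(self.instances)
                self.last_sync = now

    sched = FakeScheduler()
    src = FakeCompute('compute-2', sched)
    src.instances.add('server2-uuid')
    sched.instance_info['compute-2'] = set(['server2-uuid'])  # synced earlier

    src.instances.discard('server2-uuid')     # migration confirmed on source
    src.maybe_sync()                          # too soon, nothing is pushed
    print(sched.instance_info['compute-2'])   # still lists server2 -> stale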

  Expected result
  ===============
  server3 is booted on the host that server2 was migrated away from

  Actual result
  =============
  server3 cannot be booted (NoValidHost)

  Triage
  ======

  The confirm resize call on the source compute does not notify the
  scheduler that the instance has been removed from that host. This
  leaves the scheduler's instance info stale and causes the subsequent
  scheduling error.
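
  As a rough sketch of the kind of update that is missing (toy objects and
  names, not nova's actual RPC code or the eventual fix), confirming the
  resize on the source host would also have to drop the instance from that
  host's cached scheduler info instead of waiting for the periodic sync:

    class SchedulerInstanceInfo(object):
        def __init__(self):
            # scheduler-side cache: host -> set of instance uuids
            self.by_host = {'compute-2': set(['server2-uuid'])}

        def delete_instance_info(self, host, instance_uuid):
            # targeted, immediate update instead of the slow periodic sync
            self.by_host.setdefault(host, set()).discard(instance_uuid)

    def confirm_resize_on_source_buggy(sched, host, instance_uuid):
        # frees local resources but never tells the scheduler -> stale cache
        pass

    def confirm_resize_on_source_fixed(sched, host, instance_uuid):
        confirm_resize_on_source_buggy(sched, host, instance_uuid)
        sched.delete_instance_info(host, instance_uuid)

    sched = SchedulerInstanceInfo()
    confirm_resize_on_source_buggy(sched, 'compute-2', 'server2-uuid')
    print(sched.by_host['compute-2'])   # {'server2-uuid'} -> host rejected

    confirm_resize_on_source_fixed(sched, 'compute-2', 'server2-uuid')
    print(sched.by_host['compute-2'])   # set() -> host usable for server3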

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1869050/+subscriptions

