
yahoo-eng-team team mailing list archive

[Bug 1806064] Re: Volume remains in attaching/reserved status, if the instance is deleted after TooManyInstances exception in nova-conductor


It should be relatively easy to write a functional regression test
similar to
https://review.openstack.org/#/c/545123/5/nova/tests/functional/wsgi/test_servers.py
but for this scenario.
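
A rough sketch of what that test could look like is below; it is only a
sketch, and the class, fixture and helper names (_IntegratedTestBase,
CinderFixture, _wait_for_state_change, the boot helper) are assumptions
about nova's functional test framework that would need checking against
the actual tree:

    # Hypothetical regression test sketch; class, fixture and helper names
    # are assumptions modeled on nova's functional tests, not verified code.
    from nova.tests import fixtures as nova_fixtures
    from nova.tests.functional import integrated_helpers

    class DeleteAfterConductorQuotaFail(
            integrated_helpers._IntegratedTestBase):   # assumed base class

        def setUp(self):
            super(DeleteAfterConductorQuotaFail, self).setUp()
            # Stubbed-out cinder, so reservations can be inspected later.
            self.cinder = self.useFixture(
                nova_fixtures.CinderFixture(self))     # assumed fixture

        def test_delete_after_recheck_fail_unreserves_volume(self):
            self.flags(recheck_quota=True, group='quota')
            # Force the conductor quota recheck to raise TooManyInstances,
            # e.g. by stubbing the recheck helper, then boot from a volume.
            volume_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL
            server = self._boot_server_from_volume(volume_id)  # hypothetical
            self._wait_for_state_change(self.api, server, 'ERROR')
            self.api.delete_server(server['id'])
            # The delete must clean up the attachment/reservation.
            self.assertNotIn(volume_id,
                             self.cinder.reserved_volumes)     # assumed attr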

** Changed in: nova
       Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Medium

** Also affects: nova/queens
   Importance: Undecided
       Status: New

** Also affects: nova/pike
   Importance: Undecided
       Status: New

** Also affects: nova/rocky
   Importance: Undecided
       Status: New

** Changed in: nova/pike
       Status: New => Triaged

** Changed in: nova/queens
       Status: New => Triaged

** Changed in: nova/queens
   Importance: Undecided => Medium

** Changed in: nova/rocky
       Status: New => Triaged

** Changed in: nova/rocky
   Importance: Undecided => Medium

** Changed in: nova/pike
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1806064

Title:
  Volume remains in attaching/reserved status, if the instance is
  deleted after TooManyInstances exception in nova-conductor

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) pike series:
  Triaged
Status in OpenStack Compute (nova) queens series:
  Triaged
Status in OpenStack Compute (nova) rocky series:
  Triaged

Bug description:
  If a number of instances are booted from volumes in parallel and some
  of the build requests fail in nova-conductor with a TooManyInstances
  exception [1] (raised because quota.recheck_quota=True is set in
  nova.conf), those instances end up in the ERROR state.
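
  To make the failure mode concrete, here is a small standalone model of
  the recheck race (plain Python 3, not nova code): both requests pass the
  initial quota check before either has created anything, so the recheck
  that runs after creation has to fail at least one of them:

      # Standalone model of the quota-recheck race; not nova code.
      import threading

      limit = 1                        # e.g. a VCPU quota of 1
      created = []                     # stands in for instance records
      barrier = threading.Barrier(2)   # widens the race window on purpose
      lock = threading.Lock()
      results = {}

      def boot(name):
          ok = len(created) + 1 <= limit  # initial check: both see usage 0
          barrier.wait()                  # both check before either creates
          if not ok:
              results[name] = 'rejected up front'
              return
          created.append(name)            # the resource is consumed
          with lock:                      # recheck, as nova-conductor does
              if len(created) > limit:    # with quota.recheck_quota=True
                  created.remove(name)    # nova instead leaves the instance
                  results[name] = 'TooManyInstances'  # in the ERROR state
                  return
          results[name] = 'ACTIVE'

      threads = [threading.Thread(target=boot, args=(n,)) for n in 'ab']
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      # Prints one ACTIVE and one TooManyInstances; in nova both builds can
      # fail, since the ERROR instance still counts until it is deleted.
      print(results)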

  If we delete these instances, their volumes remain in the
  attaching (Pike) / reserved (Queens) state.

  This bug is related to https://bugs.launchpad.net/nova/+bug/1404867

  Steps to reproduce (a scripted sketch follows the list):

  0. Set quota.recheck_quota=True, start several nova-conductors.

  1. Set VCPU quota limits for the project to 1.

  2. Create two instances with 1 VCPU in parallel.

  3. One of these instances will be created and the other will end up in
  the ERROR state, or both of them will end up in the ERROR state.

  4. Delete instances.

  5. Volumes from the errored instances will not be available: they
  cannot be attached, and they cannot be deleted without permission
  granted by the volume:force_delete cinder policy.
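
  A scripted version of the steps above, sketched with openstacksdk; the
  cloud name, image and flavor are placeholders for your environment,
  admin credentials are assumed for the quota call, and
  quota.recheck_quota=True must already be set on the nova side:

      # Reproduction sketch using openstacksdk; names are placeholders.
      from concurrent.futures import ThreadPoolExecutor

      import openstack

      conn = openstack.connect(cloud='devstack-admin')  # placeholder

      # Step 1: cap the project at a single VCPU.
      conn.set_compute_quotas(conn.current_project_id, cores=1)

      # Step 2: two bootable volumes, then boot both servers in parallel.
      vols = [conn.create_volume(size=1, image='cirros', wait=True)
              for _ in range(2)]

      def boot(vol):
          try:
              return conn.create_server('bfv-race-' + vol.id[:8],
                                        flavor='m1.tiny',
                                        boot_volume=vol.id, wait=True)
          except Exception as exc:  # a build that fails the quota recheck
              return exc            # surfaces here as an ERROR server

      with ThreadPoolExecutor(max_workers=2) as pool:
          results = list(pool.map(boot, vols))

      # Step 4: delete everything, including the ERROR instance(s).
      for server in conn.list_servers():
          conn.delete_server(server.id, wait=True)

      # Step 5: a volume stuck in 'attaching' (Pike) or 'reserved'
      # (Queens) reproduces the bug.
      for vol in vols:
          print(vol.id, conn.get_volume(vol.id).status)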

  This bug exists at least in Pike (7ff1b28) and Queens (c5fe051).

  ---
  [1] https://github.com/openstack/nova/blob/stable/rocky/nova/conductor/manager.py#L1308

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1806064/+subscriptions

