yahoo-eng-team team mailing list archive

[Bug 1709985] [NEW] test_rebuild_server_in_error_state randomly times out waiting for rebuilding instance to be active

 

Public bug reported:

http://logs.openstack.org/12/491012/12/check/gate-tempest-dsvm-cells-
ubuntu-xenial/4aa3da8/console.html#_2017-08-10_18_58_35_158151

2017-08-10 18:58:35.158151 | tempest.api.compute.admin.test_servers.ServersAdminTestJSON.test_rebuild_server_in_error_state[id-682cb127-e5bb-4f53-87ce-cb9003604442]
2017-08-10 18:58:35.158207 | ---------------------------------------------------------------------------------------------------------------------------------------
2017-08-10 18:58:35.158221 | 
2017-08-10 18:58:35.158239 | Captured traceback:
2017-08-10 18:58:35.158258 | ~~~~~~~~~~~~~~~~~~~
2017-08-10 18:58:35.158281 |     Traceback (most recent call last):
2017-08-10 18:58:35.158323 |       File "tempest/api/compute/admin/test_servers.py", line 188, in test_rebuild_server_in_error_state
2017-08-10 18:58:35.158346 |         raise_on_error=False)
2017-08-10 18:58:35.158381 |       File "tempest/common/waiters.py", line 96, in wait_for_server_status
2017-08-10 18:58:35.158407 |         raise lib_exc.TimeoutException(message)
2017-08-10 18:58:35.158436 |     tempest.lib.exceptions.TimeoutException: Request timed out
2017-08-10 18:58:35.158525 |     Details: (ServersAdminTestJSON:test_rebuild_server_in_error_state) Server e57c5e75-9a8b-436d-aa53-a545e32c308a failed to reach ACTIVE status and task state "None" within the required time (196 s). Current status: REBUILD. Current task state: rebuild_spawning.

This mostly shows up in cells v1 jobs, which wouldn't be surprising if
we missed a state change during the instance sync to the top-level
cell, but it also happens occasionally in non-cells jobs. It could be a
duplicate of a bug where we miss, or never receive, a network change /
vif plug notification from neutron, so we just wait forever.
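For context, the failure mode is the polling pattern used by tempest's
waiter: loop until the server reports the target status with a cleared
task state, and raise TimeoutException otherwise. A minimal sketch of
that pattern (not tempest's actual implementation; `get_server` is a
hypothetical stand-in for the show-server API call):

```python
import time


class TimeoutException(Exception):
    """Raised when the server does not reach the expected state in time."""


def wait_for_server_status(get_server, status, timeout=196, interval=1):
    """Poll until the server reports ``status`` and a cleared task state.

    ``get_server`` is a callable returning a dict with 'status' and
    'task_state' keys (a stand-in for tempest's server show call).
    """
    start = time.time()
    server = get_server()
    while time.time() - start < timeout:
        # The waiter requires both the target status (e.g. ACTIVE) and
        # task state "None"; a task state stuck at 'rebuild_spawning'
        # keeps this loop spinning until the timeout fires.
        if server['status'] == status and server['task_state'] is None:
            return server
        time.sleep(interval)
        server = get_server()
    raise TimeoutException(
        'Server failed to reach %s status and task state "None" within '
        'the required time (%s s). Current status: %s. Current task '
        'state: %s.' % (status, timeout, server['status'],
                        server['task_state']))
```

This is why a missed out-of-band notification (e.g. the vif-plug event
from neutron, or a cells v1 state sync) surfaces as a timeout rather
than an error: nothing in the loop can distinguish "still working" from
"stuck".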

** Affects: nova
     Importance: Low
         Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1709985

Title:
  test_rebuild_server_in_error_state randomly times out waiting for
  rebuilding instance to be active

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  (Duplicate of the report above.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1709985/+subscriptions

