Message #56020
[Bug 1620761] [NEW] test_create_second_image_when_first_image_is_being_saved intermittently times out in teardown in cells v1 job
Public bug reported:
I've been noticing this failure more often lately:
2016-09-02 17:06:30.570025 | tempest.api.compute.images.test_images_oneserver_negative.ImagesOneServerNegativeTestJSON.test_create_second_image_when_first_image_is_being_saved[id-0460efcf-ee88-4f94-acef-1bf658695456,negative]
2016-09-02 17:06:30.570109 | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2016-09-02 17:06:30.570116 |
2016-09-02 17:06:30.570128 | Captured traceback:
2016-09-02 17:06:30.570140 | ~~~~~~~~~~~~~~~~~~~
2016-09-02 17:06:30.570158 | Traceback (most recent call last):
2016-09-02 17:06:30.570194 | File "tempest/api/compute/images/test_images_oneserver_negative.py", line 38, in tearDown
2016-09-02 17:06:30.570211 | self.server_check_teardown()
2016-09-02 17:06:30.570241 | File "tempest/api/compute/base.py", line 164, in server_check_teardown
2016-09-02 17:06:30.570267 | cls.server_id, 'ACTIVE')
2016-09-02 17:06:30.570295 | File "tempest/common/waiters.py", line 95, in wait_for_server_status
2016-09-02 17:06:30.570315 | raise exceptions.TimeoutException(message)
2016-09-02 17:06:30.570337 | tempest.exceptions.TimeoutException: Request timed out
2016-09-02 17:06:30.570429 | Details: (ImagesOneServerNegativeTestJSON:tearDown) Server 051f6d7d-15b3-459c-a372-902c5da15b40 failed to reach ACTIVE status and task state "None" within the required time (196 s). Current status: ACTIVE. Current task state: image_snapshot.
There are no clear failures in the nova logs from what I can see. I'm also
not sure whether we regressed something that is making this failure show up
more often in the cells v1 job, but cells v1 is inherently racy, so I
wouldn't be surprised.
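For context on why the wait never completes, here is a minimal, hand-written sketch of the kind of polling loop tempest's waiters.wait_for_server_status performs (field names, defaults, and error handling are simplified and this is not the real implementation): the teardown asks for status ACTIVE *and* task state None, so a server that is still busy with an image_snapshot task keeps the loop spinning until the timeout fires, even though its status already reads ACTIVE.

import time

class TimeoutException(Exception):
    pass

def wait_for_server_status(client, server_id, status,
                           build_timeout=196, build_interval=1):
    """Poll the server until it reaches `status` with no task state."""
    start = time.time()
    while True:
        body = client.show_server(server_id)['server']
        current_status = body['status']
        # The check that bites here: the wait also requires the task state
        # to be cleared, and a long-running snapshot leaves it set to
        # "image_snapshot".
        task_state = body.get('OS-EXT-STS:task_state')
        if current_status == status and task_state is None:
            return
        if time.time() - start >= build_timeout:
            raise TimeoutException(
                'Server %s failed to reach %s status and task state "None" '
                'within the required time (%s s). Current status: %s. '
                'Current task state: %s.' % (server_id, status,
                                             build_timeout, current_status,
                                             task_state))
        time.sleep(build_interval)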
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Details%3A%20(ImagesOneServerNegativeTestJSON%3AtearDown)%20Server%5C%22%20AND%20message%3A%5C%22failed%20to%20reach%20ACTIVE%20status%20and%20task%20state%20%5C%5C%5C%22None%5C%5C%5C%22%20within%20the%20required%20time%5C%22%20AND%20message%3A%5C%22Current%20status%3A%20ACTIVE.%20Current%20task%20state%3A%20image_snapshot.%5C%22%20AND%20build_name%3A%5C%22gate-tempest-dsvm-cells%5C%22&from=7d
** Affects: nova
Importance: Undecided
Status: Confirmed
** Tags: cells
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620761