yahoo-eng-team mailing list archive: Message #91273
[Bug 2006953] [NEW] [stable/wallaby] test_established_tcp_session_after_re_attachinging_sg is unstable on ML2/OVS iptables_hybrid job
Public bug reported:
In recent weeks this test has started to fail regularly on stable/wallaby (not 100% of the time; some backports are still passing):
neutron_tempest_plugin.scenario.test_security_groups.NetworkSecGroupTest.test_established_tcp_session_after_re_attachinging_sg
Most of the time this was seen in the neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby job.
Two recent backports affected by the issue:
https://review.opendev.org/c/openstack/neutron/+/871759
https://review.opendev.org/c/openstack/neutron/+/868087
Some sample logs:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_26b/871759/2/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby/26b64ec/testr_results.html
https://f450bef156b424c8c132-a0541882d2023eca9a1cc07087449de0.ssl.cf1.rackcdn.com/868087/3/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby/55a6ead/testr_results.html
Traceback (most recent call last):
  File "/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py", line 80, in wait_until_true
    eventlet.sleep(sleep)
  File "/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/greenthread.py", line 36, in sleep
    hub.switch()
  File "/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/hubs/hub.py", line 313, in switch
    return self.greenlet.switch()
eventlet.timeout.Timeout: 10 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_security_groups.py", line 320, in test_established_tcp_session_after_re_attachinging_sg
    con.test_connection(should_pass=False)
  File "/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py", line 202, in test_connection
    wait_until_true(
  File "/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py", line 85, in wait_until_true
    raise WaitTimeout("Timed out after %d seconds" % timeout)
neutron_tempest_plugin.common.utils.WaitTimeout: Timed out after 10 seconds
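For clarity on what the timeout means: the failing call is con.test_connection(should_pass=False), i.e. at this point the test expects the established TCP session to be blocked, and wait_until_true() gives up after 10 seconds because traffic is apparently still flowing. wait_until_true() itself is an eventlet-based polling helper, roughly along the lines of the following sketch (based only on the call sites visible in the traceback, not a verbatim copy of neutron_tempest_plugin/common/utils.py; defaults are illustrative):

# Simplified sketch of the polling helper seen in the traceback above; based
# only on the call sites visible there, not a verbatim copy of
# neutron_tempest_plugin/common/utils.py (defaults are illustrative).
import eventlet


class WaitTimeout(Exception):
    """Raised when the polled condition does not become true in time."""


def wait_until_true(predicate, timeout=60, sleep=1):
    """Poll predicate() every `sleep` seconds until it returns True,
    raising WaitTimeout once `timeout` seconds have passed."""
    try:
        with eventlet.Timeout(timeout):
            while not predicate():
                eventlet.sleep(sleep)
    except eventlet.Timeout:
        raise WaitTimeout("Timed out after %d seconds" % timeout)


# In the failing test the condition being polled is, in effect, "the
# established connection is blocked now", with a 10 second timeout; the
# WaitTimeout therefore means traffic was still getting through.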
I wonder if this is similar to https://bugs.launchpad.net/neutron/+bug/1936911, where the same test was unstable with the Linuxbridge backend.
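Since with the iptables_hybrid driver the fate of an already-established session largely comes down to conntrack state, one thing that might help when reproducing locally is to check on the compute node whether the conntrack entry for the guest's TCP session survives the security-group re-attach. A small debugging aid I would try (purely illustrative, not part of the bug report; it assumes the conntrack-tools CLI is installed and that this runs as root on the hypervisor):

# Purely a debugging aid: list conntrack entries mentioning the guest's fixed
# IP on the compute node, so the state can be compared before and after the
# security group is re-attached.  Assumes the conntrack-tools CLI is installed
# and this is run as root on the hypervisor.
import subprocess


def conntrack_entries_for(ip_address):
    result = subprocess.run(
        ["conntrack", "-L", "-p", "tcp"],
        capture_output=True, text=True, check=False,
    )
    return [line for line in result.stdout.splitlines() if ip_address in line]


if __name__ == "__main__":
    # "10.1.0.5" is a placeholder for the server VM's fixed IP.
    for entry in conntrack_entries_for("10.1.0.5"):
        print(entry)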
** Affects: neutron
Importance: Undecided
Status: New
** Tags: gate-failure stable
https://bugs.launchpad.net/bugs/2006953