[Bug 1798472] [NEW] Fullstack tests fail because a process is not killed properly
Public bug reported:
Fullstack tests have been failing quite often recently. Different tests fail
in different CI runs, but it looks like the culprit is the same each time:
one of the processes spawned during the test is not killed properly, hangs,
and the test hits a timeout exception.
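As a rough illustration of the failure mode (this is not the actual fullstack
fixture code, just a plain-Python sketch), stopping a spawned process safely
means sending SIGTERM, waiting a bounded time, and escalating to SIGKILL; if
the cleanup only sends SIGTERM and waits, a process that never handles the
signal hangs the test until the suite timeout:

    # Hypothetical illustration only (not neutron's fixture code): stop a
    # spawned process, escalating from SIGTERM to SIGKILL after a grace
    # period so a hung process cannot block the test until the global timeout.
    import signal
    import subprocess

    def stop_process(proc, grace_period=10.0):
        """Terminate a test-spawned process, force-killing it if it hangs."""
        proc.send_signal(signal.SIGTERM)
        try:
            proc.wait(timeout=grace_period)
        except subprocess.TimeoutExpired:
            # The process ignored or never handled SIGTERM; kill it outright.
            proc.kill()
            proc.wait()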
Examples:
http://logs.openstack.org/97/602497/5/check/neutron-fullstack/f110a1f/logs/testr_results.html.gz
http://logs.openstack.org/68/564668/7/check/neutron-fullstack-python36/c4223c2/logs/testr_results.html.gz
In the second example it looks like some process did not exit properly: http://logs.openstack.org/68/564668/7/check/neutron-fullstack-python36/c4223c2/logs/dsvm-fullstack-logs/TestOvsConnectivitySameNetwork.test_connectivity_GRE-l2pop-arp_responder,openflow-native_.txt.gz#_2018-10-16_02_43_49_755
and here it looks like the culprit is the openvswitch-agent: http://logs.openstack.org/68/564668/7/check/neutron-fullstack-python36/c4223c2/logs/dsvm-fullstack-logs/TestOvsConnectivitySameNetwork.test_connectivity_GRE-l2pop-arp_responder,openflow-native_/neutron-openvswitch-agent--2018-10-16--02-42-43-987526.txt.gz
Looking at the logs of this OVS agent, there is no log line like
"Agent caught SIGTERM, quitting daemon loop." at the end, so it looks like
the agent never handled the SIGTERM.
** Affects: neutron
Importance: High
Status: Confirmed
** Tags: fullstack gate-failure
https://bugs.launchpad.net/bugs/1798472