[Bug 1497272] Re: L3 HA: Unstable rescheduling time for keepalived v1.2.7
Reviewed: https://review.openstack.org/430206
Committed: https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=430909f6e15e56308c3895007e20a91f73b9412a
Submitter: Jenkins
Branch: master
commit 430909f6e15e56308c3895007e20a91f73b9412a
Author: John Davidge <john.davidge@xxxxxxxxxxxxx>
Date: Tue Feb 7 11:20:31 2017 +0000
[networking] Add a note on bug in keepalived
Describes how a bug in keepalived v1.2.15 and earlier can affect the
operation of neutron features, and recommends upgrading to a newer
version to avoid problems.
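(To check whether a deployed node falls into the affected range, the installed version can be queried directly, for example:

    $ keepalived --version

and compared against v1.2.15.)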
Change-Id: I05de49e0043347b2cfcce3af8cf68796f70334b9
Closes-Bug: #1497272
** Changed in: openstack-manuals
Status: In Progress => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497272
Title:
L3 HA: Unstable rescheduling time for keepalived v1.2.7
Status in neutron:
Triaged
Status in openstack-ansible:
Fix Released
Status in openstack-manuals:
Fix Released
Bug description:
I have tested L3 HA on an environment with 3 controllers and 1 compute node (Kilo) with this simple scenario (a measurement sketch follows the list):
1) ping the VM by its floating IP
2) disable the master l3-agent (the one whose ha_state is active)
3) wait for the pings to resume and for another agent to become active
4) check the number of packets that were lost
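A minimal sketch of how step 1 and step 4 could be scripted (the floating IP and ping count here are hypothetical; it assumes the failover in steps 2-3 is triggered separately, e.g. by stopping neutron-l3-agent on the active node):

    import subprocess

    # Hypothetical values for illustration; substitute your own.
    FLOATING_IP = "203.0.113.10"
    PING_COUNT = 120

    # Ping the VM's floating IP while the failover happens elsewhere,
    # then pull the packet-loss summary line out of ping's output.
    result = subprocess.run(
        ["ping", "-c", str(PING_COUNT), "-i", "0.2", FLOATING_IP],
        capture_output=True, text=True,
    )
    for line in result.stdout.splitlines():
        if "packet loss" in line:
            print(line)  # e.g. "120 packets transmitted, 116 received, 3% packet loss"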
My results are as follows:
1) When max_l3_agents_per_router=2, 3 to 4 packets were lost.
2) When max_l3_agents_per_router=3 or 0 (meaning the router is scheduled on every agent), 10 to 70 packets were lost.
I should mention that in both cases there was only one HA router.
It was expected that fewer packets would be lost when
max_l3_agents_per_router=3 (or 0).
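For reference, the scheduling behaviour under test is controlled by options along these lines in neutron.conf (a minimal illustration assuming Kilo-era option names; the values shown are the ones from the test, not recommendations):

    [DEFAULT]
    l3_ha = True
    # 0 means the router is scheduled on every l3-agent
    max_l3_agents_per_router = 2
    min_l3_agents_per_router = 2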
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497272/+subscriptions