yahoo-eng-team mailing list archive: Message #16940
[Bug 1338470] [NEW] LBaaS Round Robin does not work as expected
Public bug reported:
Description of problem:
=======================
I configured a load-balancing pool with two members using the round-robin method.
My expectation was that each request would be directed to the next available pool member.
In other words, the expected result was:
Req #1 -> Member #1
Req #2 -> Member #2
Req #3 -> Member #1
Req #4 -> Member #2
and so on.
I configured the instances' guest image to reply to each request with the private IP address of the instance, so that I could easily see which member handled each request.
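For reference, a minimal responder along these lines could look like the following (a sketch only, not the actual guest setup; it assumes the interface is eth0 and a traditional netcat that accepts -p and -q):

# run on each pool member: answer every HTTP request with the
# instance's private IPv4 address
# NOTE: eth0 and "nc -q" are assumptions (traditional netcat)
MYIP=$(ip -4 addr show eth0 | awk '/inet /{sub(/\/.*/, "", $2); print $2}')
while true ; do
    printf 'HTTP/1.0 200 OK\r\n\r\n%s\n' "$MYIP" | nc -l -p 80 -q 1
done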
This is the result I witnessed:
# for i in {1..10} ; do curl -s 192.168.170.9 ; echo ; done
192.168.208.4
192.168.208.4
192.168.208.2
192.168.208.2
192.168.208.4
192.168.208.4
192.168.208.2
192.168.208.4
192.168.208.2
192.168.208.4
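As a side note, aggregating the same loop distinguishes an overall imbalance from the strict-ordering problem shown above (same VIP as in the report):

# count how many of 100 requests each member answered
for i in {1..100} ; do curl -s 192.168.170.9 ; echo ; done | sort | uniq -c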
Details about the pool: http://pastebin.com/index/MwRX7HCR
Version-Release number of selected component (if applicable):
=============================================================
Icehouse:
python-neutronclient-2.3.4-2
python-neutron-2014.1-35
openstack-neutron-2014.1-35
openstack-neutron-openvswitch-2014.1-35
haproxy-1.5-0.3.dev22.el7
How reproducible:
=================
100%
Steps to Reproduce:
===================
1. As detailed above, configure an LBaaS pool with the ROUND_ROBIN method and two members (a command sketch follows this list).
2.
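For reference, such a pool can be created with the Icehouse neutron CLI roughly as follows (a sketch only; the subnet ID, member addresses, and names are placeholders):

# create a round-robin HTTP pool with two members and a VIP
# NOTE: <subnet-id> is a placeholder
neutron lb-pool-create --name mypool --lb-method ROUND_ROBIN \
    --protocol HTTP --subnet-id <subnet-id>
neutron lb-member-create --address 192.168.208.2 --protocol-port 80 mypool
neutron lb-member-create --address 192.168.208.4 --protocol-port 80 mypool
neutron lb-vip-create --name myvip --protocol HTTP --protocol-port 80 \
    --subnet-id <subnet-id> mypool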
Additional info:
================
Tested on RHEL 7.
haproxy.cfg: http://pastebin.com/vuNe1p7H
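For context, the backend section that the LBaaS haproxy driver generates for a round-robin pool typically looks roughly like the following (an illustrative sketch; the pool and member IDs are placeholders, and the actual file is in the pastebin above):

# <pool-id> and <member-id-*> are placeholders
backend <pool-id>
    mode http
    balance roundrobin
    server <member-id-1> 192.168.208.2:80 weight 1
    server <member-id-2> 192.168.208.4:80 weight 1

With balance roundrobin, haproxy is expected to alternate between the two servers on successive connections, which is what makes the observed ordering surprising.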
** Affects: neutron
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338470
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338470/+subscriptions