[Bug 1403001] Re: LBaaS VIP does not work with IPv6 addresses because haproxy cannot bind socket
** Changed in: neutron
Status: Fix Committed => Fix Released
** Changed in: neutron
Milestone: None => liberty-1
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403001
Title:
LBaaS VIP does not work with IPv6 addresses because haproxy cannot
bind socket
Status in OpenStack Neutron (virtual network service):
Fix Released
Bug description:
Description of problem:
=======================
An IPv6 VIP remains in ERROR state because haproxy cannot bind the socket.
neutron.services.loadbalancer.agent.agent_manager Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/agent/agent_manager.py", line 214, in create_vip
    driver.create_vip(vip)
  File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 318, in create_vip
    self._refresh_device(vip['pool_id'])
  File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 315, in _refresh_device
    self.deploy_instance(logical_config)
  File "/usr/lib/python2.7/site-packages/neutron/openstack/common/lockutils.py", line 249, in inner
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 311, in deploy_instance
    self.create(logical_config)
  File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 92, in create
    self._spawn(logical_config)
  File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 115, in _spawn
    ns.netns.execute(cmd)
  File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 550, in execute
    check_exit_code=check_exit_code, extra_ok_codes=extra_ok_codes)
  File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 84, in execute
    raise RuntimeError(m)
RuntimeError:
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec',
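For reference, the same bind failure can be triggered outside the agent by starting haproxy by hand inside the load balancer's namespace. A minimal sketch, assuming the driver's qlbaas-<pool_id> namespace naming and a config file named "conf" in the same state directory as the stats socket shown in the haproxy configuration attached below (both paths are deployment assumptions):
# pool_id=2c18a738-05f4-4099-8348-94575c9ed290
# ip netns exec qlbaas-$pool_id haproxy -d -f /var/lib/neutron/lbaas/$pool_id/conf
With -d haproxy stays in the foreground, so the "cannot bind socket" alert is printed directly to the terminal.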
Version-Release number of selected component (if applicable):
=============================================================
openstack-neutron-2014.2.1-2.el7ost.noarch
haproxy-1.5.2-3.el7_0.x86_64
How reproducible:
=================
2/2 (reproduced on both attempts)
Steps to Reproduce:
===================
1. Spawn two instances and wait for them to become ACTIVE
Via tenant_a:
nova boot tenant_a_instance --flavor m1.small --image <image_id> --min-count 2 --key-name tenant_a_keypair --security-groups default --nic net-id=<internal_ipv4_a_id> --nic net-id=<tenant_a_radvd_stateful_id>
2. Retrieve your instances' IPv6 addresses, your tenant ID, and the ID of the subnet you are about to use.
You may use any IPv6 subnet; in this example we'll use tenant_a_radvd_stateful_subnet
# nova list | awk '/tenant_a_instance/ {print $12}' | cut -d"=" -f2 | sed -e 's/;//'
# neutron subnet-list | awk '/tenant_a_radvd_stateful_subnet/ {print $2}'
3. Create a LBaaS pool
# neutron lb-pool-create --lb-method ROUND_ROBIN --name Ipv6_LBaaS --protocol HTTP --subnet-id c54f8745-2aba-42da-8845-15050db1d5d1
4. Add members to the pool
# neutron lb-member-create Ipv6_LBaaS --address 2001:65:65:65:f816:3eff:feda:b05e --protocol-port 80
# neutron lb-member-create Ipv6_LBaaS --address 2001:65:65:65:f816:3eff:fe82:5d8 --protocol-port 80
5. Create a VIP:
# neutron lb-vip-create Ipv6_LBaaS --name Ipv6_LBaaS_VIP --protocol-port 80 --protocol HTTP --subnet-id 0458273a-efe8-4d37-b2a0-e11cbd5e4d13
6. Check the VIP status:
# neutron lb-vip-show Ipv6_LBaaS_VIP | grep status
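If the status comes back ERROR, the LBaaS agent log carries the haproxy alert. A quick way to pull the relevant lines (the log path is the usual RHEL-OSP default and may differ in your deployment):
# grep -A 2 'cannot bind socket' /var/log/neutron/lbaas-agent.log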
Actual results:
===============
1. status = ERROR
2. lbaas-agent.log (attached):
TRACE neutron.services.loadbalancer.agent.agent_manager Stderr: '[ALERT] 349/101731 (20878) : Starting frontend fcb9db64-e877-4e95-a86f-fed6d1b244c2: cannot bind socket [2001:64:64:64::a:80]\n'
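One common reason for bind() failing on an IPv6 address is that the address is still "tentative" (duplicate address detection has not finished) or was never configured on the VIP port inside the namespace; Linux refuses to bind a tentative IPv6 address. Whether that is the case here can be checked with a sketch like the following, again assuming the driver's qlbaas-<pool_id> namespace naming:
# ip netns exec qlbaas-2c18a738-05f4-4099-8348-94575c9ed290 ip -6 addr show
An address flagged "tentative" cannot be bound yet; a missing address points at the port plugging rather than at haproxy itself.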
Expected results:
=================
The VIP should move to ACTIVE, with haproxy bound to the IPv6 VIP address.
Additional info:
================
1. Tested with RHEL7
2. haproxy configuration:
global
    daemon
    user nobody
    group haproxy
    log /dev/log local0
    log /dev/log local1 notice
    stats socket /var/lib/neutron/lbaas/2c18a738-05f4-4099-8348-94575c9ed290/sock mode 0666 level user

defaults
    log global
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend cb833240-d5ed-43b9-9ef1-5bc70e961366
    option tcplog
    bind 2001:65:65:65:f816:3eff:fe86:d7ce:80
    mode http
    default_backend 2c18a738-05f4-4099-8348-94575c9ed290
    option forwardfor

backend 2c18a738-05f4-4099-8348-94575c9ed290
    mode http
    balance roundrobin
    option forwardfor
    timeout check 3s
    option httpchk GET /
    http-check expect rstatus 200
    server a2b475f0-3247-49d4-8e04-bf570ffc9fb2 2001:65:65:65:f816:3eff:fe82:5d8:80 weight 1 check inter 3s fall 1
    server ab96b468-3950-47ea-a37b-f9b9fab7485b 2001:65:65:65:f816:3eff:feda:b05e:80 weight 1 check inter 3s fall 1
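As an aside, haproxy also accepts the square-bracket form for IPv6 binds, which keeps the port visually separate from the last hextet of the address. The frontend above would then read (a sketch for comparison only, not necessarily the fix that landed for this bug):
frontend cb833240-d5ed-43b9-9ef1-5bc70e961366
    option tcplog
    bind [2001:65:65:65:f816:3eff:fe86:d7ce]:80
    mode http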
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1403001/+subscriptions