yahoo-eng-team mailing list archive: Message #55227
[Bug 1614436] [NEW] Creation of loadbalancer fails with plug vip exception
Public bug reported:
Here is the scenario:
I have two compute nodes with the following bridge mappings (a sketch of the matching agent configuration is shown after the list):
Compute node 1:
1. physnet3:br-hed0 (This is the octavia-mgt-network)
2. physnet2:br-hed2
Compute node 2:
1. physnet3:br-hed0 (This is the octavia-mgt-network)
2. physnet1:br-hed1
3. physnet2:br-hed2
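For reference, the layout above corresponds to Neutron OVS agent settings roughly like the following on each host (the file path and exact layout are assumptions about this deployment):

# compute node 1, e.g. /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = physnet3:br-hed0,physnet2:br-hed2

# compute node 2
[ovs]
bridge_mappings = physnet3:br-hed0,physnet1:br-hed1,physnet2:br-hed2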
Now, if I create a load balancer with its VIP in physnet1, Nova schedules the amphora instance on compute node 1. However, because compute node 1 has no physnet1 mapping, Octavia fails to plug the amphora into the VIP network.
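The failing request is an ordinary load balancer create against a subnet on physnet1, roughly as below (the subnet placeholder is illustrative; this assumes the neutron-lbaas CLI front end with the Octavia driver):

neutron lbaas-loadbalancer-create --name lb1 <subnet-on-physnet1>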
Expected result:
Octavia should internally check whether the availability zone on which Nova schedules the amphora image has a mapping for the required VIP network.
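As an illustration only (this is not existing Octavia code), such a check could be made against the Neutron agent API before plugging the VIP; the function name and the authenticated neutronclient instance below are assumptions:

# Hedged sketch: decide whether a compute host can reach the VIP network's
# physical network. `neutron` is assumed to be an authenticated
# neutronclient.v2_0.client.Client; `physnet` comes from the network's
# provider:physical_network attribute (physnet1 in this bug).
def host_has_physnet(neutron, host, physnet):
    agents = neutron.list_agents(host=host,
                                 agent_type='Open vSwitch agent')['agents']
    for agent in agents:
        mappings = agent.get('configurations', {}).get('bridge_mappings', {})
        if physnet in mappings:
            return True
    return False

With a check like this, the controller worker (or the placement step) could fail fast or retry scheduling instead of raising PlugVIPException after the amphora is already booted.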
Here are the VIP network details:
stack@padawan-cp1-c1-m1-mgmt:~/scratch/ansible/next/hos/ansible$ neutron net-show net1
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-07-29T03:45:02                  |
| description               |                                      |
| id                        | cd5a5e69-f810-4f08-ad9f-72f6184754af |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | net1                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 1442                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 115f7f23-68e2-4cba-9209-97d362612a7f |
| tags                      |                                      |
| tenant_id                 | 6b192dcb6a704f72b039d0552bec5e11     |
| updated_at                | 2016-07-29T03:45:02                  |
+---------------------------+--------------------------------------+
stack@padawan-cp1-c1-m1-mgmt:~/scratch/ansible/next/hos/ansible$
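The mismatch can be confirmed by hand by comparing the network's physical network with the bridge mappings the compute host's OVS agent reports, for example:

neutron net-show net1            # provider:physical_network is physnet1
neutron agent-list               # find the Open vSwitch agent on compute node 1
neutron agent-show <agent-id>    # 'configurations' lists that host's bridge_mappings (no physnet1)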
Here is the exception from octavia-worker.log:
"/var/log/octavia/octavia-worker.log" [readonly] 2554L, 591063C 1,1 Top
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher result = func(ctxt, **new_args)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/queue/endpoint.py", line 45, in create_load_balancer
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher self.worker.create_load_balancer(load_balancer_id)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/worker/controller_worker.py", line 322, in create_load_balancer
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher post_lb_amp_assoc.run()
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 230, in run
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher for _state in self.run_iter(timeout=timeout):
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 308, in run_iter
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher failure.Failure.reraise_if_any(fails)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/types/failure.py", line 336, in reraise_if_any
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher failures[0].reraise()
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/types/failure.py", line 343, in reraise
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher six.reraise(*self._exc_info)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 82, in _execute_task
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher result = task.execute(**arguments)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/worker/tasks/network_tasks.py", line 279, in execute
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher loadbalancer.vip)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 278, in plug_vip
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher subnet.network_id)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 93, in _plug_amphora_vip
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher raise base.PlugVIPException(message)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher PlugVIPException: Error plugging amphora (compute_id: 85e75aeb-4ce8-4f26-89ed-6b6a9a006b12) into vip network cd5a5e69-f810-4f08-ad9f-72f6184754af.
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher
The same issue will arise with spare pools, since spare amphorae are scheduled before any VIP network is known.
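(Spare amphorae are pre-booted by the house-keeping service, for example when octavia.conf enables a pool as below, so the same placement mismatch can occur there as well; the value shown is only an example.)

[house_keeping]
spare_amphora_pool_size = 1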
This issue is seen in stable/mitaka.
** Affects: neutron
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1614436