yahoo-eng-team mailing list archive, Message #89085
[Bug 1976461] Re: [Yoga] Octavia's LB VIPs not working with allow-address-pairs
Adding Neutron to the bug, as it appears Neutron is not filling out the
Logical_Switch_Port (LSP) correctly in the OVN northbound database for some reason.
For the Octavia VIP I see:
$ sudo ovn-nbctl find logical-switch-port
...
_uuid : 00c81b7f-56cb-44ab-9047-d310180f6a1b
addresses : ["fa:16:3e:8a:0f:41 192.168.0.59"]
dhcpv4_options : c8a7fb25-4c16-4ee0-9c1f-2cccef5576dd
dhcpv6_options : []
dynamic_addresses : []
enabled : false
external_ids : {"neutron:cidrs"="192.168.0.59/24", "neutron:device_id"=lb-6f63ef60-47ef-4a70-a008-016378d6adc2, "neutron:device_owner"=Octavia, "neutron:network_name"=neutron-e95d48e1-b476-49b1-9a58-5b4d5b8762b2, "neutron:port_fip"="10.78.95.100", "neutron:port_name"=octavia-lb-6f63ef60-47ef-4a70-a008-016378d6adc2, "neutron:project_id"="4956c242dd6d481a90d5d61217981e4f", "neutron:revision_number"="10", "neutron:security_group_ids"="a1faabe4-898d-42b9-b26c-d4d6c402d1b3"}
ha_chassis_group : []
name : "ae6c9e3f-5981-4d84-aeea-b71ed7c1f961"
options : {mcast_flood_reports="true", requested-chassis=""}
parent_name : []
port_security : ["fa:16:3e:8a:0f:41 192.168.0.59"]
tag : []
tag_request : []
type : ""
up : false
Here the type is not set to 'virtual', and the 'virtual-ip' and
'virtual-parents' options are also missing.
Pausing Neutron and setting these manually resolves the issue:
$ juju run-action neutron-api/0 pause
$ sudo ovn-nbctl add logical-switch-port 00c81b7f-56cb-44ab-9047-d310180f6a1b \
options virtual-ip="192.168.0.59"
$ sudo ovn-nbctl add logical-switch-port 00c81b7f-56cb-44ab-9047-d310180f6a1b \
options virtual-parents=\"bcb6da0d-bae0-48cb-9d4f-bb4676af7db2,34abcd0d-4032-4855-9af5-74125ca1a569\"
$ sudo ovn-nbctl set logical-switch-port 00c81b7f-56cb-44ab-9047-d310180f6a1b \
type=virtual
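Before checking the southbound side, the NB DB entry can be double-checked with ovn-nbctl's generic 'get' command (the UUID below is the VIP port from the output above; this is just a verification sketch):

```shell
# Confirm the LSP now carries type=virtual and the virtual-* options
# (UUID is the Octavia VIP port shown earlier in this bug).
sudo ovn-nbctl get logical-switch-port \
    00c81b7f-56cb-44ab-9047-d310180f6a1b type options
```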
$ sudo ovn-sbctl find port-binding type=virtual
_uuid : 5dcc583f-6e2e-4bc4-833e-823d2bfe02e8
chassis : []
datapath : fa349a7e-645b-4e26-8a79-b1866f75088d
encap : []
external_ids : {name=octavia-lb-6f63ef60-47ef-4a70-a008-016378d6adc2, "neutron:cidrs"="192.168.0.59/24", "neutron:device_id"=lb-6f63ef60-47ef-4a70-a008-016378d6adc2, "neutron:device_owner"=Octavia, "neutron:network_name"=neutron-e95d48e1-b476-49b1-9a58-5b4d5b8762b2, "neutron:port_fip"="10.78.95.100", "neutron:port_name"=octavia-lb-6f63ef60-47ef-4a70-a008-016378d6adc2, "neutron:project_id"="4956c242dd6d481a90d5d61217981e4f", "neutron:revision_number"="10", "neutron:security_group_ids"="a1faabe4-898d-42b9-b26c-d4d6c402d1b3"}
gateway_chassis : []
ha_chassis_group : []
logical_port : "ae6c9e3f-5981-4d84-aeea-b71ed7c1f961"
mac : ["fa:16:3e:8a:0f:41 192.168.0.59"]
nat_addresses : []
options : {mcast_flood_reports="true", requested-chassis="", virtual-ip="192.168.0.59", virtual-parents="bcb6da0d-bae0-48cb-9d4f-bb4676af7db2,34abcd0d-4032-4855-9af5-74125ca1a569"}
parent_port : []
requested_chassis : []
tag : []
tunnel_key : 5
type : virtual
up : true
virtual_parent : []
$ curl http://192.168.0.59
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
...
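For reference, the 'virtual-parents' values should correspond to the ports that carry the VIP in their allowed_address_pairs (the amphora ports). A hedged way to cross-check this on the Neutron side; the placeholder port ID is an assumption, not taken from this deployment:

```shell
# List every port that owns or claims the VIP address.
openstack port list --fixed-ip ip-address=192.168.0.59
# An amphora port should list the VIP as an allowed address pair;
# its ID is what belongs in the LSP's virtual-parents option.
openstack port show <amphora-port-id> -c id -c allowed_address_pairs
```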
** Also affects: neutron
Importance: Undecided
Status: New
** Changed in: charm-ovn-chassis
Status: New => Invalid
** Changed in: charm-ovn-central
Status: New => Invalid
** Changed in: charm-octavia
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1976461
Title:
[Yoga] Octavia's LB VIPs not working with allow-address-pairs
Status in OpenStack Octavia Charm:
Invalid
Status in charm-ovn-central:
Invalid
Status in charm-ovn-chassis:
Invalid
Status in neutron:
New
Bug description:
Hi team,
I am currently deploying with:
juju 2.9.31
MAAS 3.1
openstack/yoga, bundle: https://pastebin.canonical.com/p/Rw376CF4Dw/
Octavia: standalone setup
When I create an LB for my kubernetes cluster, I've noticed the LB is
unresponsive when I try to reach it from one of my VMs.
I can access the LB and confirm the amphora-haproxy namespace exists,
with the network interface attached and both the LB IP and the VIP
configured on it.
Trying to reach the LB from one of the k8s VMs results in a
timeout.
I can see the behavior changes according to which IP I try to connect
to on the LB.
In scenario (1), from the client VM > LB IP (not the VIP):
The connection works. This is what ovs-ofctl on the sending machine's hypervisor shows: https://pastebin.canonical.com/p/bZ77hhWgD6/
Traffic is correctly routed to one of the GENEVE tunnels, since the VM and the LB front-end IP are on the same tenant subnet.
In scenario (2), from the client VM > LB VIP (the address pair):
The connection does not work.
ovs-ofctl on the sending hypervisor shows: https://pastebin.canonical.com/p/SBmW97yHVr/
Traffic is dropped at the sending hypervisor.
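To pinpoint where the VIP traffic is dropped, a flow trace on the sending hypervisor can help. This is a sketch: the in_port, source MAC, and source IP are placeholders, not values taken from this deployment; the destination MAC and VIP are the ones shown above.

```shell
# Trace a synthetic TCP packet from the client VM towards the VIP
# through the integration bridge's OpenFlow tables.
sudo ovs-appctl ofproto/trace br-int \
    "in_port=<client-vm-ofport>,tcp,\
dl_src=<client-vm-mac>,dl_dst=fa:16:3e:8a:0f:41,\
nw_src=<client-vm-ip>,nw_dst=192.168.0.59,tp_dst=80"
```

The trace output shows each flow table the packet traverses and the rule that finally drops or forwards it, which makes it easier to see whether the drop happens before or after the virtual-port resolution.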
**** DETAILS OF MY CURRENT YOGA DEPLOYMENT ****
network openstack: https://pastebin.canonical.com/p/mfrPgVjyMp/
server and LB list: https://pastebin.canonical.com/p/VKjdHzTNvD/
port list: https://pastebin.canonical.com/p/trk2CPhDzf/
ovn-nbctl show: https://pastebin.canonical.com/p/njKjWGX5gX/
ovn-nbctl details of the VIP: https://pastebin.canonical.com/p/wwQy3HH4QR/
***********************************************
**** STEPS TO REPRODUCE ****
1) Deploy Openstack/Yoga with the bundle above
2) Create 2x backend nodes on a tenant network
3) Create an LB on the same tenant network
4) Access one of the backend nodes (or create a client VM for this test)
5) Try to reach the LB: the connection times out
****************************
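Steps 3 and 5 above can be sketched with the Octavia CLI as follows (a minimal example; the load balancer name, subnet ID, and VIP address are placeholders, not from this deployment):

```shell
# 3) Create an LB on the tenant network and wire up a listener and pool.
openstack loadbalancer create --name test-lb --vip-subnet-id <tenant-subnet-id>
openstack loadbalancer listener create --name http --protocol HTTP \
    --protocol-port 80 test-lb
openstack loadbalancer pool create --name http-pool --listener http \
    --protocol HTTP --lb-algorithm ROUND_ROBIN
# 5) From a client VM on the same tenant network, the VIP should answer:
curl http://<lb-vip-address>
```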
To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-octavia/+bug/1976461/+subscriptions