yahoo-eng-team team mailing list archive
Message #89560
[Bug 1964117] Re: Unable to contact to IPv6 instance using ml2 ovs with ovs 2.16
** Changed in: openvswitch (Ubuntu Kinetic)
Status: In Progress => Fix Released
** Changed in: openvswitch (Ubuntu Jammy)
Status: New => Fix Released
** Changed in: openvswitch (Ubuntu Impish)
Status: New => Won't Fix
** Changed in: cloud-archive/yoga
Status: New => Fix Committed
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1964117
Title:
Unable to contact to IPv6 instance using ml2 ovs with ovs 2.16
Status in Ubuntu Cloud Archive:
New
Status in Ubuntu Cloud Archive xena series:
New
Status in Ubuntu Cloud Archive yoga series:
Fix Committed
Status in neutron:
Invalid
Status in openvswitch package in Ubuntu:
Fix Released
Status in openvswitch source package in Impish:
Won't Fix
Status in openvswitch source package in Jammy:
Fix Released
Status in openvswitch source package in Kinetic:
Fix Released
Bug description:
Connectivity is fine with OVS 2.15, but after upgrading OVS to 2.16 connectivity
to remote units over IPv6 is lost. The traffic appears to be dropped while being
processed by the openflow firewall associated with br-int.
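One way to localize the drop is ofproto/trace, which replays a synthetic packet through the flow tables and reports the action taken at each one. The sketch below is illustrative: the port number (in_port=3) and the MAC/IPv6 addresses are taken from the octavia-0 port and the MASTER amphora in this environment, and the command is echoed rather than executed so it can be reviewed before running with sudo on the affected host.

```shell
# Sketch: trace a synthetic ICMPv6 echo request (icmp_type=128) from the
# octavia-0 port (in_port=3, per the table=71 flows above) towards the
# MASTER amphora, to see which br-int table drops it. Echoed for review;
# run the printed command with sudo on the affected host.
TRACE="ovs-appctl ofproto/trace br-int \
in_port=3,icmp6,dl_src=fa:16:3e:79:b6:46,dl_dst=fa:16:3e:d2:32:e0,\
ipv6_src=fc00:92e3:d18a:36ed:f816:3eff:fe79:b646,\
ipv6_dst=fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0,\
icmp_type=128,icmp_code=0"
echo "$TRACE"
```

The trace output ends with a "Datapath actions" line; "drop" there, together with the last table visited, points at the flow responsible.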
The description below uses connectivity between Octavia units and
amphora to illustrate the issue but I don't think this issue is
related to Octavia.
OS: Ubuntu Focal
OVS: 2.16.0-0ubuntu2.1~cloud0
Kernel: 5.4.0-100-generic
With a fresh install of xena, or after an upgrade of OVS from 2.15 (wallaby) to 2.16 (xena), connectivity from the octavia units to the amphora is broken.
* Wallaby works as expected
* Disabling port security on the octavia units octavia-health-manager-octavia-N-listen-port restores connectivity.
* The flows on br-int and br-tun are the same after the upgrade from 2.15 to 2.16
* Manually inserting permissive flows into the br-int flow table also restores connectivity.
* Testing environment is Openstack on top of Openstack.
The text below is also reproduced at
https://pastebin.ubuntu.com/p/hRWMx7d9HG/ as it may be easier to read
in a pastebin.
Below is a reproduction of the issue: wallaby is deployed first to validate
connectivity, then openvswitch is upgraded.
Amphora:
$ openstack loadbalancer amphora list
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+-------------+
| id | loadbalancer_id | status | role | lb_network_ip | ha_ip |
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+-------------+
| 30afe97a-bcd4-4537-a621-830de87568b0 | ae840c86-768d-4aae-b804-8fddf2880c78 | ALLOCATED | MASTER | fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0 | 10.42.0.254 |
| 61e66eff-e83b-4a21-bc1f-1e1a0037b191 | ae840c86-768d-4aae-b804-8fddf2880c78 | ALLOCATED | BACKUP | fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b | 10.42.0.254 |
+--------------------------------------+--------------------------------------+-----------+--------+-----------------------------------------+-------------+
$ openstack router show lb-mgmt -c name -c interfaces_info
+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------+
| interfaces_info | [{"port_id": "191a2d27-9b15-4938-a818-b48fc405a27a", "ip_address": "fc00:92e3:d18a:36ed::", "subnet_id": "8b4307a7-08a1-4f2b-a7e0-ce45a7ad0b03"}] |
| name | lb-mgmt |
+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------+
Looking at the ports on that subnet, there is a port for each of the octavia units (named octavia-health-manager-octavia-N-listen-port), a port on
each of the amphora listed above, and a port for the lb-mgmt router.
$ openstack port list | grep 8b4307a7-08a1-4f2b-a7e0-ce45a7ad0b03
| 0943521f-2c1f-4152-8250-48d310e3918f | octavia-health-manager-octavia-1-listen-port | fa:16:3e:70:70:c9 | ip_address='fc00:92e3:d18a:36ed:f816:3eff:fe70:70c9', subnet_id='8b4307a7-08a1-4f2b-a7e0-ce45a7ad0b03' | ACTIVE |
| 160b8854-0f20-471b-9ac4-53f8891f4edb | | fa:16:3e:45:7a:a6 | ip_address='fc00:92e3:d18a:36ed:f816:3eff:fe45:7aa6', subnet_id='8b4307a7-08a1-4f2b-a7e0-ce45a7ad0b03' | ACTIVE |
| 191a2d27-9b15-4938-a818-b48fc405a27a | | fa:16:3e:3e:bd:45 | ip_address='fc00:92e3:d18a:36ed::', subnet_id='8b4307a7-08a1-4f2b-a7e0-ce45a7ad0b03' | ACTIVE |
| 2428b1d4-0cb2-420b-81a5-5e6ae34e4557 | octavia-health-manager-octavia-2-listen-port | fa:16:3e:05:f3:2a | ip_address='fc00:92e3:d18a:36ed:f816:3eff:fe05:f32a', subnet_id='8b4307a7-08a1-4f2b-a7e0-ce45a7ad0b03' | ACTIVE |
| 2ea37e19-bd60-43cb-8191-aaf179667b1a | | fa:16:3e:d2:32:e0 | ip_address='fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0', subnet_id='8b4307a7-08a1-4f2b-a7e0-ce45a7ad0b03' | ACTIVE |
| 76742ab6-39ee-4b06-a37d-f2ecad2c892a | octavia-health-manager-octavia-0-listen-port | fa:16:3e:79:b6:46 | ip_address='fc00:92e3:d18a:36ed:f816:3eff:fe79:b646', subnet_id='8b4307a7-08a1-4f2b-a7e0-ce45a7ad0b03' | ACTIVE |
| ffb3d106-7a14-4b4e-8300-2dd9ec9bc642 | | fa:16:3e:69:c8:5b | ip_address='fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b', subnet_id='8b4307a7-08a1-4f2b-a7e0-ce45a7ad0b03' | ACTIVE |
The ports attached to the octavia units have port security enabled:
$ openstack port show octavia-health-manager-octavia-0-listen-port -c name -c device_owner -c security_group_ids -c port_security_enabled -c id
+-----------------------+----------------------------------------------+
| Field | Value |
+-----------------------+----------------------------------------------+
| device_owner | neutron:LOADBALANCERV2 |
| id | 76742ab6-39ee-4b06-a37d-f2ecad2c892a |
| name | octavia-health-manager-octavia-0-listen-port |
| port_security_enabled | True |
| security_group_ids | 04582e3a-3093-4158-b66e-dfdd32665108 |
+-----------------------+----------------------------------------------+
$ openstack security group rule list 04582e3a-3093-4158-b66e-dfdd32665108
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+----------------------+
| ID | IP Protocol | Ethertype | IP Range | Port Range | Direction | Remote Security Group | Remote Address Group |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+----------------------+
| 3bde542e-6972-4f8b-898d-a773505b25eb | None | IPv4 | 0.0.0.0/0 | | egress | None | None |
| 89c4f2ed-5431-434e-bec5-343f41d28a68 | None | IPv6 | ::/0 | | egress | None | None |
| a26b3608-466a-4422-adb5-b86a6dff0c7c | ipv6-icmp | IPv6 | ::/0 | | ingress | None | None |
| bfc54f3f-8851-490e-b9a0-db48f43946ca | udp | IPv6 | ::/0 | 5555:5555 | ingress | None | None |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+----------------------+
Connectivity between the octavia units and the amphora is working:
$ ping -c1 fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0
PING fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0(fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0) 56 data bytes
64 bytes from fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0: icmp_seq=1 ttl=64 time=3.82 ms
--- fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 3.820/3.820/3.820/0.000 ms
$ ping -c1 fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b
PING fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b(fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b) 56 data bytes
64 bytes from fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b: icmp_seq=1 ttl=64 time=4.12 ms
--- fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 4.123/4.123/4.123/0.000 ms
$ nc -zvw2 fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b 9443
Connection to fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b 9443 port [tcp/*] succeeded!
$ nc -zvw2 fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0 9443
Connection to fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0 9443 port [tcp/*] succeeded!
Take a dump of the flows before the upgrade:
sudo ovs-ofctl dump-flows br-int --no-stats > br-int-ovs-2.15-flows.txt
sudo ovs-ofctl dump-flows br-tun --no-stats > br-tun-ovs-2.15-flows.txt
Switch apt sources to xena:
$ sudo sed -i -e 's/wallaby/xena/' /etc/apt/sources.list.d/cloud-archive.list
$ sudo apt update
$ apt-cache policy openvswitch-switch
openvswitch-switch:
Installed: 2.15.0-0ubuntu3.1~cloud0
Candidate: 2.16.0-0ubuntu2.1~cloud0
Version table:
2.16.0-0ubuntu2.1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu focal-updates/xena/main amd64 Packages
*** 2.15.0-0ubuntu3.1~cloud0 100
100 /var/lib/dpkg/status
2.13.5-0ubuntu1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
2.13.3-0ubuntu0.20.04.2 500
500 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages
2.13.0-0ubuntu1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/main amd64 Packages
Upgrade openvswitch-switch and restart services:
$ sudo apt install openvswitch-switch
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
openvswitch-common python3-openvswitch
Suggested packages:
openvswitch-doc
The following packages will be upgraded:
openvswitch-common openvswitch-switch python3-openvswitch
3 upgraded, 0 newly installed, 0 to remove and 79 not upgraded.
Need to get 2930 kB of archives.
After this operation, 285 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://ubuntu-cloud.archive.canonical.com/ubuntu focal-updates/xena/main amd64 python3-openvswitch all 2.16.0-0ubuntu2.1~cloud0 [111 kB]
Get:2 http://ubuntu-cloud.archive.canonical.com/ubuntu focal-updates/xena/main amd64 openvswitch-common amd64 2.16.0-0ubuntu2.1~cloud0 [1214 kB]
Get:3 http://ubuntu-cloud.archive.canonical.com/ubuntu focal-updates/xena/main amd64 openvswitch-switch amd64 2.16.0-0ubuntu2.1~cloud0 [1604 kB]
Fetched 2930 kB in 1s (3485 kB/s)
(Reading database ... 92182 files and directories currently installed.)
Preparing to unpack .../python3-openvswitch_2.16.0-0ubuntu2.1~cloud0_all.deb ...
Unpacking python3-openvswitch (2.16.0-0ubuntu2.1~cloud0) over (2.15.0-0ubuntu3.1~cloud0) ...
Preparing to unpack .../openvswitch-common_2.16.0-0ubuntu2.1~cloud0_amd64.deb ...
Unpacking openvswitch-common (2.16.0-0ubuntu2.1~cloud0) over (2.15.0-0ubuntu3.1~cloud0) ...
Preparing to unpack .../openvswitch-switch_2.16.0-0ubuntu2.1~cloud0_amd64.deb ...
Unpacking openvswitch-switch (2.16.0-0ubuntu2.1~cloud0) over (2.15.0-0ubuntu3.1~cloud0) ...
Setting up python3-openvswitch (2.16.0-0ubuntu2.1~cloud0) ...
Setting up openvswitch-common (2.16.0-0ubuntu2.1~cloud0) ...
Setting up openvswitch-switch (2.16.0-0ubuntu2.1~cloud0) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for systemd (245.4-4ubuntu3.15) ...
$ sudo systemctl restart ovs-vswitchd.service
$ sudo systemctl restart neutron-openvswitch-agent
$ sudo systemctl restart neutron-l3-agent.service
Retest connectivity:
$ ping -c1 fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0
PING fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0(fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0) 56 data bytes
--- fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
$ ping -c1 fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b
PING fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b(fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b) 56 data bytes
--- fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
$ nc -zvw2 fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b 9443
nc: connect to fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b port 9443 (tcp) timed out: Operation now in progress
$ nc -zvw2 fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0 9443
nc: connect to fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0 port 9443 (tcp) timed out: Operation now in progress
Check for changes in flows:
sudo ovs-ofctl dump-flows br-int --no-stats > br-int-ovs-2.16-flows.txt
sudo ovs-ofctl dump-flows br-tun --no-stats > br-tun-ovs-2.16-flows.txt
$ diff <(sed -e 's!cookie=[^,]*!cookie=COOKIE!g' br-int-ovs-2.15-flows.txt) <(sed -e 's!cookie=[^,]*!cookie=COOKIE!g' br-int-ovs-2.16-flows.txt)
23,25d22
< cookie=COOKIE, table=71, priority=95,icmp6,reg5=0x3,in_port=3,dl_src=fa:16:3e:79:b6:46,ipv6_src=fe80::f816:3eff:fe79:b646,icmp_type=130 actions=resubmit(,94)
< cookie=COOKIE, table=71, priority=95,icmp6,reg5=0x3,in_port=3,dl_src=fa:16:3e:79:b6:46,ipv6_src=fe80::f816:3eff:fe79:b646,icmp_type=133 actions=resubmit(,94)
< cookie=COOKIE, table=71, priority=95,icmp6,reg5=0x3,in_port=3,dl_src=fa:16:3e:79:b6:46,ipv6_src=fe80::f816:3eff:fe79:b646,icmp_type=135 actions=resubmit(,94)
29c26,28
< cookie=COOKIE, table=71, priority=95,icmp6,reg5=0x3,in_port=3,icmp_type=136,nd_target=fe80::f816:3eff:fe79:b646 actions=resubmit(,94)
---
> cookie=COOKIE, table=71, priority=95,icmp6,reg5=0x3,in_port=3,dl_src=fa:16:3e:79:b6:46,ipv6_src=fe80::f816:3eff:fe79:b646,icmp_type=130 actions=resubmit(,94)
> cookie=COOKIE, table=71, priority=95,icmp6,reg5=0x3,in_port=3,dl_src=fa:16:3e:79:b6:46,ipv6_src=fe80::f816:3eff:fe79:b646,icmp_type=133 actions=resubmit(,94)
> cookie=COOKIE, table=71, priority=95,icmp6,reg5=0x3,in_port=3,dl_src=fa:16:3e:79:b6:46,ipv6_src=fe80::f816:3eff:fe79:b646,icmp_type=135 actions=resubmit(,94)
30a30
> cookie=COOKIE, table=71, priority=95,icmp6,reg5=0x3,in_port=3,icmp_type=136,nd_target=fe80::f816:3eff:fe79:b646 actions=resubmit(,94)
32d31
< cookie=COOKIE, table=71, priority=80,udp6,reg5=0x3,in_port=3,dl_src=fa:16:3e:79:b6:46,ipv6_src=fe80::f816:3eff:fe79:b646,tp_src=546,tp_dst=547 actions=resubmit(,73)
33a33
> cookie=COOKIE, table=71, priority=80,udp6,reg5=0x3,in_port=3,dl_src=fa:16:3e:79:b6:46,ipv6_src=fe80::f816:3eff:fe79:b646,tp_src=546,tp_dst=547 actions=resubmit(,73)
37d36
< cookie=COOKIE, table=71, priority=65,ipv6,reg5=0x3,in_port=3,dl_src=fa:16:3e:79:b6:46,ipv6_src=fe80::f816:3eff:fe79:b646 actions=ct(table=72,zone=NXM_NX_REG6[0..15])
38a38
> cookie=COOKIE, table=71, priority=65,ipv6,reg5=0x3,in_port=3,dl_src=fa:16:3e:79:b6:46,ipv6_src=fe80::f816:3eff:fe79:b646 actions=ct(table=72,zone=NXM_NX_REG6[0..15])
$ diff <(sed -e 's!cookie=[^,]*!cookie=COOKIE!g' br-tun-ovs-2.15-flows.txt) <(sed -e 's!cookie=[^,]*!cookie=COOKIE!g' br-tun-ovs-2.16-flows.txt)
2d1
< cookie=COOKIE, priority=1,in_port=2 actions=resubmit(,3)
3a3
> cookie=COOKIE, priority=1,in_port=2 actions=resubmit(,3)
20,21d19
< cookie=COOKIE, table=20, priority=2,dl_vlan=1,dl_dst=fa:16:3e:45:7a:a6 actions=strip_vlan,load:0x2aa->NXM_NX_TUN_ID[],output:2
< cookie=COOKIE, table=20, priority=2,dl_vlan=1,dl_dst=fa:16:3e:3e:bd:45 actions=strip_vlan,load:0x2aa->NXM_NX_TUN_ID[],output:2
23c21,22
< cookie=COOKIE, table=20, priority=2,dl_vlan=1,dl_dst=fa:16:3e:70:70:c9 actions=strip_vlan,load:0x2aa->NXM_NX_TUN_ID[],output:4
---
> cookie=COOKIE, table=20, priority=2,dl_vlan=1,dl_dst=fa:16:3e:3e:bd:45 actions=strip_vlan,load:0x2aa->NXM_NX_TUN_ID[],output:2
> cookie=COOKIE, table=20, priority=2,dl_vlan=1,dl_dst=fa:16:3e:45:7a:a6 actions=strip_vlan,load:0x2aa->NXM_NX_TUN_ID[],output:2
25a25
> cookie=COOKIE, table=20, priority=2,dl_vlan=1,dl_dst=fa:16:3e:70:70:c9 actions=strip_vlan,load:0x2aa->NXM_NX_TUN_ID[],output:4
27,28d26
< cookie=COOKIE, table=20, hard_timeout=300, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:3e:bd:45 actions=load:0->NXM_OF_VLAN_TCI[],load:0x2aa->NXM_NX_TUN_ID[],output:2
< cookie=COOKIE, table=20, hard_timeout=300, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:70:70:c9 actions=load:0->NXM_OF_VLAN_TCI[],load:0x2aa->NXM_NX_TUN_ID[],output:4
30a29,30
> cookie=COOKIE, table=20, hard_timeout=300, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:70:70:c9 actions=load:0->NXM_OF_VLAN_TCI[],load:0x2aa->NXM_NX_TUN_ID[],output:4
> cookie=COOKIE, table=20, hard_timeout=300, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:3e:bd:45 actions=load:0->NXM_OF_VLAN_TCI[],load:0x2aa->NXM_NX_TUN_ID[],output:2
The only changes are the cookie values and the order in which dump-flows has
written them out; sorting the normalized output shows the flows themselves are unchanged:
$ diff <(sed -e 's!cookie=[^,]*!cookie=COOKIE!g' br-int-ovs-2.15-flows.txt | sort) <(sed -e 's!cookie=[^,]*!cookie=COOKIE!g' br-int-ovs-2.16-flows.txt | sort)
$
$ diff <(sed -e 's!cookie=[^,]*!cookie=COOKIE!g' br-tun-ovs-2.15-flows.txt | sort) <(sed -e 's!cookie=[^,]*!cookie=COOKIE!g' br-tun-ovs-2.16-flows.txt | sort)
$
Connectivity can be restored by disabling port security on the octavia
ports:
$ openstack port set --no-security-group --disable-port-security
octavia-health-manager-octavia-0-listen-port
$ ping -c1 fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0
PING fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0(fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0) 56 data bytes
64 bytes from fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0: icmp_seq=1 ttl=64 time=2.96 ms
--- fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.955/2.955/2.955/0.000 ms
$ ping -c1 fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b
PING fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b(fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b) 56 data bytes
64 bytes from fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b: icmp_seq=1 ttl=64 time=2.11 ms
--- fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.113/2.113/2.113/0.000 ms
$ nc -zvw2 fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b 9443
Connection to fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b 9443 port [tcp/*] succeeded!
$ nc -zvw2 fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0 9443
Connection to fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0 9443 port [tcp/*] succeeded!
Re-enable port security:
$ openstack port set --security-group
04582e3a-3093-4158-b66e-dfdd32665108 --enable-port-security octavia-
health-manager-octavia-0-listen-port
Connectivity can also be restored by manually inserting permissive
flows into the br-int flow table:
$ sudo ovs-ofctl add-flow br-int table=0,priority=96,icmp6,in_port=3,icmp_type=128,actions=NORMAL
$ sudo ovs-ofctl add-flow br-int table=0,priority=96,icmp6,in_port=2,icmp_type=129,actions=NORMAL
$ ping -c1 fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0
PING fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0(fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0) 56 data bytes
64 bytes from fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0: icmp_seq=1 ttl=64 time=4.23 ms
--- fc00:92e3:d18a:36ed:f816:3eff:fed2:32e0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 4.230/4.230/4.230/0.000 ms
$ sudo ovs-ofctl add-flow br-int table=0,priority=96,ipv6,in_port=3,actions=NORMAL
$ sudo ovs-ofctl add-flow br-int table=0,priority=96,ipv6,in_port=2,actions=NORMAL
$ nc -zvw2 fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b 9443
Connection to fc00:92e3:d18a:36ed:f816:3eff:fe69:c85b 9443 port [tcp/*] succeeded!
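Once testing is complete, the temporary permissive flows should be removed so the security-group firewall is authoritative again. A sketch of the cleanup (commands are built and echoed for review rather than executed; run them with sudo on the affected host; --strict is needed so del-flows matches the priority exactly):

```shell
# Sketch: del-flows commands for the four temporary priority-96 flows
# added above. Echoed rather than executed so they can be reviewed first.
CLEANUP=""
for FLOW in \
    "table=0,priority=96,icmp6,in_port=3,icmp_type=128" \
    "table=0,priority=96,icmp6,in_port=2,icmp_type=129" \
    "table=0,priority=96,ipv6,in_port=3" \
    "table=0,priority=96,ipv6,in_port=2"
do
    CLEANUP="${CLEANUP}ovs-ofctl --strict del-flows br-int ${FLOW}
"
done
printf '%s' "$CLEANUP"
```

Without --strict, del-flows ignores the priority field and would also delete any broader flows matching the same fields.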
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1964117/+subscriptions