yahoo-eng-team team mailing list archive - Message #64717
[Bug 1697243] [NEW] ovs bridge flow table is dropped by unknown cause
Public bug reported:
Hi,
My OpenStack deployment has a provider network whose OVS bridge is "provision". It had been running fine, but after several hours the network broke down and I found that the bridge's flow table was empty.
Is there a way to trace changes to a bridge's flow table?
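For reference, one possible way to watch for flow-table changes (a sketch assuming the standard Open vSwitch CLI tools are installed; "provision" is the bridge name from the output below):
# Print flow-table change events as they happen (flow monitor):
ovs-ofctl monitor provision watch:
# Snoop the OpenFlow messages between ovs-vswitchd and its controllers
# (e.g. the neutron-openvswitch-agent):
ovs-ofctl snoop provision
# Raise ovs-vswitchd log verbosity for the OpenFlow layer:
ovs-appctl vlog/set ofproto:file:dbg
Running the snoop or monitor while the problem reproduces may show whether a controller (such as the agent) deleted the flows or ovs-vswitchd dropped them on its own.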
[root@cloud-sz-master-b12-01 neutron]# ovs-ofctl dump-flows provision
NXST_FLOW reply (xid=0x4):
[root@cloud-sz-master-b12-02 nova]# ovs-ofctl dump-flows provision
NXST_FLOW reply (xid=0x4):
[root@cloud-sz-master-b12-02 nova]#
[root@cloud-sz-master-b12-02 nova]#
[root@cloud-sz-master-b12-02 nova]# ip r
...
10.53.33.0/24 dev provision proto kernel scope link src 10.53.33.11
10.53.128.0/24 dev docker0 proto kernel scope link src 10.53.128.1
169.254.0.0/16 dev br-ex scope link metric 1055
169.254.0.0/16 dev provision scope link metric 1056
...
[root@cloud-sz-master-b12-02 nova]# ovs-ofctl show provision
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000248a075541e8
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(bond0): addr:24:8a:07:55:41:e8
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
2(phy-provision): addr:76:b5:88:cc:a6:74
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
LOCAL(provision): addr:24:8a:07:55:41:e8
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[root@cloud-sz-master-b12-02 nova]# ifconfig bond0
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet6 fe80::268a:7ff:fe55:41e8 prefixlen 64 scopeid 0x20<link>
ether 24:8a:07:55:41:e8 txqueuelen 1000 (Ethernet)
RX packets 93588032 bytes 39646246456 (36.9 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8655257217 bytes 27148795388 (25.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@cloud-sz-master-b12-02 nova]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 24:8a:07:55:41:e8
Active Aggregator Info:
Aggregator ID: 19
Number of ports: 2
Actor Key: 13
Partner Key: 11073
Partner Mac Address: 38:bc:01:c2:26:a1
Slave Interface: enp4s0f0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 24:8a:07:55:41:e8
Slave queue ID: 0
Aggregator ID: 19
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 24:8a:07:55:41:e8
port key: 13
port priority: 255
port number: 1
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address: 38:bc:01:c2:26:a1
oper key: 11073
port priority: 32768
port number: 43
port state: 61
Slave Interface: enp5s0f0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 24:8a:07:55:44:64
Slave queue ID: 0
Aggregator ID: 19
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 24:8a:07:55:41:e8
port key: 13
port priority: 255
port number: 2
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address: 38:bc:01:c2:26:a1
oper key: 11073
port priority: 32768
port number: 91
port state: 61
** Affects: neutron
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697243
Title:
ovs bridge flow table is dropped by unknown cause
Status in neutron:
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1697243/+subscriptions