yahoo-eng-team team mailing list archive
Message #00364
[Bug 1103967] Re: No traffic between "Network" and "Compute" (3 node setup, GRE tunnels)
Will re-open if this can later be reproduced.
** Changed in: quantum
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to quantum.
https://bugs.launchpad.net/bugs/1103967
Title:
No traffic between "Network" and "Compute" (3 node setup, GRE tunnels)
Status in OpenStack Quantum (virtual network service):
Invalid
Bug description:
Hi all,
Setup description:
3 node setup, consisting of:
(F1) -- Controller: Runs: Keystone, Swift, Glance on Swift, Nova (sans compute), Quantum server (only)
(F2) -- Compute: Runs: nova-compute, Quantum OVS plugin + agent
(F3) -- Network: Runs: Quantum OVS plugin + agent, Quantum DHCP agent.
Platform is Ubuntu 12.04. All OpenStack services except Quantum are from packages (but with custom config). OVS and Quantum pulled directly from trunk.
Problem: No GRE traffic gets through between "Compute" (F2) and
"Network" (F3).
Symptom: When instances are spun up on "Compute", I note that an IP
address is "automagically" assigned, but no traffic is exchanged on the
dedicated Data Network between "Compute" and "Network". As a result I
have a (seemingly) running VM, but with an empty console log and no way
to access it.
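(For completeness, this is the kind of check I mean by "empty console log"; the instance ID below is just a placeholder:)
nova console-log <instance-id>
# on the Compute node, the same log can also be read directly, e.g.:
# cat /var/lib/nova/instances/<instance-dir>/console.log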
Details:
Two networks are created (for two different tenants, but that's
inconsequential -- I get the same behaviour for both tenants /
networks):
"InternalNet1", was created as:
(quantum) net-create --tenant_id 9be459eedc674635bbef86a47ac8896d InternalNet1 --provider:network_type gre --provider:segmentation_id 1
(quantum) subnet-create --tenant_id 9be459eedc674635bbef86a47ac8896d --ip_version 4 InternalNet1 172.16.1.0/24 --gateway 172.16.1.1
Respectively, "SvcIntNet1" was created as:
(quantum) net-create --tenant_id 8032be08210a424687ff3622338ffe23 SvcIntNet1 --provider:network_type gre --provider:segmentation_id 999
(quantum) subnet-create --tenant_id 8032be08210a424687ff3622338ffe23 --ip_version 4 SvcIntNet1 169.254.1.0/24 --gateway 169.254.1.1
Please note that the two are assigned different tunnel IDs: 1 (0x1) and 999 (0x3e7), respectively.
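(For completeness, the provider attributes can be read back to confirm the segmentation IDs were actually stored; run with admin credentials from the quantum shell, as above:)
(quantum) net-show InternalNet1
(quantum) net-show SvcIntNet1
Both should report provider:network_type "gre" and the expected provider:segmentation_id (1 and 999, respectively).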
The remainder of this report and the attachment are from the
"Network" node. The only thing different on the Compute node (in terms
of configuration) is "local_ip" in "ovs_quantum_plugin.ini".
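(For what it's worth, that difference boils down to the single line below; the 192.168.1.253 value is only inferred from the remote_ip on port "gre-1" in the output that follows, and everything else matches the file quoted at the end of this report:)
# ovs_quantum_plugin.ini on "Compute" (F2), [OVS] section, only changed line:
local_ip=192.168.1.253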
root@F3:~# ovs-vsctl show
e26dca84-dbfc-430a-b54f-2ee076a20b32
    Bridge br-tun
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="192.168.1.253"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port "tap796f8a41-72"
            tag: 1
            Interface "tap796f8a41-72"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap906fc936-96"
            tag: 2
            Interface "tap906fc936-96"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
Notes:
"tap796f8a41-72" is the interface used as DHCP server for "SvcIntNet1" . MAC addr: fa:16:3e:41:c0:24).
"tap906fc936-96" is the interface used as DHCP server for "IntNet1". MAC addr: fa:16:3e:e8:bd:42
(Side note: Why do I have __two__ "dnsmasq" processes spinning for
each of those two interfaces i.e. total of four ?)
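(For reference, a plain process listing is the quick way to see them and which tap each one is bound to; each dnsmasq command line should carry an "--interface=tapXXXXXXXX-XX" option pointing at one of the two tap devices above:)
root@F3:~# ps -ef | grep dnsmasq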
Details about "br-tun"
root@F3:~# ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x1): dpid:0000fa996f15ac46
n_tables:255, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 2(patch-int): addr:16:ad:9d:a7:35:00
     config:     0
     state:      0
     speed: 100 Mbps now, 100 Mbps max
 4(gre-1): addr:2e:2a:fb:4d:14:54
     config:     0
     state:      0
     speed: 100 Mbps now, 100 Mbps max
 LOCAL(br-tun): addr:fa:99:6f:15:ac:46
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 100 Mbps now, 100 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0
Now, the only thing I can tell from this output is that traffic to "tap796f8a41-72" (MAC addr: fa:16:3e:41:c0:24) is internally tagged with VLAN Id "1" whereas traffic to "tap906fc936-96" (MAC addr: fa:16:3e:e8:bd:42) is internally tagged with VLAN Id "2".
Beyond that, I'm totally clueless as to whether that configuration is
sane or not.
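(In case it helps with triage: the flows that are supposed to do that VLAN-tag <-> tunnel-ID mapping can be dumped as below; on a working setup I would expect to see mod_vlan_vid / set_tunnel style actions pairing local VLAN 1 with tunnel ID 0x3e7 and local VLAN 2 with tunnel ID 0x1:)
root@F3:~# ovs-ofctl dump-flows br-tun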
Attached are the startup logs for the OVS plugin + agent (again, running
on the "Network" node).
My questions:
1) As per the above, when the instance is spun up I see that it is
somehow "automagically" assigned an IP address with no actual traffic
flowing through the Data Network (?!?!). How does that happen?
2) How can I test the GRE traffic communication between "Network" and
"Compute"? Under a "normal" (i.e. non-Quantum-driven) GRE setup with
OVS, what I normally do is assign IP addresses to the internal
interface of the "br-tun" bridge on both sides of the GRE tunnel (i.e.
on both "Controller" and "Node") and get ping traffic through, nicely
encapsulated in GRE. However, attempting to do so in this case doesn't
work (see the sketch after question 3 for what I have in mind).
3) How can I test connectivity to my instances (from the "Compute"
node, but ideally from the "Network" node)? Again, they are (apparently)
spun up, but with an empty console log (not entirely sure whether that's
a Nova issue, though...).
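(To make questions 2 and 3 concrete, this is the kind of manual test I have in mind; all addresses other than 192.168.1.x are placeholders, "eth1" stands for whichever NIC carries the Data Network, and I am not sure whether putting addresses on br-tun is even legitimate once the Quantum agent owns that bridge:)
# Question 2: plain GRE reachability test between the two tunnel endpoints.
# On "Network" (F3):
ip addr add 10.99.99.1/24 dev br-tun
ip link set br-tun up
# On "Compute" (F2):
ip addr add 10.99.99.2/24 dev br-tun
ip link set br-tun up
# Back on F3, ping the other end while watching the Data Network NIC for
# GRE-encapsulated packets (IP protocol 47):
ping -c 3 10.99.99.2
tcpdump -n -i eth1 ip proto 47
# Question 3: the tap interfaces on F3 sit directly on the tenant subnets,
# so pinging an instance's fixed IP from F3 via the matching tap should
# (in theory) work, e.g. for an instance on InternalNet1:
ping -c 3 -I tap906fc936-96 172.16.1.3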
TIA for all the help and advice,
Florian
-----
root@F3:~# cat /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[DATABASE]
sql_connection = mysql://quantumdbadmin:quantumdbadminpasswd@10.7.68.179/ovs_quantum
sql_max_retries = -1
reconnect_interval = 2
[OVS]
enable_tunneling=True
tenant_network_type=gre
tunnel_id_ranges=1:1000
# When using tunneling this should be reset from the default "default:2000:3999" to an empty list
# network_vlan_ranges =
# only if the node is running the agent -- i.e. NOT on the controller
local_ip=192.168.1.254
#
# These are the defaults. Listing them nonetheless
integration_bridge = br-int
tunnel_bridge = br-tun
#
# Peer patch port in integration bridge for tunnel bridge
int_peer_patch_port = patch-tun
# Peer patch port in tunnel bridge for integration bridge
tun_peer_patch_port = patch-int
[AGENT]
rpc = True
# Agent's polling interval in seconds
polling_interval = 2
# Use "sudo quantum-rootwrap /etc/quantum/rootwrap.conf" to use the real
# root filter facility.
# Change to "sudo" to skip the filtering and just run the command directly
root_helper = sudo
To manage notifications about this bug go to:
https://bugs.launchpad.net/quantum/+bug/1103967/+subscriptions