yahoo-eng-team team mailing list archive - Message #06011
[Bug 1224513] Re: Cannot communicate virtual and external networks with each other
Looking at this configuration I see a couple of issues; please ping me
and we can walk through this.
I am in PST (GMT-8).
Please email me or hit me up on IRC.
** Changed in: neutron
Assignee: (unassigned) => Micheal Thompson (mthompson-n)
** Changed in: neutron
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1224513
Title:
Cannot communicate virtual and external networks with each other
Status in OpenStack Neutron (virtual network service):
Invalid
Bug description:
Hello everyone,
I am trying to set up an OpenStack cloud.
The network architecture is the same as in the image at https://github-camo.global.ssl.fastly.net/61b2789b1e5fe8c5b9693bddff66b03a578d53b8/687474703a2f2f692e696d6775722e636f6d2f614a765a372e6a7067 from
this guide https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst, except that the Network Node and Controller Node are hosted on the same hardware.
I have used many different documentation sources, but none of them helped me make the external network reachable from the VMs.
I couldn't understand how traffic from the virtual networks should reach the external network. I have spent a lot of time trying to make this configuration work, but still without any good results.
My logical network architecture is the same as in the images at
http://docs.openstack.org/trunk/openstack-
network/admin/content/app_demo_routers_with_private_networks.html,
but using VLANs instead of GRE. Each tenant has its own router
connected to the external network.
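For reference, each tenant router was set up with commands roughly like
these (a sketch from memory; the real names and IDs are in the listings
further down):
quantum router-create VNet_1-R1
quantum router-interface-add VNet_1-R1 7ff97b95-d425-4288-bf03-b2b832224549   # VNet_1 subnet
quantum router-gateway-set VNet_1-R1 External-Net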
1. Cloud Controller + Network Node on the same hardware.
2. Compute Nodes.
My configuration is as follows.
Using Fedora 19, OpenStack build 2013.1.2, release 4.fc19.
I have successfully configured all the necessary services; only the external network is still not working.
Compute nodes have the following configuration:
1. Management network: p3p1 - 10.10.10.0/24
2. Configured data network:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-data
ovs-vsctl add-port br-data p3p2
3. External network to access the compute nodes from the external LAN.
ifconfig em1 10.10.109.103/24
route add default gw 10.10.109.1
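To double-check the result on a compute node I run something like this
(just a verification sketch):
# bridges and ports as seen by Open vSwitch
ovs-vsctl list-br
ovs-vsctl list-ports br-data
# addressing and default route
ip addr show em1
ip route show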
file /etc/quantum/plugin.ini:
...
[OVS]
tenant_network_type = vlan
network_vlan_ranges = default:2001:3999
integration_bridge = br-int
bridge_mappings = default:br-data
...
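After changing plugin.ini I restart the OVS agent so the new
bridge_mappings are picked up (assuming the systemd unit name matches
the service name reported by openstack-status below):
systemctl restart quantum-openvswitch-agent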
The Network Node has the following network interfaces configured:
1. Management network: p3p1 - 10.10.10.0/24
2. Configured data network:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-data
ovs-vsctl add-port br-data p3p2
3. Configured external network:
ovs-vsctl add-br br-gdc
ovs-vsctl add-port br-gdc em1
ifconfig br-gdc 10.10.109.102/24
route add default gw 10.10.109.1
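A quick sanity check that br-gdc really carries the node's external
address and can reach the upstream gateway (just how I verify it):
ip addr show br-gdc
ping -c 3 10.10.109.1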
file /etc/quantum/plugin.ini:
...
[OVS]
tenant_network_type = vlan
network_vlan_ranges = ext:78:78,default:2002:3999
integration_bridge = br-int
bridge_mappings = ext:br-gdc,default:br-data
...
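For completeness, the external quantum network (External-Net in the
listings below) was created to map onto the "ext" physical network from
bridge_mappings, roughly like this (a sketch; the exact provider values
I used may be wrong, which could be part of the problem):
quantum net-create External-Net --router:external=True \
    --provider:network_type vlan \
    --provider:physical_network ext \
    --provider:segmentation_id 78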
The external network (NIC em1 on each node) is 10.10.109.0/24.
All the machines are connected to one Cisco 2950 switch.
NICs p3p2 (br-data) are on trunk ports with VLAN IDs 2001-3999.
NICs em1 are tagged by the Cisco switch with VLAN tag 78.
NICs p3p1 are also on the Cisco switch but are used only for internal communication (10.10.10.0/24).
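To see whether frames on em1 actually arrive with an 802.1Q tag or
untagged (I am not sure which way the Cisco port is set up), a short
capture like this should tell (debugging sketch):
tcpdump -e -nn -i em1 -c 20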
Running services on the Controller/Network Node:
[root@lab01 ostack]# openstack-status
== Nova services ==
openstack-nova-api: active
openstack-nova-cert: inactive (disabled on boot)
openstack-nova-compute: inactive (disabled on boot)
openstack-nova-network: inactive (disabled on boot)
openstack-nova-scheduler: active
openstack-nova-volume: inactive (disabled on boot)
openstack-nova-conductor: active
== Glance services ==
openstack-glance-api: active
openstack-glance-registry: active
== Keystone service ==
openstack-keystone: active
== Horizon service ==
openstack-dashboard: active
== Quantum services ==
quantum-server: active
quantum-dhcp-agent: active
quantum-l3-agent: active
quantum-linuxbridge-agent: inactive (disabled on boot)
quantum-openvswitch-agent: active
openvswitch: active
== Cinder services ==
openstack-cinder-api: active
openstack-cinder-scheduler: active
openstack-cinder-volume: inactive (disabled on boot)
== Support services ==
mysqld: active
httpd: active
libvirtd: inactive (disabled on boot)
qpidd: active
memcached: active
Running services on a Compute Node:
[root@lab03 ~]# openstack-status
== Nova services ==
openstack-nova-api: inactive (disabled on boot)
openstack-nova-cert: inactive (disabled on boot)
openstack-nova-compute: active
openstack-nova-network: inactive (disabled on boot)
openstack-nova-scheduler: inactive (disabled on boot)
openstack-nova-volume: inactive (disabled on boot)
openstack-nova-conductor: inactive (disabled on boot)
== Quantum services ==
quantum-server: inactive (disabled on boot)
quantum-dhcp-agent: inactive (disabled on boot)
quantum-l3-agent: inactive (disabled on boot)
quantum-linuxbridge-agent: inactive (disabled on boot)
quantum-openvswitch-agent: active
openvswitch: active
== Cinder services ==
openstack-cinder-api: inactive (disabled on boot)
openstack-cinder-scheduler: inactive (disabled on boot)
openstack-cinder-volume: inactive
== Support services ==
mysqld: inactive (disabled on boot)
libvirtd: active
tgtd: active
Network configuration on Compute Nodes:
[root@lab03 network-scripts]# for f in `ls ifcfg-*`; do echo $f; cat $f; echo; done
ifcfg-br-data
DEVICE=br-data
TYPE=OVSBridge
DEVICETYPE=ovs
BOOTPROTO=static
ONBOOT=yes
IPV6INIT=no
ifcfg-em1
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
DEVICE=em1
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes
IPADDR=10.10.109.53
NETMASK=255.255.255.0
GATEWAY=10.10.109.1
DNS1=10.10.111.100
ifcfg-p3p1
PEERROUTES="yes"
DEVICE=p3p1
BOOTPROTO="static"
TYPE="Ethernet"
ONBOOT="yes"
IPADDR=10.10.10.3
NETMASK=255.255.255.0
ifcfg-p3p2
DEVICE=p3p2
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-data
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
IPV6INIT=no
Network configuration on Network/Controller Node:
[root@lab01 network-scripts]# for f in `ls ifcfg-*`; do echo $f; cat $f; echo; done
ifcfg-br-data
DEVICE=br-data
TYPE=OVSBridge
DEVICETYPE=ovs
BOOTPROTO=static
ONBOOT=yes
IPV6INIT=no
ifcfg-br-gdc
DEVICE=br-gdc
TYPE=OVSBridge
DEVICETYPE=ovs
BOOTPROTO=static
ONBOOT=yes
IPV6INIT=no
IPADDR=10.10.109.102
NETMASK=255.255.255.0
GATEWAY=10.10.109.1
DNS1=10.10.111.100
DNS2=10.10.111.101
ifcfg-em1
DEVICE=em1
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-gdc
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
IPV6INIT=no
ifcfg-p3p1
PEERROUTES="yes"
DEVICE=p3p1
BOOTPROTO="static"
TYPE="Ethernet"
ONBOOT="yes"
IPADDR=10.10.10.1
NETMASK=255.255.255.0
ifcfg-p3p2
DEVICE=p3p2
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-data
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
IPV6INIT=no
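After editing these ifcfg files I reload them with the legacy network
service (assuming that is what manages them here, since
NM_CONTROLLED=no):
systemctl restart network.service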
Network Node:
[root@lab01 quantum]# ovs-vsctl show
044dcb03-dea2-4b4d-8190-ca05948dc60c
Bridge br-gdc
Port br-gdc
Interface br-gdc
type: internal
Port "em1"
Interface "em1"
Port phy-br-gdc
Interface phy-br-gdc
Bridge br-data
Port "tap4678330f-d7"
Interface "tap4678330f-d7"
Port phy-br-data
Interface phy-br-data
Port "p3p2"
Interface "p3p2"
Port br-data
Interface br-data
type: internal
Port "tapbe6a8d2d-94"
Interface "tapbe6a8d2d-94"
Bridge br-int
Port "tap4cd70c2c-05"
tag: 11
Interface "tap4cd70c2c-05"
Port br-int
Interface br-int
type: internal
Port int-br-gdc
Interface int-br-gdc
Port "tapb81b0887-cf"
tag: 10
Interface "tapb81b0887-cf"
Port int-br-data
Interface int-br-data
Port "tapaae5297c-3f"
tag: 10
Interface "tapaae5297c-3f"
ovs_version: "1.10.0"
Compute Node:
[root@lab03 ~]# ovs-vsctl show
8937e062-6284-4ebf-b898-e3732d213f76
Bridge br-data
Port phy-br-data
Interface phy-br-data
Port br-data
Interface br-data
type: internal
Port "p3p2"
Interface "p3p2"
Bridge br-int
Port br-int
Interface br-int
type: internal
Port int-br-data
Interface int-br-data
Port "tap29fe8f5b-e1"
tag: 3
Interface "tap29fe8f5b-e1"
Port "tap2247c801-74"
tag: 4
Interface "tap2247c801-74"
Port "tape71ddfa7-00"
tag: 4
Interface "tape71ddfa7-00"
Port "tapc8ee90bb-a5"
tag: 3
Interface "tapc8ee90bb-a5"
ovs_version: "1.10.0"
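When the bridges look right but traffic still does not pass, I also
dump the OpenFlow rules that the OVS agent programs for VLAN
translation on the Network Node (debugging sketch):
ovs-ofctl dump-flows br-int
ovs-ofctl dump-flows br-data
ovs-ofctl dump-flows br-gdc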
Quantum networks
(quantum) port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 04e05eee-903b-47d2-bdc9-549dc9c5b1d3 | | fa:16:3e:67:c0:9f | {"subnet_id": "8fb848b8-1684-4c2b-b6ba-22f978a97181", "ip_address": "10.10.111.200"} |
| 504416ad-4746-479a-b776-d46ec4ae6c64 | | fa:16:3e:31:62:07 | {"subnet_id": "7ff97b95-d425-4288-bf03-b2b832224549", "ip_address": "111.0.0.1"} |
| a47162cd-4ddc-4625-b880-a25a1f4e0c7d | | fa:16:3e:4b:a4:ee | {"subnet_id": "4d995b5a-8418-4e55-aa82-97a4cb90d512", "ip_address": "10.1.1.1"} |
| d37e2c01-b1fc-4e65-8931-ba06cce6ecb4 | | fa:16:3e:cd:b9:11 | {"subnet_id": "8fb848b8-1684-4c2b-b6ba-22f978a97181", "ip_address": "10.10.111.201"} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
(quantum) net-list
+--------------------------------------+--------------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+--------------+-----------------------------------------------------+
| 487d375c-0b68-42d9-9de0-5c7457d71b0e | VNet_1 | 7ff97b95-d425-4288-bf03-b2b832224549 111.0.0.0/24 |
| 93d64286-1be7-4615-81c2-483635193237 | VNet_2 | 4d995b5a-8418-4e55-aa82-97a4cb90d512 10.1.1.0/24 |
| c43dba0d-08d0-496d-9ca4-a33cbecc371f | External-Net | 8fb848b8-1684-4c2b-b6ba-22f978a97181 10.10.111.0/24 |
+--------------------------------------+--------------+-----------------------------------------------------+
(quantum) router-list
+--------------------------------------+-----------+--------------------------------------------------------+
| id | name | external_gateway_info |
+--------------------------------------+-----------+--------------------------------------------------------+
| 562f9838-d08a-4666-ad84-b4b41f2acc99 | VNet_1-R1 | {"network_id": "c43dba0d-08d0-496d-9ca4-a33cbecc371f"} |
| bcc6f9b3-d04e-4076-bbae-1da0f9cfc635 | VNet_2-R1 | {"network_id": "c43dba0d-08d0-496d-9ca4-a33cbecc371f"} |
+--------------------------------------+-----------+--------------------------------------------------------+
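Two more checks on the Network Node, using the UUIDs from the listings
above (a sketch; it assumes the l3-agent runs with network namespaces
enabled):
# how External-Net is mapped onto the physical network
quantum net-show c43dba0d-08d0-496d-9ca4-a33cbecc371f
# the namespace and routes of the VNet_1 router
ip netns list
ip netns exec qrouter-562f9838-d08a-4666-ad84-b4b41f2acc99 ip route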
All the internal/virtual networks are working successfully.
Subnets 111.0.0.0/24 and 10.1.1.0/24 are virtual and must have access to the outside world via 10.10.111.0/24.
I have tried many OVS bridge configurations on the Network Node for the
br-gdc interface, but none of them work. Maybe something should be
tuned on the Cisco switch (I haven't worked with Cisco much).
Please analyze my configuration; maybe you know the solution for
external access or have similar configs in your clouds.
Config files of nova and quantum from the Network Node are attached. I
am able to provide any required information.
Thank you in advance,
Dmitriy
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1224513/+subscriptions