openstack team mailing list archive

Re: VM doesn't get IP

 

As long as there is IP connectivity between the networks, it doesn't matter
that the two local_ip endpoints are on different subnets when you are using
tunnels.
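
For example, a quick sketch to confirm that connectivity, using the local_ip
values quoted below: from the network+controller node run

ping -c 3 192.168.3.3

and from the compute node

ping -c 3 10.10.10.1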


On Sat, Feb 23, 2013 at 8:09 AM, Rahul Sharma <rahulsharmaait@xxxxxxxxx> wrote:

> In one config you have specified local_ip as 10.10.10.1, and in the other as
> 192.168.3.3. Shouldn't they belong to the same network? As per the doc,
> it should be 10.10.10.3? Plus, both of these belong to the data network, which
> carries compute-network communication, not controller-network communication.
>
> -Regards
> Rahul
>
>
> On Sat, Feb 23, 2013 at 12:53 AM, Aaron Rosen <arosen@xxxxxxxxxx> wrote:
>
>> From the network+controller node, can you ping 192.168.3.3 (just to
>> confirm there is IP connectivity between those hosts)?
>>
>> Your configs look fine to me. The issue you are having is that your
>> network+controller node doesn't have a tunnel to your HV node. I'd suggest
>> restarting the quantum-plugin-openvswitch-agent service on both nodes to see
>> if that does the trick and gets the agent to add the tunnel for you. Perhaps
>> you edited this file and didn't restart the agent?
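>>
>> For example, a sketch using the upstart service name that appears later in
>> this thread (run on both nodes):
>>
>> service quantum-plugin-openvswitch-agent restart
>> ovs-dpctl show
>>
>> After the restart, br-tun on each node should list a gre port pointing at
>> the other node's local_ip.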
>>
>> Aaron
>>
>> On Fri, Feb 22, 2013 at 10:55 AM, Guilherme Russi <
>> luisguilherme.cr@xxxxxxxxx> wrote:
>>
>>> Here is my controller + network node:
>>>
>>> cat /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
>>> [DATABASE]
>>> # This line MUST be changed to actually run the plugin.
>>> # Example:
>>> # sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum
>>> # Replace 127.0.0.1 above with the IP address of the database used by the
>>> # main quantum server. (Leave it as is if the database runs on this host.)
>>> sql_connection = mysql://quantum:password@localhost:3306/quantum
>>> # Database reconnection retry times - in event connectivity is lost
>>> # set to -1 implies an infinite retry count
>>> # sql_max_retries = 10
>>> # Database reconnection interval in seconds - in event connectivity is lost
>>> reconnect_interval = 2
>>>
>>> [OVS]
>>> # (StrOpt) Type of network to allocate for tenant networks. The
>>> # default value 'local' is useful only for single-box testing and
>>> # provides no connectivity between hosts. You MUST either change this
>>> # to 'vlan' and configure network_vlan_ranges below or change this to
>>> # 'gre' and configure tunnel_id_ranges below in order for tenant
>>> # networks to provide connectivity between hosts. Set to 'none' to
>>> # disable creation of tenant networks.
>>> #
>>> # Default: tenant_network_type = local
>>> # Example: tenant_network_type = gre
>>> tenant_network_type = gre
>>>
>>> # (ListOpt) Comma-separated list of
>>> # <physical_network>[:<vlan_min>:<vlan_max>] tuples enumerating ranges
>>> # of VLAN IDs on named physical networks that are available for
>>> # allocation. All physical networks listed are available for flat and
>>> # VLAN provider network creation. Specified ranges of VLAN IDs are
>>> # available for tenant network allocation if tenant_network_type is
>>> # 'vlan'. If empty, only gre and local networks may be created.
>>> #
>>> # Default: network_vlan_ranges =
>>> # Example: network_vlan_ranges = physnet1:1000:2999
>>>
>>> # (BoolOpt) Set to True in the server and the agents to enable support
>>> # for GRE networks. Requires kernel support for OVS patch ports and
>>> # GRE tunneling.
>>> #
>>> # Default: enable_tunneling = False
>>> enable_tunneling = True
>>>
>>> # (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples
>>> # enumerating ranges of GRE tunnel IDs that are available for tenant
>>> # network allocation if tenant_network_type is 'gre'.
>>> #
>>> # Default: tunnel_id_ranges =
>>> # Example: tunnel_id_ranges = 1:1000
>>> tunnel_id_ranges = 1:1000
>>>
>>> # Do not change this parameter unless you have a good reason to.
>>> # This is the name of the OVS integration bridge. There is one per hypervisor.
>>> # The integration bridge acts as a virtual "patch bay". All VM VIFs are
>>> # attached to this bridge and then "patched" according to their network
>>> # connectivity.
>>> #
>>> # Default: integration_bridge = br-int
>>> integration_bridge = br-int
>>>
>>> # Only used for the agent if tunnel_id_ranges (above) is not empty for
>>> # the server.  In most cases, the default value should be fine.
>>> #
>>> # Default: tunnel_bridge = br-tun
>>> tunnel_bridge = br-tun
>>>
>>> # Uncomment this line for the agent if tunnel_id_ranges (above) is not
>>> # empty for the server. Set local-ip to be the local IP address of
>>> # this hypervisor.
>>> #
>>> # Default: local_ip =
>>> local_ip = 10.10.10.1
>>>
>>>
>>> And here is my compute node:
>>>
>>> cat /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
>>> [DATABASE]
>>> # This line MUST be changed to actually run the plugin.
>>> # Example:
>>> # sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum
>>> # Replace 127.0.0.1 above with the IP address of the database used by the
>>> # main quantum server. (Leave it as is if the database runs on this host.)
>>> sql_connection = mysql://quantum:password@192.168.3.1:3306/quantum
>>> # Database reconnection retry times - in event connectivity is lost
>>> # set to -1 implies an infinite retry count
>>> # sql_max_retries = 10
>>> # Database reconnection interval in seconds - in event connectivity is lost
>>> reconnect_interval = 2
>>>
>>> [OVS]
>>> # (StrOpt) Type of network to allocate for tenant networks. The
>>> # default value 'local' is useful only for single-box testing and
>>> # provides no connectivity between hosts. You MUST either change this
>>> # to 'vlan' and configure network_vlan_ranges below or change this to
>>> # 'gre' and configure tunnel_id_ranges below in order for tenant
>>> # networks to provide connectivity between hosts. Set to 'none' to
>>> # disable creation of tenant networks.
>>> #
>>> # Default: tenant_network_type = local
>>> # Example: tenant_network_type = gre
>>> tenant_network_type = gre
>>>
>>> # (ListOpt) Comma-separated list of
>>> # <physical_network>[:<vlan_min>:<vlan_max>] tuples enumerating ranges
>>> # of VLAN IDs on named physical networks that are available for
>>> # allocation. All physical networks listed are available for flat and
>>> # VLAN provider network creation. Specified ranges of VLAN IDs are
>>> # available for tenant network allocation if tenant_network_type is
>>> # 'vlan'. If empty, only gre and local networks may be created.
>>> #
>>> # Default: network_vlan_ranges =
>>> # Example: network_vlan_ranges = physnet1:1000:2999
>>>
>>> # (BoolOpt) Set to True in the server and the agents to enable support
>>> # for GRE networks. Requires kernel support for OVS patch ports and
>>> # GRE tunneling.
>>> #
>>> # Default: enable_tunneling = False
>>> enable_tunneling = True
>>>
>>> # (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples
>>> # enumerating ranges of GRE tunnel IDs that are available for tenant
>>> # network allocation if tenant_network_type is 'gre'.
>>> #
>>> # Default: tunnel_id_ranges =
>>> # Example: tunnel_id_ranges = 1:1000
>>> tunnel_id_ranges = 1:1000
>>>
>>> # Do not change this parameter unless you have a good reason to.
>>> # This is the name of the OVS integration bridge. There is one per hypervisor.
>>> # The integration bridge acts as a virtual "patch bay". All VM VIFs are
>>> # attached to this bridge and then "patched" according to their network
>>> # connectivity.
>>> #
>>> # Default: integration_bridge = br-int
>>> integration_bridge = br-int
>>>
>>> # Only used for the agent if tunnel_id_ranges (above) is not empty for
>>> # the server.  In most cases, the default value should be fine.
>>> #
>>> # Default: tunnel_bridge = br-tun
>>> tunnel_bridge = br-tun
>>>
>>> # Uncomment this line for the agent if tunnel_id_ranges (above) is not
>>> # empty for the server. Set local-ip to be the local IP address of
>>> # this hypervisor.
>>> #
>>> # Default: local_ip = 10.0.0.3
>>> local_ip = 192.168.3.3
>>>
>>> The 10.10.10.1 address is the data network IP of the network controller
>>> (following the tutorial), and it is on eth0:0 (I'm not quite sure, but I
>>> guess the VMs' data traffic should go through this IP).
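>>>
>>> For reference, a sketch of what that eth0:0 alias might look like in
>>> /etc/network/interfaces on the controller (the netmask here is an
>>> assumption, not taken from this thread):
>>>
>>> auto eth0:0
>>> iface eth0:0 inet static
>>>     address 10.10.10.1
>>>     netmask 255.255.255.0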
>>>
>>>
>>>
>>>
>>>
>>> 2013/2/22 Aaron Rosen <arosen@xxxxxxxxxx>
>>>
>>>> Running with two NICs should be fine for tunneling, as IP routing will
>>>> handle which NIC the packets go out. From what you pasted I see that one
>>>> HV has a GRE tunnel set up to 10.10.10.1 <-- who is that host? Can you
>>>> attach your /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini from
>>>> your nodes?
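>>>>
>>>> (A quick check, to see which NIC and source address the kernel would use
>>>> to reach a given tunnel endpoint; 10.10.10.1 here is the endpoint from the
>>>> output you pasted:)
>>>>
>>>> ip route get 10.10.10.1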
>>>>
>>>> I suspect the issue is a configuration issue in the [OVS] section.
>>>> You'll need something along the lines of this in that section:
>>>> [OVS]
>>>> local_ip = <ip address of HV>
>>>> enable_tunneling = True
>>>> tunnel_id_ranges = 1:1000
>>>> tenant_network_type = gre
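>>>>
>>>> Each agent registers its local_ip with the quantum server, and the agents
>>>> then create GRE ports on br-tun toward the other registered endpoints, so
>>>> local_ip must be reachable from every other node. A hedged way to verify
>>>> after restarting the agents:
>>>>
>>>> ovs-vsctl show    # br-tun should list a gre port per remote local_ip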
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, Feb 22, 2013 at 9:54 AM, Guilherme Russi <
>>>> luisguilherme.cr@xxxxxxxxx> wrote:
>>>>
>>>>> Hello Aaron,
>>>>>
>>>>>  Sorry about attaching the info. About the quantum agent, is it the
>>>>> quantum-plugin-openvswitch-agent? If it is, the job is already running on
>>>>> the controller and the compute nodes:
>>>>>
>>>>> service quantum-plugin-openvswitch-agent start
>>>>> start: Job is already running: quantum-plugin-openvswitch-agent
>>>>>
>>>>>  Is there anything else I should do? I'm running my controller node
>>>>> and the network node on the same machine with 2 NICs; could the problem
>>>>> be how I set up my network config?
>>>>>
>>>>> Thanks again.
>>>>>
>>>>> Guilherme.
>>>>>
>>>>>
>>>>> 2013/2/22 Aaron Rosen <arosen@xxxxxxxxxx>
>>>>>
>>>>>> Hi Guilherme,
>>>>>>
>>>>>> (Next time, please paste these in the email rather than attaching them,
>>>>>> thx.)
>>>>>>
>>>>>> From the text in the attachment (shown below), it seems like you are not
>>>>>> running the quantum-openvswitch-agent on your network node, as there is
>>>>>> no GRE tunnel from it to your compute node. Once you have
>>>>>> quantum-openvswitch-agent running on all your machines, you should be
>>>>>> able to run ovs-dpctl, look under br-tun, and see a tunnel between each
>>>>>> host.
>>>>>>
>>>>>> Aaron
>>>>>>
>>>>>> CONTROLLER + NETWORK NODE:
>>>>>> system@br-tun:
>>>>>>     lookups: hit:0 missed:0 lost:0
>>>>>>     flows: 0
>>>>>>     port 0: br-tun (internal)
>>>>>>     port 1: patch-int (patch: peer=patch-tun)
>>>>>> system@br-int:
>>>>>>     lookups: hit:0 missed:0 lost:0
>>>>>>     flows: 0
>>>>>>     port 0: br-int (internal)
>>>>>>     port 1: tap817d2f70-a0 (internal)
>>>>>>     port 2: qr-ea64e9aa-31 (internal)
>>>>>>     port 3: patch-tun (patch: peer=patch-int)
>>>>>> system@br-ex:
>>>>>>     lookups: hit:0 missed:0 lost:0
>>>>>>     flows: 0
>>>>>>     port 0: br-ex (internal)
>>>>>>     port 2: qg-95fe3fa1-d1 (internal)
>>>>>>
>>>>>>
>>>>>> COMPUTE NODES
>>>>>>
>>>>>> COMPUTE NODE 01:
>>>>>> ovs-dpctl show
>>>>>> system@br-int:
>>>>>>     lookups: hit:380 missed:7590 lost:0
>>>>>>     flows: 0
>>>>>>     port 0: br-int (internal)
>>>>>>     port 2: patch-tun (patch: peer=patch-int)
>>>>>>     port 3: qvo981ae82e-d4
>>>>>>     port 6: qvoc9df3a96-5f
>>>>>>     port 7: qvoc153ac28-ae
>>>>>>     port 8: qvo722a5d05-e4
>>>>>> system@br-tun:
>>>>>>     lookups: hit:381 missed:7589 lost:0
>>>>>>     flows: 0
>>>>>>     port 0: br-tun (internal)
>>>>>>     port 1: patch-int (patch: peer=patch-tun)
>>>>>>     port 2: gre-1 (gre: key=flow, remote_ip=10.10.10.1)
>>>>>>
>>>>>>
>>>>>> On Fri, Feb 22, 2013 at 8:47 AM, Guilherme Russi <
>>>>>> luisguilherme.cr@xxxxxxxxx> wrote:
>>>>>>
>>>>>>> So guys, any idea what I am missing?
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> 2013/2/22 Guilherme Russi <luisguilherme.cr@xxxxxxxxx>
>>>>>>>
>>>>>>>> Hello Aaron,
>>>>>>>>
>>>>>>>>  Here are the outputs.
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks.
>>>>>>>>
>>>>>>>> Guilherme.
>>>>>>>>
>>>>>>>>
>>>>>>>> 2013/2/21 Aaron Rosen <arosen@xxxxxxxxxx>
>>>>>>>>
>>>>>>>>> The output of the following would be a good start:
>>>>>>>>>
>>>>>>>>> quantum net-list
>>>>>>>>> quantum port-list
>>>>>>>>> ovs-dpctl show (on all nodes)
>>>>>>>>>
>>>>>>>>> Also make sure the quantum-dhcp-agent is running on your network
>>>>>>>>> node.
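>>>>>>>>>
>>>>>>>>> (A quick way to check that on the network node; a sketch assuming the
>>>>>>>>> upstart packaging used elsewhere in this thread:)
>>>>>>>>>
>>>>>>>>> service quantum-dhcp-agent status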
>>>>>>>>>
>>>>>>>>> Aaron
>>>>>>>>>
>>>>>>>>> On Thu, Feb 21, 2013 at 11:23 AM, Guilherme Russi <
>>>>>>>>> luisguilherme.cr@xxxxxxxxx> wrote:
>>>>>>>>>
>>>>>>>>>> Sorry about that. I'm using the Folsom release with Quantum, and I'm
>>>>>>>>>> installing the controller node and the network node on the same
>>>>>>>>>> physical machine, following this tutorial:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> http://docs.openstack.org/folsom/basic-install/content/basic-install_controller.html
>>>>>>>>>>
>>>>>>>>>> Which config files do you need?
>>>>>>>>>>
>>>>>>>>>> Thanks.
>>>>>>>>>>
>>>>>>>>>> Guilherme.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2013/2/21 Aaron Rosen <arosen@xxxxxxxxxx>
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> You'll have to provide more information than this for anyone to
>>>>>>>>>>> help you: i.e. are you using quantum or nova-network? If you're
>>>>>>>>>>> using quantum, which plugin? Config files, etc.
>>>>>>>>>>>
>>>>>>>>>>> Aaron
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Feb 21, 2013 at 11:13 AM, Guilherme Russi <
>>>>>>>>>>> luisguilherme.cr@xxxxxxxxx> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hello guys,
>>>>>>>>>>>>
>>>>>>>>>>>>  I'm having a problem with my VMs' creation: they don't get an IP.
>>>>>>>>>>>> The log shows:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Starting network...
>>>>>>>>>>>> udhcpc (v1.18.5) started
>>>>>>>>>>>> Sending discover...
>>>>>>>>>>>> Sending discover...
>>>>>>>>>>>> Sending discover...
>>>>>>>>>>>> No lease, failing
>>>>>>>>>>>> WARN: /etc/rc3.d/S40-network failed
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>  Do you have any idea how I can solve it?
>>>>>>>>>>>>
>>>>>>>>>>>> Thank you so much.
>>>>>>>>>>>>
>>>>>>>>>>>> Guilherme.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>>
>>
>
