
openstack team mailing list archive

Fwd: Fwd: Initial quantum network state broken

 

Hey Anil, thanks for responding.  Here's the output:

root@kvm-cs-sn-10i:/var/lib/nova/instances# ovs-vsctl show
9d9f7949-2b80-40c8-a9e0-6a116200ed96
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eth1"
            Interface "int-br-eth1"
    Bridge "br-eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "eth1"
            Interface "eth1"
    ovs_version: "1.4.3"

root@kvm-cs-sn-10i:/var/lib/nova/instances# ovs-dpctl show
system@br-eth1:
        lookups: hit:5227 missed:24022 lost:0
        flows: 1
        port 0: br-eth1 (internal)
        port 1: eth1
        port 6: phy-br-eth1
system@br-int:
        lookups: hit:2994 missed:13754 lost:0
        flows: 1
        port 0: br-int (internal)
        port 2: int-br-eth1

root@kvm-cs-sn-10i:/var/lib/nova/instances# brctl show
bridge name        bridge id          STP enabled    interfaces
br-eth1            0000.bc305befedd1  no             eth1
                                                     phy-br-eth1
br-int             0000.8ae31e5f7941  no             int-br-eth1
qbr5334a0cb-64     8000.76fb293fe9cf  no             qvb5334a0cb-64
                                                     vnet0

root@kvm-cs-sn-10i:/var/lib/nova/instances# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=75251.156s, table=0, n_packets=16581, n_bytes=3186436, priority=2,in_port=2 actions=drop
 cookie=0x0, duration=75251.527s, table=0, n_packets=0, n_bytes=0, priority=1 actions=NORMAL
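
For what it's worth, the only non-default flow on br-int is that priority=2
drop on in_port=2 (int-br-eth1), so everything arriving from the physical
bridge is being discarded.  If I understand the OVS plugin right, a healthy
agent should install higher-priority VLAN-translation flows above that drop
rule.  A quick check (the service name assumes the Ubuntu Folsom packages)
might be:

    service quantum-plugin-openvswitch-agent restart
    ovs-ofctl dump-flows br-int     # look for mod_vlan_vid entries above the drop
    ovs-ofctl dump-flows br-eth1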

Thanks!  I have until tomorrow to get this working; after that, my boss has
mandated that I try CloudStack.  Argh, but I'm so close!


On Wed, Feb 20, 2013 at 10:29 AM, Anil Vishnoi <vishnoianil@xxxxxxxxx> wrote:

> Hi Greg,
>
> Can you paste the output of the following commands from your compute node?
>
> ovs-vsctl show
> ovs-dpctl show
> brctl show
>
> ovs-ofctl dump-flows br-int
> ovs-ofctl dump-flows br-eth1
>
> Because I think the first issue we need to resolve here is why the DHCP
> packets are not leaving your compute host.
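>
> (For the DHCP check specifically, a filter on the BOOTP/DHCP ports keeps
> the noise down -- eth1 is only my guess at your data interface:
>
>     tcpdump -n -e -i eth1 udp port 67 or udp port 68
>
> The -e flag prints the link-level headers, so it also shows whether the
> requests leave tagged with a VLAN id.)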
>
> Anil
>
>
> On Wed, Feb 20, 2013 at 6:48 AM, Greg Chavez <greg.chavez@xxxxxxxxx> wrote:
>
>>
>>
>> From my perspective, it seems that the OVS bridges are not being brought
>> up correctly.  As you can see in my earlier post, the integration bridge
>> (br-int) and the physical interface bridges (br-ex and br-eth1) are down.
>> I've tried bringing br-int up in promiscuous mode, and br-ex and br-eth1
>> up with their physical interfaces ported to the bridges.  Unfortunately
>> I've had no luck.
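>>
>> (Concretely, what I ran was along these lines -- these only change the
>> Linux device state of the OVS internal ports, not the OpenFlow wiring:
>>
>>     ip link set br-int promisc on
>>     ip link set br-int up
>>     ip link set br-eth1 up
>>
>> -- with no effect on the traffic.)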
>>
>> It seems that nothing is getting past br-int.  I can see BOOTP packets on
>> the VM side of br-int, and I can see VTP packets on the physical side of
>> br-int.  But that's where it ends.
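>>
>> (Watching the port counters while the VM retries, e.g. running
>>
>>     ovs-ofctl dump-ports br-int
>>
>> a few seconds apart, should show on exactly which port the counters stop
>> climbing.)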
>>
>> For example, when I reboot my VM, I see this:
>>
>> root@kvm-cs-sn-10i:/var/lib/nova/instances# tcpdump -i qvo5334a0cb-64
>>
>> tcpdump: WARNING: qvo5334a0cb-64: no IPv4 address assigned
>> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
>> listening on qvo5334a0cb-64, link-type EN10MB (Ethernet), capture size 65535 bytes
>>
>> 13:42:08.099061 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:06:48:09 (oui Unknown), length 280
>> 13:42:08.101675 IP6 :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
>> 13:42:08.161728 IP6 :: > ff02::1:ff06:4809: ICMP6, neighbor solicitation, who has fe80::f816:3eff:fe06:4809, length 24
>> 13:42:08.373745 IP6 :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
>> 13:42:11.102528 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:06:48:09 (oui Unknown), length 280
>> 13:42:14.105850 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:06:48:09 (oui Unknown), length 280
>>
>> But that's as far as it goes.  The DHCP agent never gets these requests.
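>>
>> (On the network node, if namespaces are enabled, the DHCP agent listens
>> inside a qdhcp-<network-id> namespace, <network-id> being whatever
>> "quantum net-list" reports, so the capture on that end would be roughly:
>>
>>     ip netns list
>>     ip netns exec qdhcp-<network-id> tcpdump -n -i any udp port 67
>>
>> I don't know whether namespaces are actually on in this setup, though.)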
>>
>> I've tried deleting and recreating the bridges and rebooting the systems,
>> but nothing seems to work.  Maybe it just takes the right combination of
>> things; I don't know.
>>
>> Help!
>>
>>
>> On Tue, Feb 19, 2013 at 5:23 AM, Sylvain Bauza <sylvain.bauza@xxxxxxxxxxxx> wrote:
>>
>>>  Hi Greg,
>>>
>>> I did have trouble with DHCP assignment (see my previous post to this
>>> list), which I fixed by deleting the OVS bridges on the network node,
>>> recreating them, and restarting the OVS plugin and the L3/DHCP agents
>>> (which were all on the same physical node).
>>> Maybe it helps.
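>>>
>>> Roughly, on the network node (the service names assume the Ubuntu
>>> packages):
>>>
>>>     ovs-vsctl del-br br-int && ovs-vsctl add-br br-int
>>>     service quantum-plugin-openvswitch-agent restart
>>>     service quantum-dhcp-agent restart
>>>     service quantum-l3-agent restart
>>>
>>> The agents recreate their ports on br-int when they restart.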
>>>
>>> Anyway, when DHCP'ing from your VM (asking for an IP), could you please
>>> tcpdump:
>>> 1. your virtual network interface on compute node
>>> 2. your physical network interface on compute node
>>> 3. your physical network interface on network node
>>>
>>> and check for BOOTP/DHCP packets?
>>> On the physical layer, you should see GRE packets (provided you
>>> correctly followed the guide mentioned above) encapsulating your
>>> BOOTP/DHCP packets.
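>>>
>>> A capture filter such as
>>>
>>>     tcpdump -n -i ethX ip proto 47    # 47 = GRE; ethX is a placeholder
>>>
>>> should show the tunnel traffic -- but only if your setup really uses GRE;
>>> with VLAN-based provider networks you would instead see 802.1Q-tagged
>>> frames.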
>>>
>>> If that's OK, could you please issue the commands below (on the network
>>> node):
>>>  - brctl show
>>>  - ip a
>>>  - ovs-vsctl show
>>>  - route -n
>>>
>>> Thanks,
>>> -Sylvain
>>>
>>> On 19/02/2013 00:55, Greg Chavez wrote:
>>>
>>> This is the third time I'm replying to my own message.  It seems the
>>> initial network state is a problem for many first-time openstackers.
>>> Surely someone here can assist me.  I'm running out of time to make this
>>> work.  Thanks.
>>>
>>>
>>> On Sun, Feb 17, 2013 at 3:08 AM, Greg Chavez <greg.chavez@xxxxxxxxx> wrote:
>>>
>>>> I'm replying to my own message because I'm desperate.  My network
>>>> situation is a mess.  I need to add this as well: my bridge interfaces are
>>>> all down.  On my compute node:
>>>>
>>>> root@kvm-cs-sn-10i:/var/lib/nova/instances/instance-00000005# ip addr show | grep ^[0-9]
>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>>>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>>>> 4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>>>> 5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>>>> 9: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>>> 10: br-eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>>> 13: phy-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>>>> 14: int-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>>>> 15: qbre56c5d9e-b6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>>>> 16: qvoe56c5d9e-b6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>>>> 17: qvbe56c5d9e-b6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbre56c5d9e-b6 state UP qlen 1000
>>>> 19: qbrb805a9c9-11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>>>> 20: qvob805a9c9-11: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>>>> 21: qvbb805a9c9-11: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbrb805a9c9-11 state UP qlen 1000
>>>> 34: qbr2b23c51f-02: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>>>> 35: qvo2b23c51f-02: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>>>> 36: qvb2b23c51f-02: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr2b23c51f-02 state UP qlen 1000
>>>> 37: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr2b23c51f-02 state UNKNOWN qlen 500
>>>>
>>>>  And on my network node:
>>>>
>>>> root@knet-cs-gen-01i:~# ip addr show | grep ^[0-9]
>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>>>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>>>> 4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>>>> 5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>>>> 6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>>> 7: br-eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>>> 8: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>>>> 22: phy-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>>>> 23: int-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>>>>
>>>>  I gave br-ex an IP and brought it up manually.  I assume this is
>>>> correct, but I honestly don't know.
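>>>>
>>>>  (For reference, what I did amounts to something like the following,
>>>> with eth2 as my public interface and the address made up:
>>>>
>>>>     ovs-vsctl add-port br-ex eth2
>>>>     ip addr add 192.0.2.10/24 dev br-ex
>>>>     ip link set br-ex up
>>>>
>>>> i.e. the public IP moves from the physical NIC to the bridge.)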
>>>>
>>>>  Thanks.
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, Feb 15, 2013 at 6:54 PM, Greg Chavez <greg.chavez@xxxxxxxxx> wrote:
>>>>
>>>>>
>>>>>  Sigh.  So I abandoned RHEL 6.3, rekicked my systems, and set up the
>>>>> scale-ready installation described in these instructions:
>>>>>
>>>>>
>>>>> https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
>>>>>
>>>>>  Basically:
>>>>>
>>>>>  (o) controller node on a mgmt and public net
>>>>> (o) network node (quantum and openvswitch) on a mgmt, net-config, and
>>>>> public net
>>>>>  (o) compute node on a mgmt and net-config net
>>>>>
>>>>>  Took me just over an hour and ran into only a few easily-fixed speed
>>>>> bumps.  But the VM networks are totally non-functioning.  VMs launch but no
>>>>> network traffic can go in or out.
>>>>>
>>>>>  I'm particularly befuddled by these problems:
>>>>>
>>>>>  ( 1 ) This error in nova-compute:
>>>>>
>>>>>  ERROR nova.network.quantumv2 [-] _get_auth_token() failed
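>>>>>
>>>>>  (My first guess is bad quantum credentials in nova.conf on the
>>>>> compute node.  If I read the Folsom docs right, the relevant flags --
>>>>> the values here are examples only -- are:
>>>>>
>>>>>     quantum_url=http://<controller>:9696
>>>>>     quantum_auth_strategy=keystone
>>>>>     quantum_admin_tenant_name=service
>>>>>     quantum_admin_username=quantum
>>>>>     quantum_admin_password=<password>
>>>>>     quantum_admin_auth_url=http://<controller>:35357/v2.0
>>>>>
>>>>> but I haven't verified mine yet.)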
>>>>>
>>>>>  ( 2 ) No NAT rules on the compute node, which probably explains why
>>>>> the VMs complain about not finding a network and not being able to get
>>>>> metadata from 169.254.169.254.
>>>>>
>>>>>  root@kvm-cs-sn-10i:~# iptables -t nat -S
>>>>> -P PREROUTING ACCEPT
>>>>> -P INPUT ACCEPT
>>>>> -P OUTPUT ACCEPT
>>>>> -P POSTROUTING ACCEPT
>>>>> -N nova-api-metadat-OUTPUT
>>>>> -N nova-api-metadat-POSTROUTING
>>>>> -N nova-api-metadat-PREROUTING
>>>>> -N nova-api-metadat-float-snat
>>>>> -N nova-api-metadat-snat
>>>>> -N nova-compute-OUTPUT
>>>>> -N nova-compute-POSTROUTING
>>>>> -N nova-compute-PREROUTING
>>>>> -N nova-compute-float-snat
>>>>> -N nova-compute-snat
>>>>> -N nova-postrouting-bottom
>>>>> -A PREROUTING -j nova-api-metadat-PREROUTING
>>>>> -A PREROUTING -j nova-compute-PREROUTING
>>>>> -A OUTPUT -j nova-api-metadat-OUTPUT
>>>>> -A OUTPUT -j nova-compute-OUTPUT
>>>>> -A POSTROUTING -j nova-api-metadat-POSTROUTING
>>>>> -A POSTROUTING -j nova-compute-POSTROUTING
>>>>> -A POSTROUTING -j nova-postrouting-bottom
>>>>> -A nova-api-metadat-snat -j nova-api-metadat-float-snat
>>>>> -A nova-compute-snat -j nova-compute-float-snat
>>>>> -A nova-postrouting-bottom -j nova-api-metadat-snat
>>>>> -A nova-postrouting-bottom -j nova-compute-snat
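>>>>>
>>>>>  (As I understand it, with quantum the 169.254.169.254 redirect lives
>>>>> in the router namespace on the network node rather than on the compute
>>>>> node; something like this should show it, with <router-id> taken from
>>>>> "quantum router-list":
>>>>>
>>>>>     ip netns exec qrouter-<router-id> iptables -t nat -S | grep 169.254
>>>>>
>>>>> So empty NAT tables on the compute node may actually be normal here.)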
>>>>>
>>>>>  ( 3 ) And lastly, no default secgroup rules.  What exactly do those
>>>>> govern?  Connections to the VM's public or private IPs?  I guess I'm
>>>>> just not sure whether this is relevant to my overall problem of zero VM
>>>>> network connectivity.
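>>>>>
>>>>>  (My understanding is that the default secgroup gates ingress to the
>>>>> VMs, and the usual ICMP/SSH openers are:
>>>>>
>>>>>     nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
>>>>>     nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
>>>>>
>>>>> though with zero L2 connectivity these wouldn't be my first suspect.)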
>>>>>
>>>>>  I seek guidance please.  Thanks.
>>>>>
>>>>>
>>>>>  --
>>>>> \*..+.-
>>>>> --Greg Chavez
>>>>> +//..;};
>>>>>
>>>>
>>>>
>>>>
>>>>  --
>>>> \*..+.-
>>>> --Greg Chavez
>>>> +//..;};
>>>>
>>>
>>>
>>>
>>>  --
>>> \*..+.-
>>> --Greg Chavez
>>> +//..;};
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> \*..+.-
>> --Greg Chavez
>> +//..;};
>>
>>
>>
>>
>>
>
>
> --
> Thanks & Regards
> --Anil Kumar Vishnoi
>



-- 
\*..+.-
--Greg Chavez
+//..;};



