
openstack team mailing list archive

Re: Fwd: Initial quantum network state broken

 

> > For example, even after rebooting the network node after installation, the
> > br-int, br-ex, and br-eth1 interfaces were all down.  And I wasn't able to
> > troubleshoot the VLAN tagging until I added those interfaces to
> > /etc/network/interfaces and restarted networking.
>
> Yes, Quantum does not persist the configuration for you. Actually,
> Quantum is polite and does not want to mess with your host's network
> configuration.
> Indeed the typical workflow is that the admin configures those
> bridges, and then tells quantum to use them.
>
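
Fair enough.  For the record, I did create those bridges by hand per the guide --
roughly the following (my own paraphrase of the steps, and "physnet1" is just
whatever name bridge_mappings in ovs_quantum_plugin.ini maps to br-eth1, not
something Quantum invents):

# create the integration and provider bridges and attach the physical NICs
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2

# then point the OVS plugin at them, e.g. in ovs_quantum_plugin.ini:
#   bridge_mappings = physnet1:br-eth1
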
> >
> > The symptoms now are:
> >
> > * Whereas before I was able to ping the tenant's router and dhcp IPs from
> > the controller, I can't now.
>
> The controller usually does not have a route to the tenant network.
> For instance it might be running in some management network (say
> 10.127.1.0/24) whereas your tenant network is 10.0.0.0/24.
>
>
The instructions I followed had me set a route for the tenant network on
the controller

root@kcon-cs-gen-01i:~# quantum port-list -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' '{ print $8; }'
10.21.166.1

root@kcon-cs-gen-01i:~# netstat -rn | grep 166
192.168.1.0     10.21.166.1     255.255.255.0   UG        0 0          0 eth0
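
(For reference, that route was added with something along these lines -- my
reconstruction from the output above, not necessarily the guide's exact command:)

# static route to the tenant network via the router's external gateway IP
ip route add 192.168.1.0/24 via 10.21.166.1 dev eth0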

root@kcon-cs-gen-01i:~# ping 192.168.1.3
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
From 10.21.164.75 icmp_seq=1 Destination Host Unreachable
From 10.21.164.75 icmp_seq=2 Destination Host Unreachable
From 10.21.164.75 icmp_seq=3 Destination Host Unreachable

I don't even see the ICMP packets on eth0.  Same result with 192.168.1.1 and
.2.  Those *did* work before I reconfigured the physical interfaces as
ports of the OVS bridges.
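
(The capture I've been using to check that is nothing fancy -- roughly this,
run on the controller:)

# watch for the echo requests leaving the controller
tcpdump -nni eth0 icmp and host 192.168.1.3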


> * The VNC console inside Horizon fails to connect (it could before).
>
> Unfortunately I do not have an explanation for that. It might be
> related to the fact that you're using a provider network, and hence
> there might be an underlying bug.
> I am unfortunately unable to setup a repro environment now.


I think I can set this one aside for now.  Just thought it might be a
symptom of the main problem: I can't ping or ssh to or from the VM.


> > * External VNC connections work, however.  Once inside, I can see that the
> > VM interface is now configured with 192.168.1.3/24.  It can ping the DHCP
> > server (192.168.1.2) but not the default router (192.168.1.1).
>
> Can you check the interface is up and running on the l3-agent? You
> should see a qr-<partofrouteruuid> iface configured with ip 192.168.1.1
>
>
Ah-ha!  I see this in the l3 logs after I restart the l3-agent:

Stderr: 'Cannot find device "qg-81523176-e1"\n'
2013-02-20 17:15:57    ERROR [quantum.agent.l3_agent] Error running l3_nat daemon_loop

But it's right here:

root@knet-cs-gen-01i:/var/log/quantum# ovs-vsctl show
7dc4a669-a330-4c5f-a6f1-ab9e35b82685
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qr-973ae179-06"
            Interface "qr-973ae179-06"
                type: internal
        Port "int-br-eth1"
            Interface "int-br-eth1"
        Port "tapca2d1ff4-7b"
            tag: 1
            Interface "tapca2d1ff4-7b"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-81523176-e1"
            Interface "qg-81523176-e1"
                type: internal
        Port "eth2"
            Interface "eth2"
    Bridge "br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"

Hrm......
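
One thing I still want to rule out -- assuming the l3-agent is running with
namespaces enabled, which the qrouter-* netstat output quoted further down
suggests -- is whether that qg- device only exists inside the router's
namespace, which would explain why a plain lookup in the root namespace can't
find it.  Something like:

ip netns list
ip netns exec qrouter-ddd535ad-debb-4810-bc10-f419f105c959 ip addr show qg-81523176-e1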


> >
> > I have all the salient configs paste-binned here:
> > http://pastebin.com/ZZAuaH4u
> >
> > Suggestions?  I'm tempted to re-install at this point, I've mucked around
> > with manual network changes so much.
> >
> > Thanks again!
> >
> > On Wed, Feb 20, 2013 at 1:44 PM, Anil Vishnoi <vishnoianil@xxxxxxxxx> wrote:
> >>
> >> So if the packet is going out from the compute node's eth1 interface and not
> >> reaching the network node's eth1 interface, then the only element in between
> >> these two interfaces is the physical switch. Both switch ports should be
> >> trunked so that they allow all the VLAN traffic on the ports where the
> >> network node and compute node are connected.
> >>
> >> The process for VLAN trunking depends on the switch vendor. AFAIK on Cisco
> >> switches all the ports are trunked by default, but that's not the case with
> >> all vendors. You might want to re-check the switch configuration and test
> >> whether it's really passing the tagged traffic or not.
> >>
> >>
> >> On Wed, Feb 20, 2013 at 11:54 PM, Greg Chavez <greg.chavez@xxxxxxxxx>
> >> wrote:
> >>>
> >>>
> >>> I'm seeing three BOOTP/DHCP packets on the eth1 interface of the compute
> >>> node, but they don't make it to the network node.  So I may have a VLAN
> >>> tagging issue.
> >>>
> >>> The segmentation id for the VM network is 1024 (same as what's in the
> >>> github instructions).  The segmentation id for the public network is 3001.
> >>>
> >>> root@kcon-cs-gen-01i:/etc/init.d# quantum net-show
> >>> 654a49a3-f042-45eb-a937-0dcd6fcaa84c | grep seg
> >>> | provider:segmentation_id  | 3001                                 |
> >>>
> >>> root@kcon-cs-gen-01i:/etc/init.d# quantum net-show
> >>> c9c6a895-8bc1-4319-a207-30422d0d1a27 | grep seg
> >>> | provider:segmentation_id  | 1024
> >>>
> >>> The physical switch ports for the eth1 interfaces of the compute and
> >>> network node are set to trunk and are allowing both 3001 and 1024.
> >>>
> >>> Here are the interfaces on the compute node:
> >>>
> >>> root@kvm-cs-sn-10i:/etc/network/if-up.d# ip add show | egrep ^[0-9]+
> >>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
> >>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
> >>> qlen 1000
> >>> 3: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq
> >>> state UP qlen 1000
> >>> 4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
> >>> 5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
> >>> 6: br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> >>> state UNKNOWN
> >>> 12: qvo5334a0cb-64: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500
> >>> qdisc pfifo_fast state UP qlen 1000
> >>> 13: qvb5334a0cb-64: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500
> >>> qdisc pfifo_fast state UP qlen 1000
> >>> 29: br-int: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc
> >>> noqueue state UNKNOWN
> >>> 37: phy-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> >>> pfifo_fast state UP qlen 1000
> >>> 38: int-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> >>> pfifo_fast state UP qlen 1000
> >>> 39: qbr9cf85869-88: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> >>> noqueue state UP
> >>> 40: qvo9cf85869-88: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500
> >>> qdisc pfifo_fast state UP qlen 1000
> >>> 41: qvb9cf85869-88: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500
> >>> qdisc pfifo_fast master qbr9cf85869-88 state UP qlen 1000
> >>> 47: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> >>> master qbr9cf85869-88 state UNKNOWN qlen 500
> >>>
> >>> I set the interfaces file up like this (excluding eth0):
> >>>
> >>> auto br-eth1
> >>> iface br-eth1 inet static
> >>> address 192.168.239.110
> >>> netmask 255.255.255.0
> >>> bridge_ports eth1
> >>>
> >>> auto eth1
> >>>  iface eth1 inet manual
> >>>        up ifconfig $IFACE 0.0.0.0 up
> >>>        up ip link set $IFACE promisc on
> >>>        down ip link set $IFACE promisc off
> >>>        down ifconfig $IFACE down
> >>>
> >>> auto br-int
> >>>  iface br-int inet manual
> >>>        up ifconfig $IFACE 0.0.0.0 up
> >>>        up ip link set $IFACE promisc on
> >>>        down ip link set $IFACE promisc off
> >>>        down ifconfig $IFACE down
> >>>
> >>> But check this out.  On the network node:
> >>>
> >>> root@knet-cs-gen-01i:~# ip addr show | egrep ^[0-9]+
> >>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
> >>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
> >>> qlen 1000
> >>> 3: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq
> >>> state UP qlen 1000
> >>> 4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq
> >>> master br-ex state UP qlen 1000
> >>> 5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
> >>> 6: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
> >>> 7: br-eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >>> 8: br-int: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc
> >>> noqueue state UNKNOWN
> >>> 9: tapca2d1ff4-7b: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >>> 12: phy-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> >>> pfifo_fast state UP qlen 1000
> >>> 13: int-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> >>> pfifo_fast state UP qlen 1000
> >>>
> >>> What???  I have br-eth1, br-ex, and br-int configured the same way in
> >>> /etc/network/interfaces.  So I run "ifconfig eth1 up", restart the agents...
> >>> and I still don't see any DHCP packets hitting the network node...
> >>>
> >>>
> >>>
> >>> On Wed, Feb 20, 2013 at 12:42 PM, Anil Vishnoi <vishnoianil@xxxxxxxxx>
> >>> wrote:
> >>>>
> >>>> Now can you dump packets at eth1 and see if DHCP traffic is leaving
> >>>> your host or not.
> >>>>
> >>>> tcpdump -nnei eth1 |grep DHCP
> >>>>
> >>>> And also dump the packets at the controller node, and see if you see the
> >>>> same DHCP packet there.
> >>>>
> >>>> Just start these tcpdumps on the compute and controller nodes and restart
> >>>> your VM instance.
> >>>>
> >>>>
> >>>> On Wed, Feb 20, 2013 at 11:07 PM, Greg Chavez <greg.chavez@xxxxxxxxx>
> >>>> wrote:
> >>>>>
> >>>>> Yeah, I was wondering about that!  I was expecting to see it too.  I've
> >>>>> terminated that VM and launched another one.  Here's the output of those
> >>>>> commands again, from the compute node:
> >>>>>
> >>>>> root@kvm-cs-sn-10i:~# ovs-vsctl show
> >>>>> 9d9f7949-2b80-40c8-a9e0-6a116200ed96
> >>>>>     Bridge br-int
> >>>>>         Port "qvo9cf85869-88"
> >>>>>             tag: 1
> >>>>>             Interface "qvo9cf85869-88"
> >>>>>         Port br-int
> >>>>>             Interface br-int
> >>>>>                 type: internal
> >>>>>         Port "int-br-eth1"
> >>>>>             Interface "int-br-eth1"
> >>>>>     Bridge "br-eth1"
> >>>>>         Port "br-eth1"
> >>>>>             Interface "br-eth1"
> >>>>>                 type: internal
> >>>>>         Port "phy-br-eth1"
> >>>>>             Interface "phy-br-eth1"
> >>>>>         Port "eth1"
> >>>>>             Interface "eth1"
> >>>>>     ovs_version: "1.4.3"
> >>>>>
> >>>>> root@kvm-cs-sn-10i:~# ovs-dpctl show
> >>>>> system@br-eth1:
> >>>>> lookups: hit:5497 missed:25267 lost:0
> >>>>> flows: 1
> >>>>> port 0: br-eth1 (internal)
> >>>>> port 1: eth1
> >>>>> port 7: phy-br-eth1
> >>>>> system@br-int:
> >>>>> lookups: hit:3270 missed:14993 lost:0
> >>>>> flows: 1
> >>>>> port 0: br-int (internal)
> >>>>> port 3: int-br-eth1
> >>>>> port 4: qvo9cf85869-88
> >>>>>
> >>>>> root@kvm-cs-sn-10i:~# brctl show
> >>>>> bridge name bridge id STP enabled interfaces
> >>>>> br-eth1 0000.bc305befedd1 no eth1
> >>>>> phy-br-eth1
> >>>>> br-int 0000.8ae31e5f7941 no int-br-eth1
> >>>>> qvo9cf85869-88
> >>>>> qbr9cf85869-88 8000.aeae57c4b763 no qvb9cf85869-88
> >>>>> vnet0
> >>>>>
> >>>>> root@kvm-cs-sn-10i:~# ovs-ofctl dump-flows br-int
> >>>>> NXST_FLOW reply (xid=0x4):
> >>>>>  cookie=0x0, duration=2876.663s, table=0, n_packets=641,
> >>>>> n_bytes=122784, priority=2,in_port=3 actions=drop
> >>>>>  cookie=0x0, duration=1097.004s, table=0, n_packets=0, n_bytes=0,
> >>>>> priority=3,in_port=3,dl_vlan=1024 actions=mod_vlan_vid:1,NORMAL
> >>>>>  cookie=0x0, duration=2877.036s, table=0, n_packets=16, n_bytes=1980,
> >>>>> priority=1 actions=NORMAL
> >>>>>
> >>>>> root@kvm-cs-sn-10i:~# ovs-ofctl dump-flows br-eth1
> >>>>> NXST_FLOW reply (xid=0x4):
> >>>>>  cookie=0x0, duration=2878.446s, table=0, n_packets=11, n_bytes=854,
> >>>>> priority=2,in_port=7 actions=drop
> >>>>>  cookie=0x0, duration=1098.842s, table=0, n_packets=11, n_bytes=1622,
> >>>>> priority=4,in_port=7,dl_vlan=1 actions=mod_vlan_vid:1024,NORMAL
> >>>>>  cookie=0x0, duration=2878.788s, table=0, n_packets=635,
> >>>>> n_bytes=122320, priority=1 actions=NORMAL
> >>>>>
> >>>>> And for good measure here's some info from the network node:
> >>>>>
> >>>>> root@knet-cs-gen-01i:~# ip netns exec
> >>>>> qdhcp-c9c6a895-8bc1-4319-a207-30422d0d1a27 netstat -rn
> >>>>> Kernel IP routing table
> >>>>> Destination     Gateway         Genmask         Flags   MSS Window
> >>>>> irtt Iface
> >>>>> 192.168.1.0     0.0.0.0         255.255.255.0   U         0 0
> >>>>> 0 tapca2d1ff4-7b
> >>>>>
> >>>>> root@knet-cs-gen-01i:~# ip netns exec
> >>>>> qrouter-ddd535ad-debb-4810-bc10-f419f105c959 netstat -rn
> >>>>> Kernel IP routing table
> >>>>> Destination     Gateway         Genmask         Flags   MSS Window
> >>>>> irtt Iface
> >>>>> 0.0.0.0         10.21.164.1     0.0.0.0         UG        0 0
> >>>>> 0 qg-81523176-e1
> >>>>> 10.21.164.0     0.0.0.0         255.255.252.0   U         0 0
> >>>>> 0 qg-81523176-e1
> >>>>> 192.168.1.0     0.0.0.0         255.255.255.0   U         0 0
> >>>>> 0 qr-973ae179-06
> >>>>>
> >>>>>
> >>>>> And yes, I have nets set up. From the controller node:
> >>>>>
> >>>>> root@kcon-cs-gen-01i:/etc/init.d# nova --os-username=user_one
> >>>>> --os-password=user_one --os-tenant-name=project_one list | grep ACT
> >>>>> | 12c37be6-14e3-471b-aae3-750af8cfca32 | server-02 | ACTIVE |
> >>>>> net_proj_one=192.168.1.3 |
> >>>>>
> >>>>> root@kcon-cs-gen-01i:/etc/init.d# quantum net-list | grep _
> >>>>> | 654a49a3-f042-45eb-a937-0dcd6fcaa84c | ext_net      |
> >>>>> 842a2baa-829c-4daa-8d3d-77dace5fba86 |
> >>>>> | c9c6a895-8bc1-4319-a207-30422d0d1a27 | net_proj_one |
> >>>>> 4ecef580-27ad-44f1-851d-c2c4cd64d323 |
> >>>>>
> >>>>> Any light bulbs appearing over your head?  :)
> >>>>>
> >>>>>
> >>>>> On Wed, Feb 20, 2013 at 11:51 AM, Anil Vishnoi <vishnoianil@xxxxxxxxx> wrote:
> >>>>>>
> >>>>>> I am assuming this output is from the compute node, but I don't see any
> >>>>>> tapX device attached to your "br-int" bridge.
> >>>>>>
> >>>>>> If you are running a VM instance on this compute node then it should be
> >>>>>> connected to br-int through a tapX device. Quantum automatically does that if
> >>>>>> you spawn a VM and specify the network to which you want to connect it.
> >>>>>>
> >>>>>> If you fire following command
> >>>>>>
> >>>>>> ~# quantum net-list
> >>>>>>
> >>>>>> do you see any network entry in the output ?
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Wed, Feb 20, 2013 at 10:05 PM, Greg Chavez <greg.chavez@xxxxxxxxx> wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> Here's the last command output you asked for:
> >>>>>>>
> >>>>>>> root@kvm-cs-sn-10i:/var/lib/nova/instances# ovs-ofctl dump-flows
> >>>>>>> br-eth1
> >>>>>>> NXST_FLOW reply (xid=0x4):
> >>>>>>>  cookie=0x0, duration=78793.694s, table=0, n_packets=6, n_bytes=468,
> >>>>>>> priority=2,in_port=6 actions=drop
> >>>>>>>  cookie=0x0, duration=78794.033s, table=0, n_packets=17355,
> >>>>>>> n_bytes=3335788, priority=1 actions=NORMAL
> >>>>>>>
> >>>>>>>
> >>>>>>> On Wed, Feb 20, 2013 at 10:57 AM, Greg Chavez <greg.chavez@xxxxxxxxx> wrote:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> Hey Anil, thanks for responding.  Here's the output:
> >>>>>>>>
> >>>>>>>> root@kvm-cs-sn-10i:/var/lib/nova/instances# ovs-vsctl show
> >>>>>>>> 9d9f7949-2b80-40c8-a9e0-6a116200ed96
> >>>>>>>>     Bridge br-int
> >>>>>>>>         Port br-int
> >>>>>>>>             Interface br-int
> >>>>>>>>                 type: internal
> >>>>>>>>         Port "int-br-eth1"
> >>>>>>>>             Interface "int-br-eth1"
> >>>>>>>>     Bridge "br-eth1"
> >>>>>>>>         Port "phy-br-eth1"
> >>>>>>>>             Interface "phy-br-eth1"
> >>>>>>>>         Port "br-eth1"
> >>>>>>>>             Interface "br-eth1"
> >>>>>>>>                 type: internal
> >>>>>>>>         Port "eth1"
> >>>>>>>>             Interface "eth1"
> >>>>>>>>     ovs_version: "1.4.3"
> >>>>>>>>
> >>>>>>>> root@kvm-cs-sn-10i:/var/lib/nova/instances# ovs-dpctl show
> >>>>>>>> system@br-eth1:
> >>>>>>>> lookups: hit:5227 missed:24022 lost:0
> >>>>>>>> flows: 1
> >>>>>>>> port 0: br-eth1 (internal)
> >>>>>>>> port 1: eth1
> >>>>>>>> port 6: phy-br-eth1
> >>>>>>>> system@br-int:
> >>>>>>>> lookups: hit:2994 missed:13754 lost:0
> >>>>>>>> flows: 1
> >>>>>>>> port 0: br-int (internal)
> >>>>>>>> port 2: int-br-eth1
> >>>>>>>>
> >>>>>>>> root@kvm-cs-sn-10i:/var/lib/nova/instances# brctl show
> >>>>>>>> bridge name bridge id STP enabled interfaces
> >>>>>>>> br-eth1 0000.bc305befedd1 no eth1
> >>>>>>>> phy-br-eth1
> >>>>>>>> br-int 0000.8ae31e5f7941 no int-br-eth1
> >>>>>>>> qbr5334a0cb-64 8000.76fb293fe9cf no qvb5334a0cb-64
> >>>>>>>> vnet0
> >>>>>>>>
> >>>>>>>> root@kvm-cs-sn-10i:/var/lib/nova/instances# ovs-ofctl dump-flows
> >>>>>>>> br-int
> >>>>>>>> NXST_FLOW reply (xid=0x4):
> >>>>>>>>  cookie=0x0, duration=75251.156s, table=0, n_packets=16581,
> >>>>>>>> n_bytes=3186436, priority=2,in_port=2 actions=drop
> >>>>>>>>  cookie=0x0, duration=75251.527s, table=0, n_packets=0, n_bytes=0,
> >>>>>>>> priority=1 actions=NORMAL
> >>>>>>>>
> >>>>>>>> Thanks!  I have until tomorrow to get this working, then my boss is
> >>>>>>>> mandating that I try Cloudstack.  Argh, but I'm so close!
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On Wed, Feb 20, 2013 at 10:29 AM, Anil Vishnoi
> >>>>>>>> <vishnoianil@xxxxxxxxx> wrote:
> >>>>>>>>>
> >>>>>>>>> Hi Greg,
> >>>>>>>>>
> >>>>>>>>> Can you paste the output of following command from your compute
> >>>>>>>>> node.
> >>>>>>>>>
> >>>>>>>>> ovs-vsctl show
> >>>>>>>>> ovs-dpctl show
> >>>>>>>>> brctl show
> >>>>>>>>>
> >>>>>>>>> ovs-ofctl dump-flows br-int
> >>>>>>>>> ovs-ofctl dump-flows br-eth1
> >>>>>>>>>
> >>>>>>>>> Because i think first issue we need to resolve here is why DHCP
> >>>>>>>>> packet is not leaving your compute host.
> >>>>>>>>>
> >>>>>>>>> Anil
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On Wed, Feb 20, 2013 at 6:48 AM, Greg Chavez
> >>>>>>>>> <greg.chavez@xxxxxxxxx> wrote:
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> From my perspective, it seems that the OVS bridges are not being
> >>>>>>>>>> brought up correctly.  As you can see in my earlier post, the integration
> >>>>>>>>>> bridge (br-int) and the physical interface bridges (br-ex and br-eth1) are
> >>>>>>>>>> down.  I've tried to bring them up in promiscuous mode in the case of
> >>>>>>>>>> br-int, and with the physical interfaces ported to the bridge in the case of
> >>>>>>>>>> br-ex and br-eth1.  I've had no luck unfortunately.
> >>>>>>>>>>
> >>>>>>>>>> It seems that nothing is getting past br-int.  I can see BOOTP
> >>>>>>>>>> packets on the VM side of br-int, and I can see VTP packets on the physical
> >>>>>>>>>> side of br-int.  But that's where it ends.
> >>>>>>>>>>
> >>>>>>>>>> For example, when I reboot my VM, I see this:
> >>>>>>>>>>
> >>>>>>>>>> root@kvm-cs-sn-10i:/var/lib/nova/instances# tcpdump -i
> >>>>>>>>>> qvo5334a0cb-64
> >>>>>>>>>>
> >>>>>>>>>> tcpdump: WARNING: qvo5334a0cb-64: no IPv4 address assigned
> >>>>>>>>>> tcpdump: verbose output suppressed, use -v or -vv for full
> >>>>>>>>>> protocol decode
> >>>>>>>>>> listening on qvo5334a0cb-64, link-type EN10MB (Ethernet), capture
> >>>>>>>>>> size 65535 bytes
> >>>>>>>>>>
> >>>>>>>>>> 13:42:08.099061 IP 0.0.0.0.bootpc > 255.255.255.255.bootps:
> >>>>>>>>>> BOOTP/DHCP, Request from fa:16:3e:06:48:09 (oui Unknown), length 280
> >>>>>>>>>> 13:42:08.101675 IP6 :: > ff02::16: HBH ICMP6, multicast listener
> >>>>>>>>>> report v2, 1 group record(s), length 28
> >>>>>>>>>> 13:42:08.161728 IP6 :: > ff02::1:ff06:4809: ICMP6, neighbor
> >>>>>>>>>> solicitation, who has fe80::f816:3eff:fe06:4809, length 24
> >>>>>>>>>> 13:42:08.373745 IP6 :: > ff02::16: HBH ICMP6, multicast listener
> >>>>>>>>>> report v2, 1 group record(s), length 28
> >>>>>>>>>> 13:42:11.102528 IP 0.0.0.0.bootpc > 255.255.255.255.bootps:
> >>>>>>>>>> BOOTP/DHCP, Request from fa:16:3e:06:48:09 (oui Unknown), length 280
> >>>>>>>>>> 13:42:14.105850 IP 0.0.0.0.bootpc > 255.255.255.255.bootps:
> >>>>>>>>>> BOOTP/DHCP, Request from fa:16:3e:06:48:09 (oui Unknown), length 280
> >>>>>>>>>>
> >>>>>>>>>> But that's as far as it goes.  The DHCP agent never gets this.
> >>>>>>>>>>
> >>>>>>>>>> I've tried deleting and recreating the bridges, rebooting the
> >>>>>>>>>> systems, but nothing seems to work.  Maybe it's just the right combination
> >>>>>>>>>> of things.  I don't know.
> >>>>>>>>>>
> >>>>>>>>>> Help!
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> On Tue, Feb 19, 2013 at 5:23 AM, Sylvain Bauza
> >>>>>>>>>> <sylvain.bauza@xxxxxxxxxxxx> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> Hi Greg,
> >>>>>>>>>>>
> >>>>>>>>>>> I did have trouble with DHCP assignment (see my previous post in
> >>>>>>>>>>> this list), which was fixed by deleting the OVS bridges on the network node,
> >>>>>>>>>>> recreating them, and restarting the OVS plugin and L3/DHCP agents (which were
> >>>>>>>>>>> all on the same physical node).
> >>>>>>>>>>> Maybe it helps.
> >>>>>>>>>>>
> >>>>>>>>>>> Anyway, when DHCP'ing from your VM (asking for an IP), could you
> >>>>>>>>>>> please tcpdump:
> >>>>>>>>>>> 1. your virtual network interface on compute node
> >>>>>>>>>>> 2. your physical network interface on compute node
> >>>>>>>>>>> 3. your physical network interface on network node
> >>>>>>>>>>>
> >>>>>>>>>>> and see BOOTP/DHCP packets ?
> >>>>>>>>>>> On the physical layer, you should see GRE packets (provided you
> >>>>>>>>>>> correctly followed the mentioned guide) encapsulating your BOOTP/DHCP
> >>>>>>>>>>> packets.
> >>>>>>>>>>>
> >>>>>>>>>>> If that's OK, could you please issue the below commands (on the
> >>>>>>>>>>> network node) :
> >>>>>>>>>>>  - brctl show
> >>>>>>>>>>>  - ip a
> >>>>>>>>>>>  - ovs-vsctl show
> >>>>>>>>>>>  - route -n
> >>>>>>>>>>>
> >>>>>>>>>>> Thanks,
> >>>>>>>>>>> -Sylvain
> >>>>>>>>>>>
> >>>>>>>>>>> Le 19/02/2013 00:55, Greg Chavez a écrit :
> >>>>>>>>>>>
> >>>>>>>>>>> Third time I'm replying to my own message.  It seems like the
> >>>>>>>>>>> initial network state is a problem for many first-time openstackers.
> >>>>>>>>>>> Surely someone here would be able to assist me.  I'm running out of time
> >>>>>>>>>>> to make this work.  Thanks.
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> On Sun, Feb 17, 2013 at 3:08 AM, Greg Chavez
> >>>>>>>>>>> <greg.chavez@xxxxxxxxx> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>> I'm replying to my own message because I'm desperate.  My
> >>>>>>>>>>>> network situation is a mess.  I need to add this as well: my bridge
> >>>>>>>>>>>> interfaces are all down.  On my compute node:
> >>>>>>>>>>>>
> >>>>>>>>>>>> root@kvm-cs-sn-10i:/var/lib/nova/instances/instance-00000005# ip
> >>>>>>>>>>>> addr show | grep ^[0-9]
> >>>>>>>>>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state
> >>>>>>>>>>>> UNKNOWN
> >>>>>>>>>>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
> >>>>>>>>>>>> state UP qlen 1000
> >>>>>>>>>>>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
> >>>>>>>>>>>> state UP qlen 1000
> >>>>>>>>>>>> 4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >>>>>>>>>>>> qlen 1000
> >>>>>>>>>>>> 5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >>>>>>>>>>>> qlen 1000
> >>>>>>>>>>>> 9: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >>>>>>>>>>>> 10: br-eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state
> >>>>>>>>>>>> DOWN
> >>>>>>>>>>>> 13: phy-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
> >>>>>>>>>>>> qdisc pfifo_fast state UP qlen 1000
> >>>>>>>>>>>> 14: int-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
> >>>>>>>>>>>> qdisc pfifo_fast state UP qlen 1000
> >>>>>>>>>>>> 15: qbre56c5d9e-b6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
> >>>>>>>>>>>> qdisc noqueue state UP
> >>>>>>>>>>>> 16: qvoe56c5d9e-b6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>
> >>>>>>>>>>>> mtu 1500 qdisc pfifo_fast state UP qlen 1000
> >>>>>>>>>>>> 17: qvbe56c5d9e-b6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>
> >>>>>>>>>>>> mtu 1500 qdisc pfifo_fast master qbre56c5d9e-b6 state UP qlen 1000
> >>>>>>>>>>>> 19: qbrb805a9c9-11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
> >>>>>>>>>>>> qdisc noqueue state UP
> >>>>>>>>>>>> 20: qvob805a9c9-11: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>
> >>>>>>>>>>>> mtu 1500 qdisc pfifo_fast state UP qlen 1000
> >>>>>>>>>>>> 21: qvbb805a9c9-11: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>
> >>>>>>>>>>>> mtu 1500 qdisc pfifo_fast master qbrb805a9c9-11 state UP qlen 1000
> >>>>>>>>>>>> 34: qbr2b23c51f-02: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
> >>>>>>>>>>>> qdisc noqueue state UP
> >>>>>>>>>>>> 35: qvo2b23c51f-02: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>
> >>>>>>>>>>>> mtu 1500 qdisc pfifo_fast state UP qlen 1000
> >>>>>>>>>>>> 36: qvb2b23c51f-02: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>
> >>>>>>>>>>>> mtu 1500 qdisc pfifo_fast master qbr2b23c51f-02 state UP qlen 1000
> >>>>>>>>>>>> 37: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> >>>>>>>>>>>> pfifo_fast master qbr2b23c51f-02 state UNKNOWN qlen 500
> >>>>>>>>>>>>
> >>>>>>>>>>>> And on my network node:
> >>>>>>>>>>>>
> >>>>>>>>>>>> root@knet-cs-gen-01i:~# ip addr show | grep ^[0-9]
> >>>>>>>>>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state
> >>>>>>>>>>>> UNKNOWN
> >>>>>>>>>>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
> >>>>>>>>>>>> state UP qlen 1000
> >>>>>>>>>>>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
> >>>>>>>>>>>> state UP qlen 1000
> >>>>>>>>>>>> 4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500
> >>>>>>>>>>>> qdisc mq state UP qlen 1000
> >>>>>>>>>>>> 5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >>>>>>>>>>>> qlen 1000
> >>>>>>>>>>>> 6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >>>>>>>>>>>> 7: br-eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >>>>>>>>>>>> 8: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> >>>>>>>>>>>> noqueue state UNKNOWN
> >>>>>>>>>>>> 22: phy-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
> >>>>>>>>>>>> qdisc pfifo_fast state UP qlen 1000
> >>>>>>>>>>>> 23: int-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
> >>>>>>>>>>>> qdisc pfifo_fast state UP qlen 1000
> >>>>>>>>>>>>
> >>>>>>>>>>>> I gave br-ex an IP and UP'ed it manually.  I assume this is
> >>>>>>>>>>>> correct.  But I honestly don't know.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Thanks.
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Fri, Feb 15, 2013 at 6:54 PM, Greg Chavez
> >>>>>>>>>>>> <greg.chavez@xxxxxxxxx> wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Sigh.  So I abandoned RHEL 6.3, rekicked my systems and set up
> >>>>>>>>>>>>> the scale-ready installation described in these instructions:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Basically:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> (o) controller node on a mgmt and public net
> >>>>>>>>>>>>> (o) network node (quantum and openvs) on a mgmt, net-config,
> >>>>>>>>>>>>> and public net
> >>>>>>>>>>>>> (o) compute node is on a mgmt and net-config net
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Took me just over an hour and I ran into only a few easily-fixed
> >>>>>>>>>>>>> speed bumps.  But the VM networks are totally non-functioning.  VMs launch
> >>>>>>>>>>>>> but no network traffic can go in or out.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I'm particularly befuddled by these problems:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> ( 1 ) This error in nova-compute:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> ERROR nova.network.quantumv2 [-] _get_auth_token() failed
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> ( 2 ) No NAT rules on the compute node, which probably explains
> >>>>>>>>>>>>> why the VMs complain about not finding a network or being able to get
> >>>>>>>>>>>>> metadata from 169.254.169.254.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> root@kvm-cs-sn-10i:~# iptables -t nat -S
> >>>>>>>>>>>>> -P PREROUTING ACCEPT
> >>>>>>>>>>>>> -P INPUT ACCEPT
> >>>>>>>>>>>>> -P OUTPUT ACCEPT
> >>>>>>>>>>>>> -P POSTROUTING ACCEPT
> >>>>>>>>>>>>> -N nova-api-metadat-OUTPUT
> >>>>>>>>>>>>> -N nova-api-metadat-POSTROUTING
> >>>>>>>>>>>>> -N nova-api-metadat-PREROUTING
> >>>>>>>>>>>>> -N nova-api-metadat-float-snat
> >>>>>>>>>>>>> -N nova-api-metadat-snat
> >>>>>>>>>>>>> -N nova-compute-OUTPUT
> >>>>>>>>>>>>> -N nova-compute-POSTROUTING
> >>>>>>>>>>>>> -N nova-compute-PREROUTING
> >>>>>>>>>>>>> -N nova-compute-float-snat
> >>>>>>>>>>>>> -N nova-compute-snat
> >>>>>>>>>>>>> -N nova-postrouting-bottom
> >>>>>>>>>>>>> -A PREROUTING -j nova-api-metadat-PREROUTING
> >>>>>>>>>>>>> -A PREROUTING -j nova-compute-PREROUTING
> >>>>>>>>>>>>> -A OUTPUT -j nova-api-metadat-OUTPUT
> >>>>>>>>>>>>> -A OUTPUT -j nova-compute-OUTPUT
> >>>>>>>>>>>>> -A POSTROUTING -j nova-api-metadat-POSTROUTING
> >>>>>>>>>>>>> -A POSTROUTING -j nova-compute-POSTROUTING
> >>>>>>>>>>>>> -A POSTROUTING -j nova-postrouting-bottom
> >>>>>>>>>>>>> -A nova-api-metadat-snat -j nova-api-metadat-float-snat
> >>>>>>>>>>>>> -A nova-compute-snat -j nova-compute-float-snat
> >>>>>>>>>>>>> -A nova-postrouting-bottom -j nova-api-metadat-snat
> >>>>>>>>>>>>> -A nova-postrouting-bottom -j nova-compute-snat
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> ( 3 ) And lastly, no default secgroup rules, whose function
> >>>>>>>>>>>>> governs... what exactly?  Connections to the VM's public or private IPs?
> >>>>>>>>>>>>> I guess I'm just not sure if this is relevant to my overall problem of
> >>>>>>>>>>>>> ZERO VM network connectivity.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I seek guidance please.  Thanks.
> >>>>>>>>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >
>



-- 
\*..+.-
--Greg Chavez
+//..;};
