Re: Initial quantum network state broken

Hi Greg,
Sorry to hear about your woes. I agree with you that setting things up is challenging and sometimes problematic. I would suggest a couple of things:

1. Give devstack a bash. This is very helpful and useful for understanding how everything fits and works together: www.devstack.org
2. A few months ago we did a test day with Fedora for Folsom. That page has Quantum commands and setup details which you can use on other distributions too (a minimal sketch follows below): https://fedoraproject.org/wiki/QA:Testcase_Quantum_V2#Setup

Hope that helps.
Thanks,
Gary
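
For reference, a minimal sketch of the Quantum v2 bootstrap those pages walk through (the network and router names and the CIDR are hypothetical placeholders, not taken from Greg's setup):

quantum net-create private
quantum subnet-create --name private-subnet private 10.0.0.0/24
quantum router-create router1
quantum router-interface-add router1 private-subnet
quantum net-create ext_net --router:external=True
quantum router-gateway-set router1 ext_net

Comparing the output of quantum net-list, quantum subnet-list, and quantum router-list against what a guide expects is usually the quickest sanity check on the initial network state.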


On 02/19/2013 01:55 AM, Greg Chavez wrote:
Third time I'm replying to my own message. It seems like the initial network state is a problem for many first-time openstackers. Surely someone here would be able to assist me. I'm running out of time to make this work. Thanks.


On Sun, Feb 17, 2013 at 3:08 AM, Greg Chavez <greg.chavez@xxxxxxxxx> wrote:

    I'm replying to my own message because I'm desperate.  My network
    situation is a mess.  I need to add this as well: my bridge
    interfaces are all down.  On my compute node:

    root@kvm-cs-sn-10i:/var/lib/nova/instances/instance-00000005# ip addr show | grep ^[0-9]
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    9: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    10: br-eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    13: phy-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    14: int-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    15: qbre56c5d9e-b6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    16: qvoe56c5d9e-b6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    17: qvbe56c5d9e-b6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbre56c5d9e-b6 state UP qlen 1000
    19: qbrb805a9c9-11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    20: qvob805a9c9-11: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    21: qvbb805a9c9-11: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbrb805a9c9-11 state UP qlen 1000
    34: qbr2b23c51f-02: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    35: qvo2b23c51f-02: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    36: qvb2b23c51f-02: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr2b23c51f-02 state UP qlen 1000
    37: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr2b23c51f-02 state UNKNOWN qlen 500

    And on my network node:

    root@knet-cs-gen-01i:~# ip addr show | grep ^[0-9]
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    7: br-eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    8: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    22: phy-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    23: int-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
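
    As an aside, OVS bridges that carry no IP address often show state
    DOWN in ip link output even while Open vSwitch is forwarding on
    them, so the DOWN bridges may be a red herring.  Bringing them up
    is harmless and rules it out; a hedged check, assuming the standard
    Folsom OVS plugin layout:

    ip link set br-int up
    ip link set br-eth1 up
    ovs-vsctl show   # eth1 should be a port on br-eth1, and the
                     # phy-br-eth1/int-br-eth1 veth pair should tie
                     # br-eth1 to br-int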

    I gave br-ex an IP and UP'ed it manually.  I assume this is
    correct, but I honestly don't know.
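
    For comparison, the usual way to wire up br-ex in this kind of
    layout (the NIC name and address here are hypothetical, not taken
    from this setup):

    ovs-vsctl add-port br-ex eth2       # attach the public-facing NIC
    ip addr add 192.168.100.10/24 dev br-ex
    ip link set br-ex up

    The PROMISC flag on eth2 in the output above is consistent with it
    already being enslaved to br-ex; ovs-vsctl list-ports br-ex will
    confirm.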

    Thanks.




    On Fri, Feb 15, 2013 at 6:54 PM, Greg Chavez <greg.chavez@xxxxxxxxx> wrote:


        Sigh.  So I abandoned RHEL 6.3, rekicked my systems and set up
        the scale-ready installation described in these instructions:

        https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst

        Basically:

        (o) controller node on a mgmt and public net
        (o) network node (quantum and openvswitch) on a mgmt,
        net-config, and public net
        (o) compute node on a mgmt and net-config net

        Took me just over an hour, and I ran into only a few
        easily-fixed speed bumps.  But the VM networks are totally
        non-functioning: VMs launch, but no network traffic can go in
        or out.

        I'm particularly befuddled by these problems:

        ( 1 ) This error in nova-compute:

        ERROR nova.network.quantumv2 [-] _get_auth_token() failed
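
        That error usually means nova-compute could not get a Keystone
        token on Quantum's behalf, which points at the quantum_* auth
        settings in nova.conf on the compute node.  A sketch of the
        relevant Folsom options (all values here are hypothetical):

        quantum_auth_strategy=keystone
        quantum_url=http://10.0.0.1:9696/
        quantum_admin_auth_url=http://10.0.0.1:35357/v2.0/
        quantum_admin_tenant_name=service
        quantum_admin_username=quantum
        quantum_admin_password=servicepass

        A typo in any of these, or an auth URL that is unreachable from
        the compute node, fails in exactly this way.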

        ( 2 ) No NAT rules on the compute node, which probably explains
        why the VMs complain about not finding a network or not being
        able to get metadata from 169.254.169.254.

        root@kvm-cs-sn-10i:~# iptables -t nat -S
        -P PREROUTING ACCEPT
        -P INPUT ACCEPT
        -P OUTPUT ACCEPT
        -P POSTROUTING ACCEPT
        -N nova-api-metadat-OUTPUT
        -N nova-api-metadat-POSTROUTING
        -N nova-api-metadat-PREROUTING
        -N nova-api-metadat-float-snat
        -N nova-api-metadat-snat
        -N nova-compute-OUTPUT
        -N nova-compute-POSTROUTING
        -N nova-compute-PREROUTING
        -N nova-compute-float-snat
        -N nova-compute-snat
        -N nova-postrouting-bottom
        -A PREROUTING -j nova-api-metadat-PREROUTING
        -A PREROUTING -j nova-compute-PREROUTING
        -A OUTPUT -j nova-api-metadat-OUTPUT
        -A OUTPUT -j nova-compute-OUTPUT
        -A POSTROUTING -j nova-api-metadat-POSTROUTING
        -A POSTROUTING -j nova-compute-POSTROUTING
        -A POSTROUTING -j nova-postrouting-bottom
        -A nova-api-metadat-snat -j nova-api-metadat-float-snat
        -A nova-compute-snat -j nova-compute-float-snat
        -A nova-postrouting-bottom -j nova-api-metadat-snat
        -A nova-postrouting-bottom -j nova-compute-snat
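
        Worth noting: with the Folsom Quantum L3 agent, the SNAT and
        the 169.254.169.254 metadata DNAT rules live inside the
        router's network namespace on the network node, not on the
        compute node, so an empty compute-node NAT table is not by
        itself conclusive.  A hedged way to check on the network node
        (the router UUID is a placeholder):

        ip netns list
        ip netns exec qrouter-<router-uuid> iptables -t nat -S | grep 169.254

        If no qrouter- namespace shows up, the quantum-l3-agent is
        probably not running or has no router bound to it.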

        ( 3 ) And lastly, no default secgroup rules, whose function
        governs... what exactly?  Connections to the VM's public or
        private IPs?  I guess I'm just not sure if this is relevant to
        my overall problem of ZERO VM network connectivity.
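
        For what it's worth, secgroup rules govern ingress traffic to
        an instance's fixed and floating IPs, and the default group
        drops all inbound traffic until rules are added, so empty rules
        would explain unreachable VMs but not broken outbound traffic.
        A typical pair of rules, added with the nova client:

        nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
        nova secgroup-add-rule default tcp 22 22 0.0.0.0/0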

        I seek guidance please.  Thanks.










--
\*..+.-
--Greg Chavez
+//..;};

