fuel-dev team mailing list archive

Re: VMs are not getting IP address

 

Hi Fuel Dev,

We have been blocked on our testing for the past two days. Could someone please respond to this query? We are not sure whether redeploying the environment will work.

Thanks and Regards,
Gandhi Rajan

From: Fuel-dev [mailto:fuel-dev-bounces+gmariapp=brocade.com@xxxxxxxxxxxxxxxxxxx] On Behalf Of Gandhirajan Mariappan (CW)
Sent: Monday, July 14, 2014 12:37 PM
To: 'fuel-dev@xxxxxxxxxxxxxxxxxxx'
Cc: Prakash Kaligotla; Nataraj Mylsamy (CW); Raghunath Mallina (CW); Senthil Thanganadarrosy (CW)
Subject: Re: [Fuel-dev] VMs are not getting IP address

Gentle Reminder!!

Kindly guide us in resolving this issue.
Will the problem be solved if we redeploy the environment, or is there some other workaround?

Thanks and Regards,
Gandhi Rajan

From: Gandhirajan Mariappan (CW)
Sent: Friday, July 11, 2014 2:59 PM
To: fuel-dev@xxxxxxxxxxxxxxxxxxx
Cc: Prakash Kaligotla; Nataraj Mylsamy (CW); Raghunath Mallina (CW); Senthil Thanganadarrosy (CW); Karthi Palaniappan (CW); Harsha Ramamurthy
Subject: RE: VMs are not getting IP address

Hi Fuel Dev,

Issue: IP addresses are not being assigned to the created VMs.

Steps followed:

1. Connected eth0 (Public), eth1 (Admin/PXE) and eth2 (Private, Storage, Management) to the VCS device and deployed the environment through the Fuel UI.

2. Since the VCS ports connected to the nodes' eth1 ports are configured as access VLANs and the eth2 ports are configured as trunk VLANs, we added a new connection: eth3 of each node is connected to the VCS device.

3. We then configured the VCS ports connected to eth3 as port-profile-ports, which is the configuration expected by the Brocade VCS plugin.

Expected result:
After the VMs are created, IP addresses should be assigned to them automatically.

Suspect:
We suspect that the configuration described in the section below might be the cause of this issue, but we are not sure. Kindly confirm whether there is some other problem in the configuration, or whether this is a known issue. Also, please confirm whether br-eth3 will be populated with the lines below if we redeploy the environment.

Port "phy-br-eth3"
   Interface "phy-br-eth3"

Configurations:
Below is the Open vSwitch configuration from another, local Icehouse setup of ours. The "phy-br-eth1"/"int-br-eth1" port lines are the ones we expect to be present (as "phy-br-eth3"/"int-br-eth3") in the Mirantis setup as well.

Local Icehouse setup - Controller node:
[root@rhel7-41-110 ~]# ovs-vsctl show
b1ba2ad0-d40a-4193-b3d9-4b5e0196dfbd
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge "br-eth1"
        Port "eth1"
            Interface "eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eth1"
            Interface "int-br-eth1"
    ovs_version: "2.0.0"

Mirantis Setup - Controller Node:
root@node-18:~# ovs-vsctl show
583bdc4f-52d0-493a-8d51-a613a4da6c9a
    Bridge "br-eth2"
        Port "br-eth2"
            Interface "br-eth2"
                type: internal
        Port "br-eth2--br-storage"
            tag: 102
            Interface "br-eth2--br-storage"
                type: patch
                options: {peer="br-storage--br-eth2"}
        Port "br-eth2--br-mgmt"
            tag: 101
            Interface "br-eth2--br-mgmt"
                type: patch
                options: {peer="br-mgmt--br-eth2"}
        Port "eth2"
            Interface "eth2"
        Port "br-eth2--br-prv"
            Interface "br-eth2--br-prv"
                type: patch
                options: {peer="br-prv--br-eth2"}
    Bridge br-mgmt
        Port "br-mgmt--br-eth2"
            Interface "br-mgmt--br-eth2"
                type: patch
                options: {peer="br-eth2--br-mgmt"}
        Port br-mgmt
            Interface br-mgmt
                type: internal
    Bridge "br-eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
        Port "br-eth0--br-ex"
            trunks: [0]
            Interface "br-eth0--br-ex"
                type: patch
                options: {peer="br-ex--br-eth0"}
        Port "eth0"
            Interface "eth0"
    Bridge "br-eth1"
        Port "br-eth1--br-fw-admin"
            trunks: [0]
            Interface "br-eth1--br-fw-admin"
                type: patch
                options: {peer="br-fw-admin--br-eth1"}
        Port "eth1"
            Interface "eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    Bridge br-ex
        Port "br-ex--br-eth0"
            trunks: [0]
            Interface "br-ex--br-eth0"
                type: patch
                options: {peer="br-eth0--br-ex"}
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-83437e93-e0"
            Interface "qg-83437e93-e0"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
    Bridge "br-eth5"
        Port "br-eth5"
            Interface "br-eth5"
                type: internal
        Port "eth5"
            Interface "eth5"
    Bridge "br-eth4"
        Port "br-eth4"
            Interface "br-eth4"
                type: internal
        Port "eth4"
            Interface "eth4"
    Bridge br-int
        Port "tap29cbbeed-16"
            tag: 4095
            Interface "tap29cbbeed-16"
                type: internal
        Port "qr-d80e3634-a4"
            tag: 4095
            Interface "qr-d80e3634-a4"
                type: internal
        Port int-br-ex
            Interface int-br-ex
        Port br-int
            Interface br-int
                type: internal
        Port int-br-prv
            Interface int-br-prv
        Port "tapc8495313-6d"
            tag: 1
            Interface "tapc8495313-6d"
                type: internal
    Bridge br-storage
        Port br-storage
            Interface br-storage
                type: internal
        Port "br-storage--br-eth2"
            Interface "br-storage--br-eth2"
                type: patch
                options: {peer="br-eth2--br-storage"}
    Bridge br-prv
        Port "br-prv--br-eth2"
            Interface "br-prv--br-eth2"
                type: patch
                options: {peer="br-eth2--br-prv"}
        Port phy-br-prv
            Interface phy-br-prv
        Port br-prv
            Interface br-prv
                type: internal
    Bridge br-fw-admin
        Port "br-fw-admin--br-eth1"
            trunks: [0]
            Interface "br-fw-admin--br-eth1"
                type: patch
                options: {peer="br-eth1--br-fw-admin"}
        Port br-fw-admin
            Interface br-fw-admin
                type: internal
    Bridge "br-eth3"
        Port "eth3"
            Interface "eth3"
        Port "br-eth3"
            Interface "br-eth3"
                type: internal
    ovs_version: "1.10.1"

We assume the "br-eth3" bridge in the Mirantis setup above should end up with a configuration similar to that of "br-eth1" in the local Icehouse setup, i.e. with "phy-br-eth3"/"int-br-eth3" ports. Kindly confirm the expected behaviour.
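For comparison, this is roughly what we would expect ovs-vsctl show to report for br-eth3 once the agent wires the physnet1:br-eth3 mapping (a sketch modelled on the br-eth1 output from the local Icehouse setup above, not actual output from the Mirantis node):

    Bridge "br-eth3"
        Port "eth3"
            Interface "eth3"
        Port "br-eth3"
            Interface "br-eth3"
                type: internal
        Port "phy-br-eth3"
            Interface "phy-br-eth3"

plus a matching "int-br-eth3" port under br-int.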

Thanks and Regards,
Gandhi Rajan

From: Fuel-dev [mailto:fuel-dev-bounces+gmariapp=brocade.com@xxxxxxxxxxxxxxxxxxx] On Behalf Of Karthi Palaniappan (CW)
Sent: Thursday, July 10, 2014 10:16 PM
To: fuel-dev@xxxxxxxxxxxxxxxxxxx
Cc: Prakash Kaligotla; Nataraj Mylsamy (CW); Raghunath Mallina (CW); Senthil Thanganadarrosy (CW)
Subject: [Fuel-dev] VMs are not getting IP address

Hi Fuel-Dev,

We have completed the MOS Icehouse deployment and have also installed the Brocade ML2 VCS plugin. When we ran the health check, the three checks below failed.

Failed checks:

1. Check DNS resolution on compute node

2. Check network connectivity from instance via floating IP

3. Check stack autoscaling


The controller and compute nodes could not ping google.com because the configured nameserver could not resolve hostnames. Andrey suggested changing the nameserver in /etc/resolv.conf; after that change, check 1 passed. Andrey also mentioned that the other two failing checks will not impact our plugin testing.
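(The change was along these lines; the nameserver address here is only illustrative:)

    # /etc/resolv.conf on the controller and compute nodes
    nameserver 8.8.8.8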

We then tried to create a network and a VM. Both the network and the virtual machine were created successfully, but the virtual machines do not get an IP address.
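For debugging, a few standard checks (a sketch; the interface name follows our topology below, and the qdhcp namespace name depends on the network ID):

    neutron agent-list                          # are the DHCP and Open vSwitch agents alive on every node?
    ip netns                                    # is there a qdhcp-<network-id> namespace on the controller?
    tcpdump -i eth3 -nn 'port 67 or port 68'    # do DHCP requests actually leave the compute node via eth3?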

Topology and configuration details:

eth3 on both the controller and the compute node is connected to the VDX device, so I have configured the bridge mapping as "bridge_mappings = physnet1:br-eth3".
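To double-check that this mapping points at a bridge that actually carries eth3, something along these lines can be run on each node (a sketch):

    ovs-vsctl list-ports br-eth3    # should list eth3 (and, once the agent wires the mapping, phy-br-eth3)
    ip link show eth3               # link should be UP on both the controller and the compute node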

Configuration in /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
tenant_network_types = vlan
type_drivers = vlan
mechanism_drivers = openvswitch,brocade

[ml2_type_vlan]
network_vlan_ranges = physnet1:400:500

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[database]
connection = mysql://neutron:password@192.168.0.3:3306/neutron_ml2?read_timeout=60

[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:400:500
bridge_mappings = physnet1:br-eth3
#bridge_mappings = physnet1:br-eth1

[ml2_brocade]
username = admin
password = password
address  = 10.25.225.133
ostype = NOS
physical_networks = physnet1
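For completeness: a change to [ovs] bridge_mappings presumably only takes effect after neutron-server and the Open vSwitch agent are restarted, which is also when the agent would create the int-br-eth3/phy-br-eth3 pair. A sketch assuming the stock Ubuntu service names (on MOS HA controllers the agent may instead be managed by Pacemaker/crm):

    service neutron-server restart                      # controller: reload /etc/neutron/plugins/ml2/ml2_conf.ini
    service neutron-plugin-openvswitch-agent restart    # controller and compute nodes: re-read bridge_mappings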

Regards,
Karthi
