openstack team mailing list archive - Message #16460
Re: [OpenStack][Nova] Problems and questions regarding network and/or routing
All compute nodes have /proc/sys/net/ipv4/ip_forward set to 1. So that
can't be the issue :s
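For reference, checking and persisting that setting on each node looks like this (a minimal sketch; the sysctl.conf location assumes Ubuntu's default layout):

```shell
# Confirm IPv4 forwarding is enabled (should print 1):
cat /proc/sys/net/ipv4/ip_forward

# If it prints 0, enable it at runtime (as root):
#   sysctl -w net.ipv4.ip_forward=1
# and persist it across reboots in /etc/sysctl.conf:
#   net.ipv4.ip_forward=1
```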
On Wed, Sep 5, 2012 at 3:11 PM, George Mihaiescu <George.Mihaiescu@xxxxxx> wrote:
>
> Hi Leander,
>
>
> Make sure you have ip forward enabled on the nova-compute nodes (that now
> act as nova-network as well).
>
> Second, each nova-network acts as a gateway for each project and it needs
> an IP address, so probably this explains the “phantom” 10.0.108.4,
> 10.0.108.6, 10.0.108.8 and 10.0.108.4.10 addresses.
>
>
> George
>
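One way to check which of those "phantom" addresses are really per-node gateways is to list the IPv4 addresses configured on each compute node; with multi_host, each node's nova-network claims a gateway IP on the project bridge (br100 is the default bridge name, an assumption here, so verify against your nova flags):

```shell
# Print every IPv4 address configured on this node, one per line, as
# "<interface> <address/prefix>". With multi_host enabled, expect to see a
# 10.0.108.x gateway address on the project bridge (br100 by default).
ip -4 -o addr show | awk '{print $2, $4}'
```

If each compute node shows a different 10.0.108.x address on its bridge, that accounts for the "non-existent" addresses responding to ping.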
> ------------------------------
>
> *From:* openstack-bounces+george.mihaiescu=q9.com@xxxxxxxxxxxxxxxxxxx [mailto:
> openstack-bounces+george.mihaiescu=q9.com@xxxxxxxxxxxxxxxxxxx] *On Behalf
> Of* Leander Bessa Beernaert
> *Sent:* Wednesday, September 05, 2012 9:49 AM
> *To:* Vishvananda Ishaya
> *Cc:* openstack@xxxxxxxxxxxxxxxxxxx
> *Subject:* Re: [Openstack] [OpenStack][Nova] Problems and questions
> regarding network and/or routing
>
>
> I'm having the strangest issue. I have set up a separate OpenStack cluster
> to test out the multi-host setup.
>
>
> I have one controller node and 4 compute nodes. Each compute node is
> running nova-network, nova-compute and nova-api-metadata. I have set up a
> tenant with a multi-host network on the address range 10.0.108.0/24.
>
> I launched 4 instances to fill up the compute nodes:
>
>
> +--------------------------------------+------+--------+----------------------------+
> | ID                                   | Name | Status | Networks                   |
> +--------------------------------------+------+--------+----------------------------+
> | 2c63cc3e-7c45-4e10-8ac5-480fa60d4f32 | Test | ACTIVE | project_network=10.0.108.7 |
> | c48f6aae-0d97-4e69-a398-8cda929c310d | Test | ACTIVE | project_network=10.0.108.3 |
> | ed8f11a4-5fc0-4437-9ae2-b6725126fca7 | Test | ACTIVE | project_network=10.0.108.5 |
> | fe39e586-030c-4bf4-9020-7ef773567913 | Test | ACTIVE | project_network=10.0.108.9 |
> +--------------------------------------+------+--------+----------------------------+
>
>
> One thing I found odd at the beginning was the fact that the instances are
> using only odd addresses. The installation is clean and no instances
> have been launched before, so all the addresses are available.
>
> The problem now is that I can only ping instances from the compute
> nodes. I am unable to ping any instance from the controller node. Stranger
> yet is the fact that I can ping non-existent addresses such as
> 10.0.108.4, 10.0.108.6, 10.0.108.8 and 10.0.108.4.10.
>
> I also have no connectivity from within the instances to the outside world.
>
> Has this happened to anyone before?
>
>
> On Tue, Sep 4, 2012 at 11:38 PM, Vishvananda Ishaya <vishvananda@xxxxxxxxx>
> wrote:
>
>
> On Sep 4, 2012, at 3:01 PM, Leander Bessa Beernaert <leanderbb@xxxxxxxxx>
> wrote:
>
> Question follows inline below.
>
> On Tue, Sep 4, 2012 at 6:48 PM, Vishvananda Ishaya <vishvananda@xxxxxxxxx>
> wrote:
>
> On Sep 4, 2012, at 8:35 AM, Leander Bessa Beernaert <leanderbb@xxxxxxxxx>
> wrote:
>
> Hello all,
>
>
> I've had a few reports from users testing out the sample installation of
> OpenStack I set up. The reports were all related to problems with
> inter-VM network speeds and connection timeouts, as well as the inability to
> connect to the outside world from within the VM (e.g. ping www.google.com). I'm
> not sure if I set something up wrong, so I have a few questions.
>
>
> The current installation of OpenStack is running with 1 controller
> node and 8 compute nodes. Each node is running Ubuntu 12.04 and Essex with
> the default packages. I'm using the VLAN network manager. There is
> one peculiarity to this setup: since each physical host only has 1 network
> interface, I came up with the following configuration:
>
> - For inter-node communications I set up a VLAN with ID 107
> - Each tenant has its private network on a separate VLAN. Currently
> there are two tenants, one on VLAN 109 (network: 10.0.9.0/24) and another
> on VLAN 110 (network: 10.0.9.0/24).
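For illustration, carving tenant VLANs out of a single NIC looks roughly like this (the interface name eth0 is an assumption; nova-network creates these itself when --vlan_interface is set, so this is only a manual sketch of what VLANManager does):

```shell
# Create an 802.1Q sub-interface for tenant VLAN 109 on the physical NIC
# (requires root and the 8021q kernel module):
ip link add link eth0 name vlan109 type vlan id 109
ip link set vlan109 up

# nova-network then attaches vlan109 to a bridge (br109) and plugs the
# instances' tap devices into that bridge.
```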
>
>
> I'm not a network expert, so please bear with me if I make any outrageous
> statements.
>
>
> 1) When communicating on the private network, the packets are not routed
> through the controller, right? That only happens when the VM needs to
> contact an external source (e.g. google), correct? This report originated
> from users on VLAN 109. They are using network-intensive
> applications which send a lot of data between each of the instances. They
> reported various time-outs and connection drops, as well as slow
> transfer speeds. I'm no network expert, but could this be related to the
> routing, the VLANs, or is it a hardware issue?
>
>
> There are a lot of things that could cause this. You would need to do
> some extensive debugging to find the source of this.
>
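Some possible starting points for that debugging (the interface name and addresses below are assumptions; adjust them to the actual setup):

```shell
# 1. Capture traffic on the tenant VLAN while reproducing the slowdown,
#    then look for retransmissions and drops in Wireshark:
tcpdump -ni vlan109 -w /tmp/vlan109.pcap

# 2. Measure raw TCP throughput between two instances with iperf:
#    on instance A:  iperf -s
#    on instance B:  iperf -c <instance A's 10.0.9.x address>

# 3. Rule out MTU trouble (802.1Q tagging adds 4 bytes per frame) by
#    pinging with "don't fragment" set at the largest unfragmented payload:
ping -M do -s 1472 <other instance>
```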
>
> Any ideas where I can start looking?
>
> Also, communications between two VMs on different compute nodes from the
> same tenant do not need to be routed through the controller node, right?
>
>
> In non-multi_host mode I believe it will go through the controller.
>
> Vish
>
> --
> Cumprimentos / Regards,
> Leander
>
--
Cumprimentos / Regards,
Leander