openstack team mailing list archive
Message #16656
Re: Private instances can't access Internet
I've just killed dnsmasq and restarted the network and nova services. Now
everything is working! :)
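For anyone else hitting this, the kind of commands involved (a sketch only; the service names assume the Ubuntu packages, so adjust for your install):

sudo killall dnsmasq
sudo service networking restart
sudo service nova-network restart
sudo service nova-compute restart

nova-network then respawns dnsmasq on br100 with the fixed range from nova.conf.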
I'll paste my CC nova.conf, my NODE nova.conf, and the interface config below.
Thanks, guys, for helping me.
** /etc/network/interfaces **
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet dhcp
# Bridge for OpenStack
auto br100
iface br100 inet static
bridge_ports eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0
address 10.5.5.2
netmask 255.255.255.0
** Cloud Controller **
#NETWORK
--network_manager=nova.network.manager.FlatDHCPManager
--firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
--public_interface=eth0
--flat_interface=eth1
--flat_network_bridge=br100
--fixed_range=10.5.5.32/27
--network_size=32
--flat_network_dhcp_start=10.5.5.33
--my_ip=150.164.3.236
--multi_host=true
#--enabled_apis=ec2,osapi_compute,osapi_volume,metadata
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--force_dhcp_release
--ec2_private_dns_show
--routing_source_ip=$my_ip
#VOLUMES
--iscsi_helper=tgtadm
--iscsi_ip_prefix=10.5.5
#VNC CONSOLE
--vnc_enabled=true
--vncproxy_url=http://150.164.3.236:6080
--vnc_console_proxy_url=http://150.164.3.236:6080
--novnc_enabled=true
--novncproxy_base_url=http://150.164.3.236:6080/vnc_auto.html
--vncserver_proxyclient_address=$my_ip
--vncserver_listen=$my_ip
** NODE **
#NETWORK
--network_manager=nova.network.manager.FlatDHCPManager
--firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
--public_interface=eth0
--flat_interface=eth1
--flat_network_bridge=br100
--fixed_range=10.5.5.32/27
--network_size=32
--flat_network_dhcp_start=10.5.5.33
--my_ip=150.164.3.240
--multi_host=true
--enabled_apis=ec2,osapi_compute,osapi_volume,metadata
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--force_dhcp_release
--ec2_private_dns_show
--routing_source_ip=$my_ip
#VOLUMES
--iscsi_helper=tgtadm
--iscsi_ip_prefix=10.5.5
#VNC CONSOLE
--vnc_enabled=true
--vncproxy_url=http://150.164.3.236:6080
--vnc_console_proxy_url=http://150.164.3.236:6080
--novnc_enabled=true
--novncproxy_base_url=http://150.164.3.236:6080/vnc_auto.html
--vncserver_proxyclient_address=$my_ip
--vncserver_listen=$my_ip
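With multi_host=true, each node runs its own nova-network and NATs instance traffic out through its own public_interface, so a quick per-host sanity check looks something like this (a generic sketch, not tied to any particular release):

sysctl net.ipv4.ip_forward            # should be 1 on every node
sudo iptables -t nat -S | grep SNAT   # should show 10.5.5.32/27 SNATed to that node's routing_source_ip
ps aux | grep dnsmasq                 # dnsmasq should be running against br100 on that node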
On Tue, Sep 11, 2012 at 5:50 PM, Gui Maluf <guimalufb@xxxxxxxxx> wrote:
> As I said, I'm now trying to get multi_host working.
>
> Now, this is my situation:
> Instances running on the node (nova-{network,compute,volume}) can reach the
> Internet, reach the metadata server, and get the correct routing table (IP addr
> 10.5.5.33). I can even ping them from outside the LAN (using the cc-node as
> gateway to the 10.5.5.0 network).
> Instances running on the cc-node (nova-* plus other services) can't reach the
> Internet and can't reach the metadata server, because they get the routing table
> with IP addr 192.168.1.33 (the libvirt network??). I can't ping them from either
> inside or outside the LAN.
>
> The difference from the files I pasted is:
> Node:
> --my_ip=150.164.3.240
> --multi_host=true
> --enabled_apis=ec2,osapi_compute,osapi_volume,metadata
> --routing_source_ip=150.164.3.240
>
> CC:
> --my_ip=150.164.3.239
> --multi_host=true
> --routing_source_ip=150.164.3.239
>
> Creating network with: nova-manage network create private
> --fixed_range_v4=10.5.5.32/27 --num_networks=1 --bridge=br100
> --bridge_interface=eth1 --network_size=32
>
>
> If I create the network (nova-manage network create) with
> --multi_host=T, the node and cc-node instances can't reach the metadata server
> because both get a 192.168.1.x gateway or the public IP gateway (150.164.x.x).
> If I put --enabled_apis=ec2,osapi_compute,osapi_volume,metadata on the
> cc-node, the same problem as above happens.
>
> I'm really confused, and I'm spending a lot of time trying to get the network
> working. I would really appreciate it if someone could help.
>
> If I've missed some information, please let me know!
>
> Thanks
>
> On Tue, Sep 11, 2012 at 3:43 PM, Gui Maluf <guimalufb@xxxxxxxxx> wrote:
>> My node and my CC are connected through a switch, and both have access to the
>> Internet. I've tried to use multi_host, but with no success. I'm
>> trying to set it up with multi_host again, since it makes more sense in my
>> network setup.
>>
>> After I set multi_host=true and
>> enabled_apis=ec2,osapi_compute,osapi_volume,metadata, my node is
>> working but my cc-node stopped working: instances running on my cc-node
>> can't get the right routing table (192.x.x.x rather than 10.x.x.x) and
>> can't reach the metadata server (this is the same issue I faced using
>> multi_host).
>>
>>
>>
>> On Tue, Sep 11, 2012 at 3:04 PM, Ritesh Nanda <riteshnanda09@xxxxxxxxx> wrote:
>>>
>>> Hello Gui,
>>>
>>> Your config file shows you are using --multi_host.
>>> If you don't use multi-host, all traffic leaves for the Internet from
>>> the controller node.
>>> Just in case: how are your two nodes connected? Directly, or through
>>> a switch?
>>>
>>>
>>> On Tue, Sep 11, 2012 at 11:01 PM, Gui Maluf <guimalufb@xxxxxxxxx> wrote:
>>>>
>>>> I'm facing the same problem and I can't solve it!
>>>> Please, someone help us!
>>>> Instances on the cc-node can reach the Internet, but the node instances can't!
>>>>
>>>> CC-node configs: http://paste.openstack.org/show/20861/
>>>> Node configs: http://paste.openstack.org/show/20862/
>>>>
>>>> PS: I'm not using multi_host
>>>>
>>>> I've tried many things, but I can't make my instances on the node reach the
>>>> Internet.
>>>>
>>>>
>>>>> Dave Pigott
>>>>> Mon, 10 Sep 2012 03:09:34 -0700
>>>>>
>>>>>
>>>>> Hi Jason,
>>>>>
>>>>> Try setting --multi_host in nova.conf
>>>>>
>>>>> Dave
>>>>>
>>>>> Sent from my Aldis Lamp
>>>>>
>>>>> On 7 Sep 2012, at 20:50, Jason Cooper <ja...@xxxxxxxxxx> wrote:
>>>>>
>>>>> > Hi Everyone. I just completed the steps in the OpenStack Compute
>>>>> > Starter
>>>>> > Guide to get OpenStack up and running on my server, and everything is
>>>>> > working
>>>>> > wonderfully except that my private instances cannot access the public
>>>>> > Internet.
>>>>> >
>>>>> > I have configured the physical server on which OpenStack is running to
>>>>> > access
>>>>> > the public Internet over eth0. I have also set up an internal network
>>>>> > on eth1
>>>>> > with a bridge so the instances, which all have fixed private IP
>>>>> > addresses
>>>>> > (e.g. 192.168.4.x), should be able to ping the public Internet through
>>>>> > this
>>>>> > bridge. However, this isn't working, and I'm hoping you can help
>>>>> > explain what
>>>>> > I'm doing wrong.
>>>>> >
>>>>> > I have already tried to set up IP forwarding by following the
>>>>> > instructions at
>>>>> > https://lists.launchpad.net/openstack/msg15559.html, but this did not
>>>>> > help.
>>>>> >
>>>>> > Here is my /etc/network/interfaces:
>>>>> >
>>>>> > # The loopback network interface
>>>>> > auto lo
>>>>> > iface lo inet loopback
>>>>> >
>>>>> > # The primary network interface
>>>>> > auto eth0
>>>>> > iface eth0 inet static
>>>>> > address 10.0.1.130
>>>>> > netmask 255.255.0.0
>>>>> > broadcast 10.0.1.255
>>>>> > gateway 10.0.0.1
>>>>> > dns-nameservers 8.8.8.8
>>>>> >
>>>>> > auto eth1
>>>>> > iface eth1 inet static
>>>>> > address 192.168.3.1
>>>>> > netmask 255.255.255.0
>>>>> > network 192.168.3.0
>>>>> > broadcast 192.168.3.255
>>>>> >
>>>>> >
>>>>> > And here is my /etc/nova/nova.conf:
>>>>> >
>>>>> > --dhcpbridge_flagfile=/etc/nova/nova.conf
>>>>> > --dhcpbridge=/usr/bin/nova-dhcpbridge
>>>>> > --logdir=/var/log/nova
>>>>> > --state_path=/var/lib/nova
>>>>> > --lock_path=/var/lock/nova
>>>>> > --allow_admin_api=true
>>>>> > --use_deprecated_auth=false
>>>>> > --auth_strategy=keystone
>>>>> > --scheduler_driver=nova.scheduler.simple.SimpleScheduler
>>>>> > --s3_host=10.0.1.130
>>>>> > --ec2_host=10.0.1.130
>>>>> > --rabbit_host=10.0.1.130
>>>>> > --cc_host=10.0.1.130
>>>>> > --nova_url=http://10.0.1.130:8774/v1.1/
>>>>> > --routing_source_ip=10.0.1.130
>>>>> > --glance_api_servers=10.0.1.130:9292
>>>>> > --image_service=nova.image.glance.GlanceImageService
>>>>> > --iscsi_ip_prefix=192.168.4
>>>>> > --sql_connection=mysql://novadbadmin:novasecret@10.0.1.130/nova
>>>>> > --ec2_url=http://10.0.1.130:8773/services/Cloud
>>>>> > --keystone_ec2_url=http://10.0.1.130:5000/v2.0/ec2tokens
>>>>> > --api_paste_config=/etc/nova/api-paste.ini
>>>>> > --libvirt_type=kvm
>>>>> > --libvirt_use_virtio_for_bridges=true
>>>>> > --start_guests_on_host_boot=true
>>>>> > --resume_guests_state_on_host_boot=true
>>>>> > # vnc specific configuration
>>>>> > --novnc_enabled=true
>>>>> > --novncproxy_base_url=http://10.0.1.130:6080/vnc_auto.html
>>>>> > --vncserver_proxyclient_address=10.0.1.130
>>>>> > --vncserver_listen=10.0.1.130
>>>>> > # network specific settings
>>>>> > --network_manager=nova.network.manager.FlatDHCPManager
>>>>> > --public_interface=eth0
>>>>> > --flat_interface=eth1
>>>>> > --flat_network_bridge=br100
>>>>> > --fixed_range=192.168.4.1/27
>>>>> > #--floating_range=10.10.10.2/27
>>>>> > --network_size=32
>>>>> > --flat_network_dhcp_start=192.168.4.33
>>>>> > --flat_injected=False
>>>>> > --force_dhcp_release
>>>>> > --iscsi_helper=tgtadm
>>>>> > --connection_type=libvirt
>>>>> > --root_helper=sudo nova-rootwrap
>>>>> > --verbose
>>>>> >
>>>>> >
>>>>> > Lastly, here is the command I used to create the network:
>>>>> >
>>>>> > sudo nova-manage network create private
>>>>> > --fixed_range_v4=192.168.4.32/27
>>>>> > --num_networks=1 --bridge=br100 --bridge_interface=eth1
>>>>> > --network_size=32
>>>>> >
>>>>> >
>>>>> > You can see that I'm not using a floating IP range. My instances are
>>>>> > able to
>>>>> > access the public Internet if I change my configuration to use a
>>>>> > floating
>>>>> > range, but I prefer to find a solution that allows me to assign an
>>>>> > internal
>>>>> > IP to my instances and use the specified bridge to contact the outside
>>>>> > world.
>>>>> >
>>>>> > Any help is appreciated, and many thanks in advance.
>>>>> > - Jason
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> With Regards
>>> Ritesh Nanda
>>
>>
>>
>
>
>
--
guilherme \n
\tab maluf
"To master oneself is a greater victory than to defeat thousands in battle." Sakyamuni