Re: Accessing VMs in Flat DHCP mode with multiple hosts
Hi again,
So the problem is now solved.
I hereby post the solution for people from the future.
1. The ping between the compute node and the controller was going through an
IP route, so it wasn't using only layer 2. This meant that no DHCP request
was reaching the network controller.
2. The hosts and the VMs should be in the same subnet.
3. We needed to killall dnsmasq and restart nova-network (commands below).
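Concretely, on our Ubuntu-packaged Essex install that came down to roughly
the following on the network controller (the exact service name may differ
on your distribution):
sudo killall dnsmasq
sudo service nova-network restart    # nova-network respawns dnsmasq with the right flags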
tcpdump on br100 is useful to track DHCP requests. The ARP tables are useful
as well, to make sure each host sees the other at layer 2.
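The checks looked roughly like this (br100 being the flat_network_bridge
from nova.conf):
sudo tcpdump -i br100 -n port 67 or port 68    # watch for DHCP requests reaching the controller
arp -n                                         # each host should list the other's MAC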
thank you all,
yours,
michaël
Michaël Van de Borne
R&D Engineer, SOA team, CETIC
Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi
On 10/05/2012 15:31, Yong Sheng Gong wrote:
Hi,
First you have to make sure that the network between your control node's
br100 and your compute node's br100 is connected.
Then, can you show the output of the following on the control node:
ps -ef | grep dnsmasq
brctl show
ifconfig
2. Can you log in to your VM via VNC to check the eth0 configuration, and
then try to run udhcpc?
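For example, something along these lines (assuming a busybox-based guest
image such as CirrOS, where udhcpc is the DHCP client):
sudo tcpdump -i br100 -n arp    # on both nodes: do you see each other's ARP traffic?
sudo udhcpc -i eth0             # inside the VM: request a lease by hand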
Thanks
-----openstack-bounces+gongysh=cn.ibm.com@xxxxxxxxxxxxxxxxxxx wrote: -----
To: "openstack@xxxxxxxxxxxxxxxxxxx" <openstack@xxxxxxxxxxxxxxxxxxx>
From: Michaël Van de Borne <michael.vandeborne@xxxxxxxx>
Sent by: openstack-bounces+gongysh=cn.ibm.com@xxxxxxxxxxxxxxxxxxx
Date: 05/10/2012 09:03PM
Subject: [Openstack] Accessing VMs in Flat DHCP mode with multiple hosts
Hello,
I'm running into trouble accessing my instances.
I have 3 nodes:
1. a Proxmox host that virtualizes my controller node in KVM
1.1 the controller node (10.10.200.50) runs keystone,
nova-api, network, scheduler, vncproxy and volumes, but NOT compute,
as it is already a VM
2. glance on a physical node
3. compute on a physical node
my nova.conf network config is:
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--routing_source_ip=10.10.200.50
--libvirt_use_virtio_for_bridges=true
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=eth0
--flat_interface=eth1
--flat_network_bridge=br100
--fixed_range=192.168.200.0/24
--floating_range=10.10.200.0/24
--network_size=256
--flat_network_dhcp_start=192.168.200.5
--flat_injected=False
--force_dhcp_release
--network_host=10.10.200.50
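(For anyone comparing setups, the bridge side of this config can be
inspected on each node with, e.g.:)
brctl show                # eth1 should be enslaved to br100
ip addr show br100
ip route                  # traffic for the fixed range should not leave via eth0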
I even explicitly allowed ICMP and TCP port 22 traffic like this:
euca-authorize -P icmp -t -1:-1 default
euca-authorize -P tcp -p 22 default
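(The resulting rules can be double-checked with, e.g.:)
euca-describe-groups                  # the default group should list the icmp and tcp/22 rules
sudo iptables -L -n | grep -i nova    # nova turns them into iptables chains on the compute node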
Before setting these rules, I was getting 'Operation not
permitted' when pinging the VM from the compute node. After
setting them, I just get no output at all (not even 'Destination
Host Unreachable').
The network was created like this:
nova-manage network create private
--fixed_range_v4=192.168.200.0/24 --bridge=br100
--bridge_interface=eth1 --num_networks=1 --network_size=256
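(What got created can be confirmed with the standard nova-manage
subcommands:)
sudo nova-manage network list
sudo nova-manage fixed list    # shows the fixed IPs and which instance/host they map to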
However, I cannot ping or ssh my instances once they're active. I
have already set up such an Essex environment before, but the controller
node was physical. Moreover, every example in the docs presents a
controller node that runs nova-compute.
So I'm wondering whether either:
- having the controller in a VM
- or not running compute on the controller
would prevent things from working properly.
What can I check? iptables? Is dnsmasq unable to give the VM an
address?
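(Some places to look, assuming the default Essex log location and
<instance-id> standing in for the actual instance ID:)
ps -ef | grep dnsmasq                      # is dnsmasq running with the fixed range?
tail -f /var/log/nova/nova-network.log     # DHCP lease activity shows up here
euca-get-console-output <instance-id>      # does the guest report getting an address?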
I'm running out of ideas. Any suggestion would be highly appreciated.
Thank you,
michaël
--
Michaël Van de Borne
R&D Engineer, SOA team, CETIC
Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype:
mikemowgli
www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to : openstack@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp