
openstack team mailing list archive

Re: Configuring with devstack for multiple hardware nodes

 

Hi Syd,

There should not be an additional gateway interface on the compute nodes,
only on the node that has n-net in ENABLED_SERVICES. I'm assuming you want
to use the OVSQuantumPlugin? Can you also
attach /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini from your
two nodes? Also, if you are interested in trying out the Folsom quantum
code, the following link should help you get running:
http://wiki.openstack.org/RunningQuantumV2Api
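
A quick way to see which node is actually responsible for the gateway is
to check where nova-network runs (a hedged suggestion; adjust to your
setup):

    # on each node: only the n-net node should be running nova-network,
    # and only that node should grow a gw-* device
    ps aux | grep nova-network
    ifconfig | grep '^gw-'

For the plugin config, the fields most worth diffing between the two
nodes are the database connection and the integration bridge -- a sketch
only, since the exact key names vary by branch:

    [DATABASE]
    # both nodes should point at the same DB, typically on the controller
    sql_connection = mysql://root:password@192.168.3.1/ovs_quantum?charset=utf8

    [OVS]
    # both agents should use the same integration bridge
    integration-bridge = br-int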

Aaron


On Mon, Aug 6, 2012 at 4:30 PM, Syd (Sydney) Logan <slogan@xxxxxxxxxxxx> wrote:

> Hi,
>
> I just posted the following at
> http://forums.openstack.org/viewtopic.php?f=15&t=1435, then realized this
> mailing list might be a better place to ask the question.
>
> In summary, I've cobbled together devstack-based nodes to exercise
> quantum/openvswitch (when I say cobbled, I mean my result combines
> information from the wiki, from devstack, and from elsewhere to create
> my localrc files, since there is no single definitive template I could
> use, and it seems the devstack examples are not current with what is
> happening on Folsom). One node is a controller, one is a compute node. I
> can launch using horizon on the controller; VMs launched on the
> controller are pingable, but ones launched on the compute node are not.
> The big difference I can see is a missing gateway interface on the
> compute node (a gw-* interface is displayed when I run ifconfig on the
> controller, but not on the compute node). By inspection of the logs, I
> can see that the VMs are unable to establish a network, and I think the
> missing gateway interface may be the root cause.
>
> Below are the details:
>
> Two hosts: one configured as a controller, the other as a compute node.
> Each host is dual-homed; eth0 is connected to the local intranet, and
> eth1 is configured on a local net, 192.168.3.0.
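>
> For reference, eth1 on each host is set up along these lines -- a
> minimal sketch, assuming an Ubuntu-style /etc/network/interfaces; the
> addresses are the ones from this setup:
>
>     auto eth1
>     iface eth1 inet static
>         # 192.168.3.1 on the controller, 192.168.3.2 on the compute node
>         address 192.168.3.1
>         netmask 255.255.255.0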
>
> On the controller host, I used devstack with the following localrc, which
> is an aggregation of material from the devstack site and, more recently,
> from the quantum wiki. (It would be nice if complete templates for a
> controller and a compute node supporting devstack and openvswitch were
> published on the devstack site or the wiki; since we are not yet at
> Folsom, it perhaps makes sense that they don't exist. If I get something
> working, I will share my configuration in its entirety in whatever is the
> most appropriate place.) Anyway, the controller host localrc is:
>
> HOST_IP=192.168.3.1
> FLAT_INTERFACE=eth1
> FIXED_RANGE=10.4.128.0/20
> FIXED_NETWORK_SIZE=4096
> FLOATING_RANGE=192.168.3.128/25
> MULTI_HOST=True
> LOGFILE=/opt/stack/logs/stack.sh.log
> ADMIN_PASSWORD=password
> MYSQL_PASSWORD=password
> RABBIT_PASSWORD=password
> SERVICE_PASSWORD=password
> SERVICE_TOKEN=xyzpdqlazydog
> ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-net,n-sch,n-vnc,horizon,mysql,rabbit,openstackx,q-svc,quantum,q-agt,q-dhcp
> Q_PLUGIN=openvswitch
> Q_AUTH_STRATEGY=noauth
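>
> (As a sanity check on the sizes above -- my arithmetic, not devstack
> output: a /20 prefix spans 2^(32-20) = 4096 addresses, which matches
> FIXED_NETWORK_SIZE:
>
>     $ python -c 'print 2**(32-20)'
>     4096
> )
>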
> If I run stack.sh on this host, I get the following nova.conf:
>
> [DEFAULT]
> verbose=True
> auth_strategy=keystone
> allow_resize_to_same_host=True
> root_helper=sudo /usr/local/bin/nova-rootwrap /etc/nova/rootwrap.conf
> compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> dhcpbridge_flagfile=/etc/nova/nova.conf
> fixed_range=10.4.128.0/20
> s3_host=192.168.3.1
> s3_port=3333
> network_manager=nova.network.quantum.manager.QuantumManager
> quantum_connection_host=localhost
> quantum_connection_port=9696
> quantum_use_dhcp=True
> libvirt_vif_type=ethernet
> libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
> linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
> osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
> my_ip=192.168.3.1
> public_interface=br100
> vlan_interface=eth0
> flat_network_bridge=br100
> flat_interface=eth1
> sql_connection=mysql://root:password@localhost/nova?charset=utf8
> libvirt_type=kvm
> libvirt_cpu_mode=none
> instance_name_template=instance-%08x
> novncproxy_base_url=http://192.168.3.1:6080/vnc_auto.html
> xvpvncproxy_base_url=http://192.168.3.1:6081/console
> vncserver_listen=127.0.0.1
> vncserver_proxyclient_address=127.0.0.1
> api_paste_config=/etc/nova/api-paste.ini
> image_service=nova.image.glance.GlanceImageService
> ec2_dmz_host=192.168.3.1
> rabbit_host=localhost
> rabbit_password=password
> glance_api_servers=192.168.3.1:9292
> force_dhcp_release=True
> multi_host=True
> send_arp_for_ha=True
> logging_context_format_string=%(asctime)s %(color)s%(levelname)s %(name)s [^[[01;36m%(request_id)s ^[[00;36m%(user_name)s %(project_name)s%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
> logging_default_format_string=%(asctime)s %(color)s%(levelname)s %(name)s [^[[00;36m-%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
> logging_debug_format_suffix=^[[00;33mfrom (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d^[[00m
> logging_exception_prefix=%(color)s%(asctime)s TRACE %(name)s ^[[01;35m%(instance)s^[[00m
> compute_driver=libvirt.LibvirtDriver
> firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
> enabled_apis=ec2,osapi_compute,osapi_volume,metadata
>
> If I run horizon, I can launch VMs and ping them. If I look at the logs
> generated by the VMs, they are able to get a network. Furthermore, I get
> the following network interface, in addition to the tap interfaces
> created for each VM:
>
> gw-4f16e8db-20 Link encap:Ethernet  HWaddr fa:16:3e:08:e0:2d
>           inet addr:10.4.128.1  Bcast:10.4.143.255  Mask:255.255.240.0
>           inet6 addr: fe80::f816:3eff:fe08:e02d/64 Scope:Link
>           UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:0 (0.0 B)  TX bytes:468 (468.0 B)
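>
> A useful comparison between the two nodes at this point (assuming the
> OVS plugin created an integration bridge, br-int, on both) is:
>
>     # list bridges and their ports on each node; the gw-* and tap*
>     # devices should appear as ports on the integration bridge
>     sudo ovs-vsctl show
>     sudo ovs-vsctl list-ports br-int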
>
> Now, for the compute node, I use the following:
>
> HOST_IP=192.168.3.2
> FLAT_INTERFACE=eth1
> FIXED_RANGE=10.4.128.0/20
> FIXED_NETWORK_SIZE=4096
> FLOATING_RANGE=192.168.3.128/25
> MULTI_HOST=1
> LOGFILE=/opt/stack/logs/stack.sh.log
> ADMIN_PASSWORD=password
> MYSQL_PASSWORD=password
> RABBIT_PASSWORD=password
> SERVICE_PASSWORD=password
> SERVICE_TOKEN=xyzpdqlazydog
> Q_HOST=192.168.3.1
> MYSQL_HOST=192.168.3.1
> RABBIT_HOST=192.168.3.1
> GLANCE_HOSTPORT=192.168.3.1:9292
> ENABLED_SERVICES=n-cpu,rabbit,g-api,n-net,quantum,q-agt
> Q_PLUGIN=openvswitch
> Q_AUTH_STRATEGY=noauth
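>
> (A quick way, I assume, to confirm the compute node registered with the
> controller after stacking is to run, on the controller:
>
>     nova-manage service list
>
> which should list a nova-compute service on the second host if
> registration worked.)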
>
> The resulting nova.conf is:
>
> [DEFAULT]
> verbose=True
> auth_strategy=keystone
> allow_resize_to_same_host=True
> root_helper=sudo /usr/local/bin/nova-rootwrap /etc/nova/rootwrap.conf
> compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> dhcpbridge_flagfile=/etc/nova/nova.conf
> fixed_range=10.4.128.0/20
> s3_host=192.168.3.2
> s3_port=3333
> network_manager=nova.network.quantum.manager.QuantumManager
> quantum_connection_host=192.168.3.1
> quantum_connection_port=9696
> quantum_use_dhcp=True
> libvirt_vif_type=ethernet
> libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
> linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
> osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
> my_ip=192.168.3.2
> public_interface=br100
> vlan_interface=eth0
> flat_network_bridge=br100
> flat_interface=eth1
> sql_connection=mysql://root:password@192.168.3.1/nova?charset=utf8
> libvirt_type=kvm
> libvirt_cpu_mode=none
> instance_name_template=instance-%08x
> novncproxy_base_url=http://192.168.3.2:6080/vnc_auto.html
> xvpvncproxy_base_url=http://192.168.3.2:6081/console
> vncserver_listen=127.0.0.1
> vncserver_proxyclient_address=127.0.0.1
> api_paste_config=/etc/nova/api-paste.ini
> image_service=nova.image.glance.GlanceImageService
> ec2_dmz_host=192.168.3.2
> rabbit_host=192.168.3.1
> rabbit_password=password
> glance_api_servers=192.168.3.1:9292
> force_dhcp_release=True
> multi_host=True
> send_arp_for_ha=True
> api_rate_limit=False
> logging_context_format_string=%(asctime)s %(color)s%(levelname)s %(name)s [^[[01;36m%(request_id)s ^[[00;36m%(user_name)s %(project_name)s%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
> logging_default_format_string=%(asctime)s %(color)s%(levelname)s %(name)s [^[[00;36m-%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
> logging_debug_format_suffix=^[[00;33mfrom (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d^[[00m
> logging_exception_prefix=%(color)s%(asctime)s TRACE %(name)s ^[[01;35m%(instance)s^[[00m
> compute_driver=libvirt.LibvirtDriver
> firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
>
> I can spin up VMs on this host (usually, when I run horizon on the
> controller, it is this host on which the first VM is launched). I get
> the expected IP addresses in the range 10.4.128.*.
>
> Unlike the VMs on the controller host, I cannot ping these, from either
> the controller (less worrisome) or the compute node itself (very
> worrisome). I looked at the console log for the VM; it is not getting
> any network. The other major obvious difference is that there is no
> gateway interface device when I do an ifconfig on the compute node.
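>
> One way to narrow this down, I assume, would be to watch eth1 on both
> nodes while a VM boots, to see whether its DHCP requests ever leave the
> compute node:
>
>     sudo tcpdump -i eth1 -n port 67 or port 68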
>
> It is this last point (the lack of a gateway interface) that seems most
> likely to me to be the issue. Is there something I can run after
> launching devstack on the controller, before I try to launch VMs, that
> will cause that gw-* device to be created?
>
> I put some tracebacks in the python code on the controller, and it
> appears the gateway on the controller is being created by the quantum
> (???) service during its initialization (I grepped around for "gw-" to
> identify where I should be putting tracebacks). According to what I have
> read on the net, localrc should not be enabling q-svc on the compute
> node (and this makes sense given that it points back at 192.168.3.1 for
> quantum, as well as for other services).
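>
> For anyone following along, the grep was along these lines (path
> assuming a default devstack checkout under /opt/stack):
>
>     grep -rn 'gw-' /opt/stack/nova/nova/network/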
>
> Again, I'm hoping I mostly have the localrc contents right, and that
> maybe I just need to add some commands to the end of stack.sh to finish
> it off. It's been frustrating seeing the VMs get launched only to not be
> able to ping them (but damn, it's pretty cool that they spin up, don't
> you think?). I have a lot to learn still (just 2 weeks into this), but
> I'm kinda stuck on this issue.
>
> Regards,
>
> syd
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@xxxxxxxxxxxxxxxxxxx
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
