
Re: Configuring with devstack for multiple hardware nodes

 

I took q-dhcp out of the controller's ENABLED_SERVICES (I recall seeing it mentioned as being needed on the controller, but not on the compute nodes). Regardless, it didn't seem to make a difference.

I then did a ps | grep ovs_quantum_agent. The agent was running on the controller, but not on the compute node.
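For reference, this is roughly what I ran on each node to check (the second grep just filters out the grep itself):

ps aux | grep ovs_quantum_agent | grep -v grep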

I tried copying and pasting the command line from the controller, e.g.:

sudo python /opt/stack/quantum/quantum/plugins/openvswitch/agent/ovs_quantum_agent.py --config-file /etc/quantum/quantum.conf --config-file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

and it complained about missing files in /etc/quantum, so I copied the missing files into /etc/quantum and restarted the agent. I went back to the controller, fired up horizon, and basically nothing changed.

(Was copying and pasting the command line the right thing to do?)
(I didn't see any changes to make in the files I copied into /etc/quantum; a few lines referenced the IP of the controller node, but I didn't change them.)
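For what it's worth, this is roughly how I pulled the files over to the compute node (a sketch only; it assumes the files come from the stack user on the controller at 192.168.3.1 and that /etc/quantum already exists on the compute node):

scp -r stack@192.168.3.1:/etc/quantum /tmp/quantum-from-controller
sudo cp -r /tmp/quantum-from-controller/* /etc/quantum/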

If 1) this process needed to be running and it wasn't, and 2) the requisite files were missing in /etc/quantum, one can't be surprised that other issues still exist. Unless something obvious comes to mind, I'd rather not spin my wheels on V1 if you can convince me that the V2 instructions you pointed me at are the way to go for a multi-node setup, and that things are stable enough for me to want to try them (I am not deploying anything, just doing some initial research into quantum and openstack, so no animals will be harmed in the making of this movie :-).

I think what I might do (unless you have other things I should look at, and thank you so much for the help so far) is wipe both nodes, reinstall Ubuntu 12.04, and see if I can make the V2 instructions for multi-node work. I think you guys are more interested in me playing there anyway, right? Before I do that, is horizon supported? (The V2 instructions don't mention horizon; they give an example of spinning up a VM from the command line. I'd be happy to use the command line, but I want to know what to expect.)

By the way, a week ago (before I got some hardware) I was able to spin up a quantum-based, qemu-backed single-node deployment inside an Ubuntu VM hosted on VMware ESX. That was pretty mind-blowing :) I can't wait for my mind to be blown when this multi-node setup works for me, so thanks again for the help.

syd

From: Aaron Rosen [mailto:arosen@xxxxxxxxxx]
Sent: Monday, August 06, 2012 5:47 PM
To: Syd (Sydney) Logan
Cc: openstack@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Openstack] Configuring with devstack for multiple hardware nodes

Hi Syd,

Oops, I didn't see you had q-dhcp set. You should disable this unless you are trying to use quantum with the v2 API (i.e., NOVA_USE_QUANTUM_API=v2).
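In other words, for your current (v1) setup the controller's ENABLED_SERVICES line from your localrc would just drop q-dhcp, something like:

ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-net,n-sch,n-vnc,horizon,mysql,rabbit,openstackx,q-svc,quantum,q-agt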

My guess would be that something is wrong with the tunnels, which is why DHCP isn't working. Can you confirm that ovs_quantum_agent is running on all of your nodes?
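If the agent is running, something along these lines on each node should show whether br-tun has a GRE port pointing at the other host's local_ip (the bridge name here assumes the defaults from ovs_quantum_plugin.ini):

sudo ovs-vsctl show
sudo ovs-vsctl list-ports br-tun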

Thanks,

Aaron

On Mon, Aug 6, 2012 at 5:28 PM, Syd (Sydney) Logan <slogan@xxxxxxxxxxxx> wrote:
Aaron,

+1 for the quick response! Below I've pasted in the *plugin.ini files.
+1 also for the link to the v2 api page - I'll take a look (while you respond to this e-mail :).

If in fact the bridge interface is not the issue (thanks for clarifying), I guess the question is: what generally would keep a VM from being able to acquire an IP address via DHCP? I've read posts clarifying that q-dhcp is not used for this purpose, so we don't have to go down that path. It looks like the VM is unable to get out on the proper net to do its DHCP (the resulting configuration appears to use Google's public DNS; see https://developers.google.com/speed/public-dns/docs/using). I guessed (wrongly, it appears) that the gw bridge was created to provide that ability.
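One thing I can do to narrow it down (just a sketch; the bridge name assumes the devstack default) is watch for the VM's DHCP requests on the integration bridge while it boots:

sudo tcpdump -n -i br-int port 67 or port 68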


syd

The ovs_quantum_plugin.ini from the controller:

[DATABASE]
# This line MUST be changed to actually run the plugin.
# Example:
# sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum
# Replace 127.0.0.1 above with the IP address of the database used by the
# main quantum server. (Leave it as is if the database runs on this host.)
sql_connection = mysql://root:password@localhost/ovs_quantum?charset=utf8
# Database reconnection retry times - in event connectivity is lost
# set to -1 implies an infinite retry count
# sql_max_retries = 10
# Database reconnection interval in seconds - in event connectivity is lost
reconnect_interval = 2

[OVS]
# This enables the new OVSQuantumTunnelAgent which enables tunneling
# between hypervisors. Leave it set to False or omit for legacy behavior.
enable_tunneling = True

# Do not change this parameter unless you have a good reason to.
# This is the name of the OVS integration bridge. There is one per hypervisor.
# The integration bridge acts as a virtual "patch port". All VM VIFs are
# attached to this bridge and then "patched" according to their network
# connectivity.
integration_bridge = br-int

# Only used if enable-tunneling (above) is True.
# In most cases, the default value should be fine.
tunnel_bridge = br-tun

# Uncomment this line if enable-tunneling is True above.
# Set local-ip to be the local IP address of this hypervisor.
local_ip = 192.168.3.1

# Uncomment if you want to use custom VLAN range.
# vlan_min = 1
# vlan_max = 4094

[AGENT]
# Agent's polling interval in seconds
polling_interval = 2
# Change to "sudo quantum-rootwrap" to limit commands that can be run
# as root.
root_helper = sudo
# Use Quantumv2 API
target_v2_api = False

#-----------------------------------------------------------------------------
# Sample Configurations.
#-----------------------------------------------------------------------------
#
# 1. Without tunneling.
# [DATABASE]
# sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum
# [OVS]
enable_tunneling = True
# integration_bridge = br-int
# [AGENT]
# root_helper = sudo
# Add the following setting, if you want to log to a file
# log_file = /var/log/quantum/ovs_quantum_agent.log
# Use Quantumv2 API
# target_v2_api = False
#
# 2. With tunneling.
# [DATABASE]
# sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum
# [OVS]
enable_tunneling = True
# integration_bridge = br-int
# tunnel_bridge = br-tun
# remote-ip-file = /opt/stack/remote-ips.txt
local_ip = 192.168.3.1
# [AGENT]
# root_helper = sudo


And from the compute node:


[DATABASE]
# This line MUST be changed to actually run the plugin.
# Example:
# sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum
# Replace 127.0.0.1 above with the IP address of the database used by the
# main quantum server. (Leave it as is if the database runs on this host.)
sql_connection = mysql://root:password@192.168.3.1/ovs_quantum?charset=utf8
# Database reconnection retry times - in event connectivity is lost
# set to -1 implies an infinite retry count
# sql_max_retries = 10
# Database reconnection interval in seconds - in event connectivity is lost
reconnect_interval = 2

[OVS]
# This enables the new OVSQuantumTunnelAgent which enables tunneling
# between hypervisors. Leave it set to False or omit for legacy behavior.
enable_tunneling = True

# Do not change this parameter unless you have a good reason to.
# This is the name of the OVS integration bridge. There is one per hypervisor.
# The integration bridge acts as a virtual "patch port". All VM VIFs are
# attached to this bridge and then "patched" according to their network
# connectivity.
integration_bridge = br-int

# Only used if enable-tunneling (above) is True.
# In most cases, the default value should be fine.
tunnel_bridge = br-tun

# Uncomment this line if enable-tunneling is True above.
# Set local-ip to be the local IP address of this hypervisor.
local_ip = 192.168.3.2

# Uncomment if you want to use custom VLAN range.
# vlan_min = 1
# vlan_max = 4094

[AGENT]
# Agent's polling interval in seconds
polling_interval = 2
# Change to "sudo quantum-rootwrap" to limit commands that can be run
# as root.
root_helper = sudo
# Use Quantumv2 API
target_v2_api = False

#-----------------------------------------------------------------------------
# Sample Configurations.
#-----------------------------------------------------------------------------
#
# 1. Without tunneling.
# [DATABASE]
# sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum
# [OVS]
enable_tunneling = True
# integration_bridge = br-int
# [AGENT]
# root_helper = sudo
# Add the following setting, if you want to log to a file
# log_file = /var/log/quantum/ovs_quantum_agent.log
# Use Quantumv2 API
# target_v2_api = False
#
# 2. With tunneling.
# [DATABASE]
# sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum
# [OVS]
enable_tunneling = True
# integration_bridge = br-int
# tunnel_bridge = br-tun
# remote-ip-file = /opt/stack/remote-ips.txt
local_ip = 192.168.3.2
# [AGENT]
# root_helper = sudo

From: Aaron Rosen [mailto:arosen@xxxxxxxxxx]
Sent: Monday, August 06, 2012 5:13 PM
To: Syd (Sydney) Logan
Cc: openstack@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Openstack] Configuring with devstack for multiple hardware nodes

Hi Syd,

There should not be an additional gateway interface on the compute nodes, only on the node that has n-net in ENABLED_SERVICES. I'm assuming you want to use the OVSQuantumPlugin? Can you also attach /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini from your two nodes? Also, if you are interested in trying out the Folsom quantum code, the following link should help you get running: http://wiki.openstack.org/RunningQuantumV2Api
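A quick way to sanity-check which node should have it (a sketch; the localrc path assumes a standard devstack checkout location):

grep ENABLED_SERVICES ~/devstack/localrc    # n-net should appear only on the controller
ifconfig -a | grep gw-                      # the gw-* interface should exist only there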

Aaron


On Mon, Aug 6, 2012 at 4:30 PM, Syd (Sydney) Logan <slogan@xxxxxxxxxxxx> wrote:
Hi,

I just posted the following at http://forums.openstack.org/viewtopic.php?f=15&t=1435, then realized this mailing list might be a better place to ask the question.

In summary, I've cobbled together devstack-based nodes to exercise quantum/openvswitch (when I say cobbled, I mean my localrc files are a combination of information from the wiki, from devstack, and from elsewhere, since there is no single definitive template I could use, and it seems the devstack examples are not current with what is happening in Folsom). One node is a controller, one is a compute node. I can launch VMs using horizon on the controller; VMs launched on the controller are pingable, but ones launched on the compute node are not. The big difference I can see is a missing gateway interface on the compute node (no gw-* interface shows up when I run ifconfig there). By inspecting the logs, I can see that those VMs are unable to establish a network, and I think the missing gateway interface may be the root cause.

Below are details:

Two hosts, one configured as a controller, the other configured as a compute node.
Each host is dual-homed: eth0 is connected to the local intranet, and eth1 is configured on a local net, 192.168.3.0.
On the controller host, I used devstack with the following localrc (an aggregation of stuff I found on the devstack site and, more recently, on the quantum wiki -- it would be nice if complete localrc templates for a controller and a compute node supporting devstack and openvswitch were published on the devstack site or the wiki; perhaps, since we are not yet at Folsom, it makes sense that they don't exist. If I get something working, I will share my configuration in its entirety at whatever is the most appropriate place). Anyway, the controller host localrc is:

HOST_IP=192.168.3.1
FLAT_INTERFACE=eth1
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.3.128/25
MULTI_HOST=True
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=xyzpdqlazydog
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-net,n-sch,n-vnc,horizon,mysql,rabbit,openstackx,q-svc,quantum,q-agt,q-dhcp
Q_PLUGIN=openvswitch
Q_AUTH_STRATEGY=noauth

If I run stack.sh on this host, I get the following nova.conf:

[DEFAULT]
verbose=True
auth_strategy=keystone
allow_resize_to_same_host=True
root_helper=sudo /usr/local/bin/nova-rootwrap /etc/nova/rootwrap.conf
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
dhcpbridge_flagfile=/etc/nova/nova.conf
fixed_range=10.4.128.0/20
s3_host=192.168.3.1
s3_port=3333
network_manager=nova.network.quantum.manager.QuantumManager
quantum_connection_host=localhost
quantum_connection_port=9696
quantum_use_dhcp=True
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
my_ip=192.168.3.1
public_interface=br100
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth1
sql_connection=mysql://root:password@localhost/nova?charset=utf8
libvirt_type=kvm
libvirt_cpu_mode=none
instance_name_template=instance-%08x
novncproxy_base_url=http://192.168.3.1:6080/vnc_auto.html
xvpvncproxy_base_url=http://192.168.3.1:6081/console
vncserver_listen=127.0.0.1
vncserver_proxyclient_address=127.0.0.1
api_paste_config=/etc/nova/api-paste.ini
image_service=nova.image.glance.GlanceImageService
ec2_dmz_host=192.168.3.1
rabbit_host=localhost
rabbit_password=password
glance_api_servers=192.168.3.1:9292
force_dhcp_release=True
multi_host=True
send_arp_for_ha=True
logging_context_format_string=%(asctime)s %(color)s%(levelname)s %(name)s [^[[01;36m%(request_id)s ^[[00;36m%(user_name)s %(project_name)s%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
logging_default_format_string=%(asctime)s %(color)s%(levelname)s %(name)s [^[[00;36m-%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
logging_debug_format_suffix=^[[00;33mfrom (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d^[[00m
logging_exception_prefix=%(color)s%(asctime)s TRACE %(name)s ^[[01;35m%(instance)s^[[00m
compute_driver=libvirt.LibvirtDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
enabled_apis=ec2,osapi_compute,osapi_volume,metadata

If I run horizon, I can launch VMs and ping them. If I look at the logs generated by the VMs, they are able to get a network. Furthermore, I see the following network interface in addition to the tap interfaces generated for each VM:

gw-4f16e8db-20 Link encap:Ethernet HWaddr fa:16:3e:08:e0:2d
inet addr:10.4.128.1 Bcast:10.4.143.255 Mask:255.255.240.0
inet6 addr: fe80::f816:3eff:fe08:e02d/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:468 (468.0 B)

Now, for the compute node, I use the following localrc:

HOST_IP=192.168.3.2
FLAT_INTERFACE=eth1
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.3.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=xyzpdqlazydog
Q_HOST=192.168.3.1
MYSQL_HOST=192.168.3.1
RABBIT_HOST=192.168.3.1
GLANCE_HOSTPORT=192.168.3.1:9292
ENABLED_SERVICES=n-cpu,rabbit,g-api,n-net,quantum,q-agt
Q_PLUGIN=openvswitch
Q_AUTH_STRATEGY=noauth

The resulting nova.conf is:

[DEFAULT]
verbose=True
auth_strategy=keystone
allow_resize_to_same_host=True
root_helper=sudo /usr/local/bin/nova-rootwrap /etc/nova/rootwrap.conf
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
dhcpbridge_flagfile=/etc/nova/nova.conf
fixed_range=10.4.128.0/20
s3_host=192.168.3.2
s3_port=3333
network_manager=nova.network.quantum.manager.QuantumManager
quantum_connection_host=192.168.3.1
quantum_connection_port=9696
quantum_use_dhcp=True
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
my_ip=192.168.3.2
public_interface=br100
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth1
sql_connection=mysql://root:password@192.168.3.1/nova?charset=utf8
libvirt_type=kvm
libvirt_cpu_mode=none
instance_name_template=instance-%08x
novncproxy_base_url=http://192.168.3.2:6080/vnc_auto.html
xvpvncproxy_base_url=http://192.168.3.2:6081/console
vncserver_listen=127.0.0.1
vncserver_proxyclient_address=127.0.0.1
api_paste_config=/etc/nova/api-paste.ini
image_service=nova.image.glance.GlanceImageService
ec2_dmz_host=192.168.3.2
rabbit_host=192.168.3.1
rabbit_password=password
glance_api_servers=192.168.3.1:9292
force_dhcp_release=True
multi_host=True
send_arp_for_ha=True
api_rate_limit=False
logging_context_format_string=%(asctime)s %(color)s%(levelname)s %(name)s [^[[01;36m%(request_id)s ^[[00;36m%(user_name)s %(project_name)s%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
logging_default_format_string=%(asctime)s %(color)s%(levelname)s %(name)s [^[[00;36m-%(color)s] ^[[01;35m%(instance)s%(color)s%(message)s^[[00m
logging_debug_format_suffix=^[[00;33mfrom (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d^[[00m
logging_exception_prefix=%(color)s%(asctime)s TRACE %(name)s ^[[01;35m%(instance)s^[[00m
compute_driver=libvirt.LibvirtDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

I can spin up VMs on this host (usually, when I run horizon on the controller, it is this host on which the first VM is launched), and they are assigned the expected IP addresses in the range 10.4.128.*.

Unlike the VMs on the controller, I cannot ping these (from either the controller (less worrisome) or the compute node itself (very worrisome)). Looking at the console log for such a VM, it is not getting any network. The other major obvious difference is that there is no gateway interface when I do an ifconfig on the compute node.

It is this last point (the lack of a gateway interface) that seems most likely to me to be the issue. Is there something I can run after launching devstack on the controller, before I try to launch VMs, that will cause that gw to be created?
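In case it helps, this is roughly how I've been comparing the two nodes (bridge names assume the devstack defaults):

sudo ovs-vsctl list-ports br-int    # the gw-* port and the VM tap ports show up here on the controller
ifconfig -a | grep gw-              # nothing at all on the compute node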

I did some tracebacks in the Python code on the controller, and it appears the gateway on the controller is being created by the quantum (???) service during its initialization (I grepped around for "gw-" to identify where I should be putting tracebacks). From what I have read on the net, the compute node's localrc should not be enabling q-svc (and this makes sense, given it points back at 192.168.3.1 for quantum, as well as for other services).

Again, I'm hoping I mostly have the localrc contents right, and that maybe I just need to add some commands to the end of stack.sh to finish it off. It's been frustrating seeing the VMs get launched only to not be able to ping them (but damn, it's pretty cool that they spin up, don't you think?). I still have a lot to learn (just two weeks into this), but I'm kinda stuck on this issue.

Regards,

syd



