
openstack team mailing list archive

Re: multi_host networking, but not on all nodes?


If you are using VLAN mode, you can run multiple nova-network services.  Each project network is randomly assigned to one host when it is instantiated.  There is no automatic failover from one node to another, but you could use the strategy that NTT designed before HA was available: essentially DRBD and Heartbeat.  You could, for example, run 2 or 3 network nodes to spread the traffic and pair each one with a hot failover.
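To make that concrete, here is a rough sketch of the flagfile-style nova.conf for a dedicated network node in this setup.  The interface names and values are illustrative assumptions, not a tested configuration:

```
# Illustrative nova.conf excerpt for a dedicated nova-network host
# running in VLAN mode (interface names/values are assumptions):
--network_manager=nova.network.manager.VlanManager
--vlan_interface=eth1
--public_interface=eth0
# Do NOT set --multi_host; instead, start the nova-network service on
# only the 2-3 hosts that should carry network traffic.  Each project
# network will be assigned to one of them when it is created.
```

The key point is that nothing forces nova-network onto every compute node; scheduling a project network to a host happens when the network is first used, among whichever hosts run the service.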

Also, if you are worried about using up IPs, there are tricky ways of giving out the same IP to all nova-network hosts for a given network instead of assigning a different IP to every host.  It is a one-line change in network/manager.py, but it requires that you manually create ebtables rules to avoid IP conflicts.
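For anyone attempting the shared-IP trick, the ebtables side looks roughly like the following.  The interface name and gateway address are assumptions for illustration; these are not the exact rules nova generates:

```
# Illustrative only: with several nova-network hosts answering for the
# same gateway IP (here 10.0.0.1), keep ARP traffic for that IP from
# leaking between hosts so each guest keeps talking to its local gateway.

# Drop incoming ARP replies/requests for the shared IP on the trunk side:
ebtables -t nat -A PREROUTING -i eth1 -p ARP --arp-ip-dst 10.0.0.1 -j DROP

# Drop ARP packets claiming to be from the shared IP that arrive from
# other hosts, to avoid ARP-cache conflicts:
ebtables -t nat -A PREROUTING -i eth1 -p ARP --arp-ip-src 10.0.0.1 -j DROP
```

Without rules like these, every host answers ARP for the same address on the shared VLAN and guests will flap between gateways.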


On Feb 7, 2012, at 1:27 PM, Nathanael Burton wrote:

> With the default networking there's a single nova-network service.
> With the --multi_host option, 'set_network_host' sets every instance
> to use their host as the nova-network node, effectively requiring
> nova-network to run on every nova-compute.  The multi_host mode
> greatly helps HA and consolidates fault domains, but at the cost of
> increased complexity and IP sprawl when using the VLAN networking
> model, as each host in the zone now has to have an IP on every VLAN.
> What I think I'm looking for is a middle ground where you can run
> multiple nova-network nodes, but not equal to the number of compute
> nodes.  Basically a similar ability as implemented with the
> nova-volume service; the ability to scale the nova-network nodes
> independently from the computes.  The big downside is that you no
> longer have the benefit of combined fault domains (network/compute).
> Is any of this possible today?  Does Quantum with OpenvSwitch handle
> any of this either?
> Thoughts?
> Thanks,
> Nate
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@xxxxxxxxxxxxxxxxxxx
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp