fuel-dev team mailing list archive

Re: [fuel] multiple L3 routes

 

Hi

Before we start, can we discuss the more general networking model in Fuel?
I mean, why do we use such an enormous number of VLANs: management, storage, floating, private, and admin networks? In most cases customers will need only two or three networks (sketched below):
- a private network for communication between nodes (overlay network setup, and OpenStack traffic such as RabbitMQ, MySQL, storage, etc.),
- a public/floating network to expose the API, Horizon, and of course floating IPs for VMs (on controller nodes only),
- an admin network for PXE and Fuel deployment (additionally, the admin network can be reused as the private network after deployment).
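
To illustrate, the split might look roughly like this (the network names, VLAN ids, and role groupings below are only examples, not Fuel's actual network_scheme format):

# Illustrative sketch of the proposed three-network model; names, VLAN
# ids, and role groupings are examples, not Fuel's real network_scheme.
SIMPLIFIED_NETWORKS = {
    "private": {
        "vlan": 101,               # one VLAN instead of several
        "nodes": "all",
        "carries": ["overlay/tunnel traffic", "RabbitMQ", "MySQL", "storage"],
    },
    "public": {
        "vlan": 102,
        "nodes": "controllers only",
        "carries": ["public API", "Horizon", "floating IPs for VMs"],
    },
    "admin": {
        "vlan": None,              # untagged PXE network
        "nodes": "all",
        "carries": ["PXE boot", "Fuel provisioning"],
        "note": "reusable as the private network after deployment",
    },
}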

This would make the overly complicated OVS configuration simpler, easier to develop, and more flexible for customers to set up.
There would be no need to provide multiple static (or dynamic, e.g. OSPF) routes per network in separate routing tables. A single default route should be enough (maybe with one extra static route).

Additionally, Fuel would be ready to set up nodes in an L3 top-of-rack environment without custom solutions or added L3 routing.

Further network-related topics to discuss (based on customer feedback):
- making the Fuel admin network span more than a single layer 2 network (the DHCP-relay-on-ToR-switches option),
- running OpenStack controllers in HA mode outside a single layer 2 network (how to solve the problems of HAProxy and VIPs confined to one L2 network/VLAN): an external load balancer, or DNS load balancing,
- CoS marking of storage hosts' packets when there is no separate storage network (VLAN),
- reuse of the Fuel admin network after the OpenStack deployment.

I will provide diagrams of how I see this soon.

-Przemek

On Feb 11, 2014, at 3:31 PM, Mike Scherbakov <mscherbakov@xxxxxxxxxxxx> wrote:

> + Przemek
> 
> 
> On Tue, Feb 11, 2014 at 10:53 AM, Andrew Woodward <xarses@xxxxxxxxx> wrote:
> Vladimir,
> 
> The bp is https://blueprints.launchpad.net/fuel/+spec/multiple-cluster-networks; Ryan created it a couple of days ago. We will work on getting some diagrams together to help visualize it.
> 
> The short of it is that the customer wants to use separate network segments between racks, cages, or whatever their fancy is (hopefully contained in near proximity in the same facility). This type of network topology is usually referred to as spine-and-leaf, and it has become more common in larger datacenters because it minimizes L2 domains, increases bandwidth throughput, and allows L3 self-healing, which is not as easily possible in a large L2 domain.
> 
> Because the customer requires deploying this way, Fuel needs to support storing and processing many sets of the networks we would normally deploy (fuelweb_admin, public, management, storage). In our BP we actually propose moving the networks away from being bound directly to clusters; instead, we bind them to nodes (calling these sets Node Groups). This doubles as a mechanism whereby, if the customer wants to place nodes belonging to the same Node Group in multiple clusters, they would only have to define the Node Group once.
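
A minimal sketch of the Node Group binding described above; the class and field names are hypothetical illustrations, not the actual fuel-web models:

# Minimal sketch of the proposed Node Group binding. Class and field
# names are hypothetical, not the actual fuel-web (nailgun) models.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Network:
    name: str                  # e.g. "fuelweb_admin", "public", "management", "storage"
    cidr: str
    vlan: Optional[int] = None


@dataclass
class NodeGroup:
    """One set of networks per rack/leaf, defined once and shared."""
    name: str
    networks: List[Network] = field(default_factory=list)


@dataclass
class Node:
    hostname: str
    group: NodeGroup           # networks now come from the group, not the cluster


# Two racks, each with its own L3 segment carrying the same logical networks.
rack1 = NodeGroup("rack1", [Network("management", "10.1.0.0/24", vlan=101)])
rack2 = NodeGroup("rack2", [Network("management", "10.2.0.0/24", vlan=101)])
nodes = [Node("node-1", rack1), Node("node-2", rack2)]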
> 
> We have a working beta in our forked branches:
> https://github.com/xarses/fuel-web
> https://github.com/xarses/fuel-library
> 
> Setting up a lab environment is a bit tricky, and I will work on making fuel-master able to set it up automatically.
> 
> As to the thread's original question, we solved the problem by creating rule-based routes for each interface to return traffic: https://github.com/xarses/fuel-library/commit/ef70dafae8fa8096526691910a2e1cf148d58bf1
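
In other words, that commit amounts to source-based policy routing. Here is a minimal Python sketch of the idea; the subnets, gateways, and table ids are illustrative assumptions, and the actual change lives in fuel-library, not in Python:

# Sketch of per-network source policy routing: replies to traffic that
# arrived on a given network leave via that network's own gateway.
# Subnets, gateways, and table ids below are illustrative only;
# requires root and the iproute2 "ip" tool.
import subprocess

# (subnet, gateway, routing-table id) for each deployed network
NETWORKS = [
    ("10.20.0.0/24", "10.20.0.1", 100),    # e.g. management
    ("172.16.0.0/24", "172.16.0.1", 101),  # e.g. public
]

for cidr, gateway, table in NETWORKS:
    # A dedicated routing table whose only entry is this network's gateway.
    subprocess.check_call(
        ["ip", "route", "replace", "default", "via", gateway, "table", str(table)]
    )
    # A rule so traffic sourced from this subnet consults that table,
    # making return traffic exit through the interface it came in on.
    subprocess.check_call(
        ["ip", "rule", "add", "from", cidr, "table", str(table)]
    )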
> 
> 
> 
> -- 
> Mike Scherbakov
> #mihgen

