
yahoo-eng-team team mailing list archive

[Bug 1788023] Re: neutron does not form mesh tunnel overlay between different ml2 drivers.

 

Bug closed due to lack of activity; please feel free to reopen if
needed.

** Changed in: neutron
       Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1788023

Title:
  neutron does not form mesh tunnel overlay between different ml2 drivers.

Status in neutron:
  Won't Fix

Bug description:
  * Summary: neutron does not form mesh tunnel overlay between different
  ml2 drivers.

  * High level description: When using multiple neutron ml2 drivers it is expected that vms on hosts with different ml2 backends should be able to communicate, as segmentation types/ids are centralised in neutron and are not backend specific. when using provider networks this works; however, when using vxlan
  or other tunneled network types that require a unicast mesh to be created, it fails.

  
  * Step-by-step reproduction steps: 
  deploy a multinode devstack with both linux bridge and ovs nodes.
  on the linux bridge nodes set the vxlan udp destination port to the IANA
  value (4789) so that it is the same port used by ovs:

  [[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
  [vxlan]
  udp_dstport=4789

  and set the vxlan multicast group to none to force unicast mode:

  [ml2_type_vxlan]
  vxlan_group=""
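
  for reference, one way to double check both settings before booting
  anything; device names and addresses below are illustrative, not taken
  from the test environment.

  # linux bridge node: once the agent has created the vxlan device it
  # should report dstport 4789 in its details
  sudo ip -d link show type vxlan | grep dstport

  # either node type: a kernel vxlan socket should be bound to udp 4789
  sudo ss -uln | grep 4789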

  boot a vm on the same neutron network on both a linux bridge node and
  an ovs node.

  
  * Expected output: 
  in this case we would expect the ovs l2 agent to create
  a unicast vxlan tunnel port on br-tun between the ovs node and the linux bridge node.

  similarly we expect the linux bridge agent to configure the reciprocal connection and update
  its forwarding table with the ovs endpoints.

  we would also expect the l2 agent on the ovs compute node to create a vxlan tunnel port
  to the networking node where the dhcp server is running.

  when the vms are booted we would expect both vms to receive ips and have security groups
  correctly configured, and we would expect both vms to be able to ping each other.
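
  if everything worked, the state on the two hosts would look roughly like
  the following; the ips, port names and vni here are illustrative, not
  captured output.

  # ovs node: br-tun should gain a vxlan port whose remote_ip is the
  # linux bridge node's tunnel endpoint
  sudo ovs-vsctl show
  #     Port "vxlan-0a000002"
  #         Interface "vxlan-0a000002"
  #             type: vxlan
  #             options: {in_key=flow, local_ip="10.0.0.1", out_key=flow, remote_ip="10.0.0.2"}

  # linux bridge node: the flood entry for the ovs endpoint should show up
  # in the vxlan device's forwarding table
  bridge fdb show dev vxlan-1042 | grep 00:00:00:00:00:00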


  * Actual output: 
  the ovs l2 agent only creates unicast tunnels to other ovs nodes.
  i did not check whether the linux bridge agent set up its side of the connection for
  ovs nodes, but it did configure connectivity to other linux bridge nodes.

  as a result network connectivity was partitioned, with no cross-backend
  connectivity possible.

  this is different from the vlan and flat behaviour where network
  connectivity works as expected.
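
  one way to see the partition from the hosts themselves; names are
  illustrative as before.

  # ovs node: only vxlan ports for other ovs nodes are present; no port
  # for the linux bridge node's endpoint ever appears
  sudo ovs-vsctl list-ports br-tun

  # linux bridge node: no flood entry pointing at the ovs node's tunnel
  # endpoint is installed
  bridge fdb show dev vxlan-1042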

  
  * Version:
    ** rocky RC1 nova sha: afe4512bf66c89a061b1a7ccd3e7ac8e3b1b284d neutron sha: 1dda2bca862b1268c0f5ae39b7508f1b1cab6f15

    ** Centos 7.5
    ** DevStack
  * Environment: libvirt/kvm with default devstack config/service

  * Perceived severity: low (this prevents using heterogeneous backends with tunneled networks;
                             as a result, you cannot optimise some nodes for specific workloads,
                             e.g. linux bridge has better multicast scaling but ovs has better performance)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1788023/+subscriptions


