
[Bug 1685881] Re: l3-agent-router-add doesn't error/warn about router already existing on agent


@Drew,

Thanks. Since this is not causing ongoing operational issues, I am
going to mark it invalid. If you feel we should pursue it further,
please feel free to change it back.

** Changed in: neutron
       Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1685881

Title:
  l3-agent-router-add doesn't error/warn about router already existing
  on agent

Status in OpenStack neutron-api charm:
  New
Status in neutron:
  Invalid

Bug description:
  We had an incident on a network that ended up with random packet
  drops between nodes within the cloud, and outside of the cloud when
  traffic crossed L3 routers.

  Steps to reproduce:
  juju set neutron-api min-agents-per-router=2
  juju set neutron-api max-agents-per-router=2
  juju set neutron-api l2-population=false
  juju set neutron-api enable-l3ha=true
  for i in $(neutron router-list -f value -c id); do
    neutron router-update $i --admin-state-up=false
    neutron router-update $i --ha=true
    neutron router-update $i --admin-state-up=true
  done
  juju set neutron-api max-agents-per-router=3
  for i in $(neutron router-list -f value -c id); do
    neutron l3-agent-list-hosting-router $i
    for j in $(neutron agent-list -f value -c id); do
      neutron l3-agent-router-add $j $i
    done
  done
  sleep 120  # allow the l3-agents to settle
  for i in $(neutron router-list -f value -c id); do
    neutron l3-agent-list-hosting-router $i
  done

  You may see two active l3-agents for a given router.  (We observed
  that this corresponded to rabbitmq messaging failures occurring
  concurrently with this activity.)  Our environment had 9 active
  routers.
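
  As a rough check for this condition (a sketch, not from the original
  report; it assumes the ha_state column that neutron
  l3-agent-list-hosting-router prints for HA routers), count how many
  hosting agents report an active state per router:

  for i in $(neutron router-list -f value -c id); do
    # count agents reporting ha_state=active for this router
    n=$(neutron l3-agent-list-hosting-router $i -f value -c ha_state \
        | grep -c active)
    [ "$n" -gt 1 ] && echo "router $i has $n active l3-agents"
  done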

  You'll notice that no error or warning is emitted when you add a
  router to an agent that is already hosting it.
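
  Until the scheduler rejects duplicates, a client-side guard is one
  workaround (a sketch, not part of the original report; only the CLI
  commands already shown above are assumed):

  ROUTER=$1  # router id (hypothetical positional argument)
  AGENT=$2   # l3-agent id (hypothetical positional argument)
  # skip the add if the agent already hosts the router
  if neutron l3-agent-list-hosting-router $ROUTER -f value -c id \
      | grep -qx "$AGENT"; then
    echo "router $ROUTER is already hosted on agent $AGENT; skipping"
  else
    neutron l3-agent-router-add $AGENT $ROUTER
  fi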

  After making these updates, we found that SSH and RDP sessions to the
  floating IPs associated with VMs across several different
  networks/routers were exhibiting random session drops, as if the
  router were hosted in multiple locations and we were hitting an
  asymmetric routing issue.
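
  One way to confirm a dual-active router from the network nodes (a
  sketch based on the standard neutron L3 HA layout, not something the
  original report ran): with keepalived-based HA, only the active
  instance should hold the router's addresses, so list the addresses
  inside each node's qrouter namespace and see whether more than one
  node carries them.

  ROUTER=$1  # router UUID (hypothetical positional argument)
  # run on each network node hosting the router
  sudo ip netns exec qrouter-$ROUTER ip -4 addr show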

  We had to revert to --ha=false and enable-l3ha=false before we could
  gather deeper info/SOS reports.  We may be able to reproduce this in
  a lab at some point in the future.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-api/+bug/1685881/+subscriptions