yahoo-eng-team team mailing list archive

[Bug 1585165] Re: floating ip not reachable after vm migration


Reviewed:  https://review.openstack.org/327551
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=a1f06fd707ffe663e09f2675316257c8dc528d47
Submitter: Jenkins
Branch:    master

commit a1f06fd707ffe663e09f2675316257c8dc528d47
Author: rossella <rsblendido@xxxxxxxx>
Date:   Wed Jun 8 17:18:51 2016 +0200

    After a migration clean up the floating ip on the source host
    
    When a VM is migrated that has a floating IP associated, the L3
    agent on the source host should be notified when the migration
    is over. If the router on the source host is not going to be
    removed (there are other ports using it) then we should notify
    that the floating IP needs to be cleaned up.
    
    Change-Id: Iad6fbad06cdd33380ef536e6360fd90375ed380d
    Closes-bug: #1585165


** Changed in: neutron
       Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585165

Title:
  floating ip not reachable after vm migration

Status in neutron:
  Fix Released

Bug description:
  On a cloud running Liberty, a VM is assigned a floating IP. The VM is
  live migrated and the floating IP is no longer reachable from outside
  the cloud. Steps to reproduce:

  1) spawn a VM
  2) assign a floating IP
  3) live migrate the VM
  4) ping the floating IP from outside the cloud

  The problem seems to be that both the node that hosted the VM
  before the migration and the node that hosts it now answer the ARP
  request:

  admin:~ # arping -I eth0 10.127.128.12 
  ARPING 10.127.128.12 from 10.127.0.1 eth0
  Unicast reply from 10.127.128.12 [FA:16:3E:C8:E6:13]  305.145ms
  Unicast reply from 10.127.128.12 [FA:16:3E:45:BF:9E]  694.062ms
  Unicast reply from 10.127.128.12 [FA:16:3E:45:BF:9E]  0.964ms
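
  A quick way to confirm the duplicate-responder symptom is to count the
  distinct MAC addresses in the arping replies. A minimal sketch in Python
  (the sample output is copied from the capture above; the parsing helper is
  illustrative, not part of any tool mentioned in this report):

```python
import re

# arping output captured above: three replies for one floating IP.
arping_output = """\
Unicast reply from 10.127.128.12 [FA:16:3E:C8:E6:13]  305.145ms
Unicast reply from 10.127.128.12 [FA:16:3E:45:BF:9E]  694.062ms
Unicast reply from 10.127.128.12 [FA:16:3E:45:BF:9E]  0.964ms
"""

# Collect the unique MAC addresses that answered for the floating IP.
macs = set(re.findall(r"\[([0-9A-F:]+)\]", arping_output))
print(len(macs))  # 2 -> two different hosts answer ARP for the same IP
```

  More than one distinct MAC means two hosts are claiming the floating IP,
  which is exactly the stale-entry problem described below.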

  On the compute node that was hosting the VM:

  root:~ # sudo ip netns exec fip-c622fafe-c663-456a-8549-ebd3dbed4792 ip route
  default via 10.127.0.1 dev fg-c100b010-af 
  10.127.0.0/16 dev fg-c100b010-af  proto kernel  scope link  src 10.127.128.3 
  10.127.128.12 via 169.254.31.28 dev fpr-7d1a001a-9 

  On the node that is now hosting the VM:

  root:~ # sudo ip netns exec fip-c622fafe-c663-456a-8549-ebd3dbed4792 ip route
  default via 10.127.0.1 dev fg-e532a13f-35 
  10.127.0.0/16 dev fg-e532a13f-35  proto kernel  scope link  src 10.127.128.8
  10.127.128.12 via 169.254.31.28 dev fpr-7d1a001a-9 

  The entry for "10.127.128.12" is present on both nodes. That happens
  because no cleanup is triggered on the source host when the VM is
  migrated. Restarting the L3 agent fixes the problem because the stale
  entry is removed.
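
  The committed fix notifies the L3 agent on the source host once the
  migration is over. A hedged sketch of that decision logic (all names here
  are hypothetical illustrations, not neutron's actual API):

```python
from dataclasses import dataclass

@dataclass
class Router:
    """Toy stand-in for a router scheduled on one or more hosts."""
    id: str
    ports_by_host: dict  # host -> count of other ports still using the router

    def has_other_ports(self, host):
        return self.ports_by_host.get(host, 0) > 0

def source_host_cleanup(router, source_host, floating_ip):
    """Return the cleanup action the source L3 agent should perform."""
    if router.has_other_ports(source_host):
        # The router must stay on the source host, so remove only the
        # stale floating IP; this stops the old node answering ARP for it.
        return ("remove_floating_ip", floating_ip)
    # No other ports use the router: remove it from the source host entirely.
    return ("remove_router", router.id)

r = Router(id="r1", ports_by_host={"src-node": 1})
print(source_host_cleanup(r, "src-node", "10.127.128.12"))
# -> ('remove_floating_ip', '10.127.128.12')
```

  This mirrors the commit message above: when the router survives on the
  source host, only the floating IP cleanup is signalled.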

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585165/+subscriptions
