yahoo-eng-team team mailing list archive

[Bug 1908957] [NEW] iptable rules collision deployed with k8s iptables kube-proxy enabled

Public bug reported:


This may be a k8s kube-proxy bug, but it might be easier to solve on neutron's side...

In k8s, either NodePort or ExternalIP services generate iptables rules that affect VM traffic when
the hybrid iptables firewall driver is enabled.

The problem is:

Chain PREROUTING (policy ACCEPT 650 packets, 65873 bytes)
 pkts bytes target     prot opt in     out     source               destination         
 560K   37M ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            PHYSDEV match --physdev-is-in
  56M 4944M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
  40M 3785M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
  40M 3785M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
  40M 3785M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
  40M 3785M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
  40M 3785M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
  40M 3785M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
  40M 3785M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
  40M 3785M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
  40M 3785M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
  40M 3785M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Packets are DNATed to destinations we do not want, and such
traffic ends up being dropped.

Adding the following rule seems to mitigate the problem:

iptables -t nat -I PREROUTING 2 -m physdev --physdev-is-in  -j ACCEPT
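The workaround can be checked in place; a minimal sketch, assuming root on the affected host (the verification command is standard iptables and not specific to this deployment):

```
# Insert the physdev ACCEPT ahead of kube-proxy's KUBE-SERVICES jumps,
# so bridge-port (VM) traffic bypasses the Kubernetes DNAT rules.
iptables -t nat -I PREROUTING 2 -m physdev --physdev-is-in -j ACCEPT

# Verify the ordering: the ACCEPT rule must appear before every
# KUBE-SERVICES jump in the nat PREROUTING chain.
iptables -t nat -L PREROUTING -n --line-numbers
```

Note that kube-proxy periodically re-syncs its chains, so a manually inserted rule may end up reordered over time; a durable fix would presumably need to live in neutron's own rule management.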

** Affects: neutron
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1908957

Title:
  iptable rules collision deployed with k8s iptables kube-proxy enabled

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1908957/+subscriptions
