
yahoo-eng-team team mailing list archive

[Bug 2106357] [NEW] Loadbalancer member list shows operating status as ONLINE when member is created for a SHUTOFF state VM

 

Public bug reported:

Steps to reproduce the issue (example CLI commands are sketched after this list):
1. Create a load balancer with:
   A listener on port 80 using the TCP protocol
   A pool with a health monitor enabled
2. Add a VM (which is ACTIVE) to the pool as a member:
   The load balancer works correctly
   The health monitor status is accurate (shows the VM as ONLINE)
3. Now add a second VM to the pool, but this VM is in SHUTOFF (stopped) state:
   The health monitor still shows this VM as ONLINE
   The load balancer continues to send traffic to the stopped VM
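
For reference, a rough sketch of the commands used for the reproduction above (assuming the OVN provider; the subnet ID, VM name and resource names are placeholders and exact flags may differ per deployment):

openstack loadbalancer create --name lb2 --provider ovn --vip-subnet-id <subnet-id>
openstack loadbalancer listener create --name listener1 --protocol TCP --protocol-port 80 lb2
openstack loadbalancer pool create --name lb2-pool --listener listener1 --protocol TCP --lb-algorithm SOURCE_IP_PORT
openstack loadbalancer healthmonitor create --name hm1 --type TCP --delay 10 --timeout 5 --max-retries 3 lb2-pool
# member backed by the ACTIVE VM
openstack loadbalancer member create --name mem2 --address 192.168.0.11 --protocol-port 80 --subnet-id <subnet-id> lb2-pool
# stop the second VM first, then add it as a member
openstack server stop <vm2>
openstack loadbalancer member create --name mem3 --address 192.168.0.218 --protocol-port 80 --subnet-id <subnet-id> lb2-pool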

The stopped VM should not receive any traffic, but it does.
This breaks the expected behavior of the load balancer because:

The health monitor is not correctly detecting that the VM is down
Traffic is being routed to a non-functional backend

In the OVN SB DB it shows as:
root@ovn-ovsdb-sb-0:/# ovn-sbctl list service_monitor
_uuid               : 3e1e2b40-1a5b-4ae4-b1a4-a170b76517ec
external_ids        : {}
ip                  : "192.168.0.11"
logical_port        : "635b68bc-5339-46fb-890d-d8a0ab341b50"
options             : {failure_count="3", interval="10", success_count="3", timeout="5"}
port                : 80
protocol            : tcp
src_ip              : "192.168.0.194"
src_mac             : "56:17:53:f0:6b:7f"
status              : online

_uuid               : dfcc8077-919e-41a3-8091-d9ea03759ac6
external_ids        : {}
ip                  : "192.168.0.218"
logical_port        : "38c022fb-bc84-49d6-9ad1-81a4d572f17e"
options             : {failure_count="3", interval="10", success_count="3", timeout="5"}
port                : 80
protocol            : tcp
src_ip              : "192.168.0.194"
src_mac             : "56:17:53:f0:6b:7f"
status              : []
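
Note the empty status ([]) on the second row: OVN has not reported any health state for the member backed by the stopped VM, yet the Octavia API below still shows it as ONLINE. For completeness, the northbound health-check configuration can be cross-checked with, for example:

ovn-nbctl list load_balancer_health_check
ovn-nbctl list load_balancer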

The OpenStack CLI shows:

openstack loadbalancer member list lb2-pool
+--------------------------------------+------+----------------------------------+---------------------+---------------+---------------+------------------+--------+
| id                                   | name | project_id                       | provisioning_status | address       | protocol_port | operating_status | weight |
+--------------------------------------+------+----------------------------------+---------------------+---------------+---------------+------------------+--------+
| 5eb42fb9-2456-4727-af9e-8e69294672d0 | mem2 | 8d9a93312d624938bff32dda4bd8ae97 | ACTIVE              | 192.168.0.11  |            80 | ONLINE           |      1 |
| 49be36a6-dbfa-43ea-9e5a-61d187ccc80d | mem3 | 8d9a93312d624938bff32dda4bd8ae97 | ACTIVE              | 192.168.0.218 |            80 | ONLINE           |      1 |
+--------------------------------------+------+----------------------------------+---------------------+---------------+---------------+------------------+--------+
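
The same ONLINE operating_status is reported in the full status tree as well (assuming the load balancer is named lb2; output omitted here):

openstack loadbalancer status show lb2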

** Affects: neutron
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2106357

Title:
  Loadbalancer member list shows operating status as ONLINE when member
  is created for a SHUTOFF state VM

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2106357/+subscriptions