yahoo-eng-team team mailing list archive
Message #53816
[Bug 1602614] [NEW] DVR + L3 HA loss during failover is higher than expected
Public bug reported:
Scale environment: 3 controllers, 45 compute nodes, Mitaka, DVR + L3 HA.
When the active l3 agent is stopped, it takes longer for connectivity to be
re-established than it does on the same environment with plain HA routers.
Steps to reproduce:
1. Create 2 routers:
neutron router-create router(1,2) --ha True --distributed True
2. Create 2 internal networks; connect one to router1 and the other to router2.
3. Boot an instance in each network:
nova boot --image <image_id> --flavor <flavor_id> --nic net_id=<private_net_id> vm(1,2)
4. Assign a floating IP to the VM in the second network.
5. Log in to VM1 using ssh or the VNC console.
6. Start pinging the floating IP of the second VM and check that packets are not lost.
7. Check which agent is active for router1 with:
neutron l3-agent-list-hosting-router <router_id>
8. Stop the active l3 agent.
9. Wait until another agent becomes active in neutron l3-agent-list-hosting-router <router_id>.
10. Start the stopped agent.
11. Stop the ping and check how many packets were lost.
12. Increase the number of routers and repeat steps 5-10.
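Steps 7-10 can be scripted as follows (a sketch only: <router_id> and <active_node> are placeholders, and the l3-agent service name is an assumption that varies by distribution):

```shell
# Step 7: see which l3 agent is currently active for the router.
neutron l3-agent-list-hosting-router <router_id>

# Step 8: stop the active l3 agent on the node it runs on
# (service name is distro-dependent; an assumption here).
ssh <active_node> sudo service neutron-l3-agent stop

# Step 9: poll until another agent reports itself as active.
watch -n 2 neutron l3-agent-list-hosting-router <router_id>

# Step 10: restart the stopped agent.
ssh <active_node> sudo service neutron-l3-agent start
```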
Results for ha+dvr routers: http://paste.openstack.org/show/531271/
Note: for HA routers the number of lost packets in the same scenario is ~3.
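The loss count in step 11 can be read from ping's summary line. A small sketch of extracting it (the sample summary line is illustrative, not output from this run):

```shell
# Compute lost packets from ping's summary line
# ("X packets transmitted, Y received, ...").
lost_packets() {
    awk -F', ' '/packets transmitted/ {
        split($1, tx, " "); split($2, rx, " ")
        print tx[1] - rx[1]
    }'
}

# Illustrative example:
echo "120 packets transmitted, 108 received, 10% packet loss, time 119373ms" \
    | lost_packets   # prints 12
```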
** Affects: neutron
Importance: Undecided
Status: New
** Tags: l3-dvr-backlog l3-ha
Results for ha+dvr routers (also at http://paste.openstack.org/show/531271/):
+-----------+-------------------+--------------------+------------------------+
| Iteration | Number of routers | Command            | Number of lost packets |
+===========+===================+====================+========================+
| 1         | 3                 | ping 172.16.45.139 | 12                     |
+-----------+-------------------+--------------------+------------------------+
| 2         | 50                |                    | 48                     |
+-----------+-------------------+--------------------+------------------------+
| 3         |                   |                    | 21                     |
+-----------+-------------------+--------------------+------------------------+
| 4         |                   |                    | 18                     |
+-----------+-------------------+--------------------+------------------------+
| 5         |                   |                    | 20                     |
+-----------+-------------------+--------------------+------------------------+
| 6         | 100               |                    | 21                     |
+-----------+-------------------+--------------------+------------------------+
| 7         |                   |                    | 47                     |
+-----------+-------------------+--------------------+------------------------+
| 8         |                   |                    | 21                     |
+-----------+-------------------+--------------------+------------------------+
| 9         |                   |                    | 21                     |
+-----------+-------------------+--------------------+------------------------+
| 10        |                   |                    | 42                     |
+-----------+-------------------+--------------------+------------------------+
| 11        |                   |                    | 19                     |
+-----------+-------------------+--------------------+------------------------+
| 12        |                   |                    | 70                     |
+-----------+-------------------+--------------------+------------------------+
Note: for HA routers the number of lost packets in the same scenario is ~3.
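For comparison with the ~3 lost packets reported for plain HA routers, the per-iteration numbers above can be averaged per router count (a quick sketch; the figures are copied from the table):

```shell
# Average the lost-packet counts from the table above.
avg() {
    awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.1f\n", s / NF }'
}

echo "12" | avg                     # 3 routers
echo "48 21 18 20" | avg            # 50 routers
echo "21 47 21 21 42 19 70" | avg   # 100 routers
```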
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602614
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1602614/+subscriptions