yahoo-eng-team team mailing list archive
Message #59163
[Bug 1645359] Re: Vlan Aware VM - after setting trunk status to down, all traffic works
The documentation for VLAN trunking is still in flight [1], but from
there and the source documentation [2], you can see that the admin
status is meant to block management of the trunk (e.g.
adding/removing subports); it does not affect the data plane.
[1] https://review.openstack.org/#/c/361776/
[2] https://github.com/openstack/neutron/blob/master/neutron/services/trunk/constants.py#L16
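To make the distinction concrete, the intended behavior can be sketched with the openstack CLI (a hypothetical session against a cloud with a trunk named trunk1 and a spare port; the port name and error text are illustrative, not taken from this report):

```shell
# Disable the trunk's admin state.
openstack network trunk set trunk1 --disable

# Management operations on the trunk are expected to be rejected
# while admin_state_up is DOWN, e.g. adding a subport:
openstack network trunk set trunk1 \
    --subport port=subport-a,segmentation-type=vlan,segmentation-id=30
# (expected to fail with a conflict error)

# Data-plane traffic through existing subports is NOT affected:
# pings over eth0.10/eth0.20 inside the VM keep working, as the
# transcript below shows.

# Re-enable the trunk to allow management operations again.
openstack network trunk set trunk1 --enable
```

This matches the triage conclusion: the observed "all traffic works" behavior is by design, so the bug is Invalid rather than a data-plane enforcement gap.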
** Tags added: trunk
** Changed in: neutron
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645359
Title:
Vlan Aware VM - after setting trunk status to down, all traffic works
Status in neutron:
Invalid
Bug description:
Newton:
openstack-neutron-ml2-9.1.0-6.el7ost.noarch
openstack-neutron-bigswitch-agent-9.40.0-1.1.el7ost.noarch
openstack-neutron-openvswitch-9.1.0-6.el7ost.noarch
openstack-neutron-common-9.1.0-6.el7ost.noarch
openstack-neutron-9.1.0-6.el7ost.noarch
[stack@undercloud-0 ~]$ openstack network trunk set trunk1 --disable
[stack@undercloud-0 ~]$ openstack network trunk list
+--------------------------------------+--------+--------------------------------------+-------------+
| ID | Name | Parent Port | Description |
+--------------------------------------+--------+--------------------------------------+-------------+
| 3b5c8493-7832-4501-a93f-65c5131512bb | trunk1 | 7d749eb0-633d-4114-b322-414d19f86046 | |
+--------------------------------------+--------+--------------------------------------+-------------+
[stack@undercloud-0 ~]$ openstack network trunk show trunk1
+-----------------+------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+------------------------------------------------------------------------------------------------+
| admin_state_up | DOWN |
| created_at | 2016-11-28T11:45:54Z |
| description | |
| id | 3b5c8493-7832-4501-a93f-65c5131512bb |
| name | trunk1 |
| port_id | 7d749eb0-633d-4114-b322-414d19f86046 |
| project_id | af788c6da1fb4388b09040aa99c997bc |
| revision_number | 6 |
| status | ACTIVE |
| sub_ports | port_id='fdb9e48e-582f-43bd-b913-471711b3d2d2', segmentation_id='10', segmentation_type='vlan' |
| | port_id='488e4933-9809-4041-84ed-9375bf64333b', segmentation_id='20', segmentation_type='vlan' |
| tenant_id | af788c6da1fb4388b09040aa99c997bc |
| updated_at | 2016-11-28T15:07:25Z |
+-----------------+------------------------------------------------------------------------------------------------+
Pings from all subinterfaces to the qdhcp namespaces are successful:
[root@vm-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1446 qdisc pfifo_fast state UP qlen 1000
link/ether fa:16:3e:62:e9:43 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.55/24 brd 192.168.0.255 scope global dynamic eth0
valid_lft 86013sec preferred_lft 86013sec
inet6 fe80::f816:3eff:fe62:e943/64 scope link
valid_lft forever preferred_lft forever
3: eth0.10@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1446 qdisc noqueue state UP qlen 1000
link/ether fa:16:3e:ad:d5:39 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.55/24 brd 192.168.10.255 scope global dynamic eth0.10
valid_lft 86022sec preferred_lft 86022sec
inet6 fe80::f816:3eff:fead:d539/64 scope link
valid_lft forever preferred_lft forever
4: eth0.20@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1446 qdisc noqueue state UP qlen 1000
link/ether fa:16:3e:5d:1e:73 brd ff:ff:ff:ff:ff:ff
inet 192.168.20.3/24 brd 192.168.20.255 scope global dynamic eth0.20
valid_lft 86011sec preferred_lft 86011sec
inet6 fe80::f816:3eff:fe5d:1e73/64 scope link
valid_lft forever preferred_lft forever
[root@vm-1 ~]# ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.594 ms
64 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=1.02 ms
^C
--- 192.168.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.594/0.807/1.020/0.213 ms
[root@vm-1 ~]# ping 192.168.10.1
PING 192.168.10.1 (192.168.10.1) 56(84) bytes of data.
64 bytes from 192.168.10.1: icmp_seq=1 ttl=64 time=2.18 ms
^C
--- 192.168.10.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.186/2.186/2.186/0.000 ms
[root@vm-1 ~]# ping 192.168.20.1
PING 192.168.20.1 (192.168.20.1) 56(84) bytes of data.
64 bytes from 192.168.20.1: icmp_seq=1 ttl=64 time=1.09 ms
64 bytes from 192.168.20.1: icmp_seq=2 ttl=64 time=0.835 ms
^C
--- 192.168.20.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.835/0.964/1.093/0.129 ms
[root@vm-1 ~]#
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645359/+subscriptions