ip_forward is enabled when using vlan + multi_host on the compute node
Hi all,
I am testing nova-network with vlan + multi_host, and I found that ip_forward is enabled automatically on the compute node when new instances are launched. You can check the code here:
https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L770
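For context, the effect of that code is roughly the following. This is my own paraphrase, not the actual nova code (the function name here is a placeholder; see the linked line for the real implementation):

# Rough paraphrase of the behaviour referenced above, not the real nova code:
# while plumbing the network for an instance, nova-network switches on IPv4
# forwarding on the host if it is not already on.
import subprocess

def _enable_ipv4_forwarding():
    with open('/proc/sys/net/ipv4/ip_forward') as f:
        already_enabled = f.read().strip() == '1'
    if not already_enabled:
        # equivalent to: sysctl -w net.ipv4.ip_forward=1 (run as root)
        subprocess.check_call(['sysctl', '-w', 'net.ipv4.ip_forward=1'])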
This causes a serious problem once ip_forward=1 on a compute node. Here is my test environment:
Controller:
[root@openstack-controller conf.d]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: p3p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 90:b1:1c:0d:87:79 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.10/24 brd 192.168.3.255 scope global p3p1
inet6 fe80::92b1:1cff:fe0d:8779/64 scope link
valid_lft forever preferred_lft forever
3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 90:b1:1c:0d:87:7a brd ff:ff:ff:ff:ff:ff
inet 172.16.0.10/24 brd 172.16.0.255 scope global em1
inet6 fe80::92b1:1cff:fe0d:877a/64 scope link
valid_lft forever preferred_lft forever
Compute Node:
[root@openstack-node2 vlan]# ip a
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 90:b1:1c:0d:73:ea brd ff:ff:ff:ff:ff:ff
inet 172.16.0.12/24 brd 172.16.0.255 scope global em1
inet6 fe80::92b1:1cff:fe0d:73ea/64 scope link
valid_lft forever preferred_lft forever
4: p3p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:10:18:f7:4a:34 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.12/24 brd 192.168.3.255 scope global p3p1
inet 192.168.3.33/32 scope global p3p1
inet6 fe80::210:18ff:fef7:4a34/64 scope link
valid_lft forever preferred_lft forever
9: vlan102@em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether fa:16:3e:54:ea:11 brd ff:ff:ff:ff:ff:ff
inet6 fe80::f816:3eff:fe54:ea11/64 scope link
valid_lft forever preferred_lft forever
10: br102: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:54:ea:11 brd ff:ff:ff:ff:ff:ff
inet 10.0.102.4/24 brd 10.0.102.255 scope global br102
inet6 fe80::2816:24ff:feb5:5770/64 scope link
valid_lft forever preferred_lft forever
11: vlan103@em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether fa:16:3e:3a:a0:20 brd ff:ff:ff:ff:ff:ff
inet6 fe80::f816:3eff:fe3a:a020/64 scope link
valid_lft forever preferred_lft forever
12: br103: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:3a:a0:20 brd ff:ff:ff:ff:ff:ff
inet 10.0.103.4/24 brd 10.0.103.255 scope global br103
inet6 fe80::480c:f2ff:fe9b:a600/64 scope link
valid_lft forever preferred_lft forever
13: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
link/ether fe:16:3e:0c:65:73 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe0c:6573/64 scope link
valid_lft forever preferred_lft forever
15: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
link/ether fe:16:3e:7f:a2:d5 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe7f:a2d5/64 scope link
valid_lft forever preferred_lft forever
16: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
link/ether fe:16:3e:31:8f:7c brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe31:8f7c/64 scope link
valid_lft forever preferred_lft forever
17: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
link/ether fe:16:3e:63:8c:e2 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe63:8ce2/64 scope link
valid_lft forever preferred_lft forever
[root@openstack-node2 vlan]# brctl show
bridge name bridge id STP enabled interfaces
br102 8000.fa163e54ea11 no vlan102
vnet0
vnet1
vnet2
br103 8000.fa163e3aa020 no vlan103
vnet3
virbr0 8000.525400aaa1b5 yes virbr0-nic
When ip_forward=1, the compute node has an address on both br102 (10.0.102.4) and br103 (10.0.103.4) and simply routes between the two subnets. As a result, vm1 (vnet1, on br102) can ping vm2 (vnet3, on br103), and the controller can also reach both vm1 and vm2. That should not be possible; the VLANs are supposed to keep the projects isolated from each other.
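To see that this is just ordinary kernel routing and nothing VLAN-specific, here is a small sketch anyone can run on the compute node; it only reads /proc and the routing table, and the 10.0.x subnets are the ones from the output above:

# Sketch: report whether the compute node is in a state where it will route
# guest traffic between br102 and br103. Run on the compute node.
import subprocess

def ip_forward_enabled():
    with open('/proc/sys/net/ipv4/ip_forward') as f:
        return f.read().strip() == '1'

def connected_guest_subnets():
    # Both 10.0.102.0/24 and 10.0.103.0/24 show up as directly connected
    # routes once the bridges are up; combined with ip_forward=1, the kernel
    # will forward packets between them.
    out = subprocess.check_output(['ip', 'route', 'show']).decode()
    return [line.split()[0] for line in out.splitlines()
            if line.startswith('10.0.')]

print('ip_forward enabled: %s' % ip_forward_enabled())
print('directly connected guest subnets: %s' % connected_guest_subnets())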
Has anyone else run into this? And is there a way to fix it other than changing the code?
--
Lei Zhang
Blog: http://jeffrey4l.github.com
twitter/weibo: @jeffrey4l