openstack team mailing list archive
Message #14275
Re: inter-tenant and VM-to-bare-metal communication policies/restrictions.
I am also very interested in this, and I am also trying to find a way to forbid
communication between VMs on the same compute+network node. :-)
Romi
From: openstack-bounces+romizhang1968=163.com@xxxxxxxxxxxxxxxxxxx
[mailto:openstack-bounces+romizhang1968=163.com@xxxxxxxxxxxxxxxxxxx] On Behalf Of
Christian Parpart
Sent: Thursday, July 5, 2012 23:48
To: <openstack@xxxxxxxxxxxxxxxxxxx>
Subject: [Openstack] inter-tenant and VM-to-bare-metal communication
policies/restrictions.
Hi all,
I am running multiple compute nodes and a single nova-network node, which acts
as the central gateway for the tenants' VMs.
However, since this nova-network node (of course) knows all routes, every VM of
every tenant can talk to every other VM, and to the physical nodes as well,
which is something I really want to restrict. :-)
root@gw1:~# ip route show
default via $UPLINK_IP dev eth1 metric 100
10.10.0.0/19 dev eth0 proto kernel scope link src 10.10.30.5
10.10.40.0/21 dev br100 proto kernel scope link src 10.10.40.1
10.10.48.0/24 dev br101 proto kernel scope link src 10.10.48.1
10.10.49.0/24 dev br102 proto kernel scope link src 10.10.49.1
$PUBLIC_NET/28 dev eth1 proto kernel scope link src $PUBLIC_IP
192.168.0.0/16 dev eth0 proto kernel scope link src 192.168.2.1
- 10.10.0.0/19 is the network for bare-metal nodes, switches, PDUs, etc.
- 10.10.40.0/21 (br100) is the "production" tenant
- 10.10.48.0/24 (br101) is the "staging" tenant
- 10.10.49.0/24 (br102) is the "playground" tenant
- 192.168.0.0/16 is the legacy network (management and VM nodes)
No tenant's VM shall be able to talk to a VM of another tenant.
And ideally no tenant's VM should be able to talk to the management
network either.
Unfortunately, since we're migrating a live system, and we also have
production services on the bare-metal nodes, I had to add special routes
to allow the legacy installations to communicate with the new "production"
VMs for the transition phase. I hope I can remove that ASAP.
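As a stop-gap, I am considering adding something along these lines by hand on the
nova-network node (untested sketch; the "tenant-isolation" chain name is just made
up, the bridges and subnets are the ones listed above):
# untested sketch of hand-made isolation rules on the nova-network node
iptables -N tenant-isolation
iptables -I FORWARD 1 -j tenant-isolation   # evaluated before the nova-* chains
# keep already-established flows (and their return traffic) working
iptables -A tenant-isolation -m state --state ESTABLISHED,RELATED -j ACCEPT
# temporary exception for the migration: legacy hosts may reach "production" VMs
iptables -A tenant-isolation -s 192.168.0.0/16 -d 10.10.40.0/21 -j ACCEPT
for br in br100 br101 br102; do
  # tenant VMs must not reach the bare-metal or legacy/management networks
  iptables -A tenant-isolation -i $br -d 10.10.0.0/19   -j DROP
  iptables -A tenant-isolation -i $br -d 192.168.0.0/16 -j DROP
  # and must not be forwarded into another tenant's bridge
  for other in br100 br101 br102; do
    [ "$br" != "$other" ] && iptables -A tenant-isolation -i $br -o $other -j DROP
  done
done
Hooking the chain in at position 1 of FORWARD means it is evaluated before nova's
own ACCEPT rules in nova-network-FORWARD, but it feels like working around nova
rather than with it.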
Now, checking iptables on the nova-network node:
root@gw1:~# iptables -t filter -vn -L FORWARD
Chain FORWARD (policy ACCEPT 64715 packets, 13M bytes)
 pkts bytes target                prot opt in    out   source     destination
  36M   29G nova-filter-top       all  --  *     *     0.0.0.0/0  0.0.0.0/0
  36M   29G nova-network-FORWARD  all  --  *     *     0.0.0.0/0  0.0.0.0/0
root@gw1:~# iptables -t filter -vn -L nova-filter-top
Chain nova-filter-top (2 references)
 pkts bytes target                prot opt in    out   source     destination
  36M   29G nova-network-local    all  --  *     *     0.0.0.0/0  0.0.0.0/0
root@gw1:~# iptables -t filter -vn -L nova-network-local
Chain nova-network-local (1 references)
 pkts bytes target                prot opt in    out   source     destination
root@gw1:~# iptables -t filter -vn -L nova-network-FORWARD
Chain nova-network-FORWARD (1 references)
 pkts bytes target                prot opt in    out   source     destination
    0     0 ACCEPT                all  --  br102 *     0.0.0.0/0  0.0.0.0/0
    0     0 ACCEPT                all  --  *     br102 0.0.0.0/0  0.0.0.0/0
    0     0 ACCEPT                udp  --  *     *     0.0.0.0/0  10.10.49.2  udp dpt:1194
  18M   11G ACCEPT                all  --  br100 *     0.0.0.0/0  0.0.0.0/0
  18M   18G ACCEPT                all  --  *     br100 0.0.0.0/0  0.0.0.0/0
    0     0 ACCEPT                udp  --  *     *     0.0.0.0/0  10.10.40.2  udp dpt:1194
 106K   14M ACCEPT                all  --  br101 *     0.0.0.0/0  0.0.0.0/0
79895   23M ACCEPT                all  --  *     br101 0.0.0.0/0  0.0.0.0/0
    0     0 ACCEPT                udp  --  *     *     0.0.0.0/0  10.10.48.2  udp dpt:1194
Now I see that, for example, all traffic from/to the "staging" tenant (br101) is
accepted to/from any destination (-j ACCEPT).
I'd propose to restrict these rules to the public gateway interface (eth1 in my
case), and to make that interface configurable in nova.conf.
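To illustrate what I mean (hand-written sketch, not what nova generates today; the
interface could come from nova.conf, maybe via the existing public_interface flag
or a dedicated new one), the rules for e.g. "staging" would then look like:
# hypothetical replacement for the catch-all br101 rules above:
# only forward between the tenant bridge and the public interface (eth1)
iptables -A nova-network-FORWARD -i br101 -o eth1 -j ACCEPT
iptables -A nova-network-FORWARD -i eth1 -o br101 -j ACCEPT
# keep the existing per-tenant VPN rule
iptables -A nova-network-FORWARD -p udp -d 10.10.48.2 --dport 1194 -j ACCEPT
With only eth1 on the other side of those ACCEPT rules, forwarding between
br100/br101/br102 (and into eth0, i.e. the bare-metal/management networks) would
no longer be explicitly allowed and could then be blocked by a DROP policy on the
FORWARD chain.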
Is there anything else I might have overlooked for disallowing inter-tenant
communication and tenant-VM-to-bare-metal communication?
Many thanks in advance,
Christian Parpart.