nova team mailing list archive

Re: Network filtering for libvirt and for non-libvirt hypervisors

 

Hi Soren, responses inline.  Thanks,

Dan

On Mon, Sep 20, 2010 at 2:21 PM, Soren Hansen <soren@xxxxxxxxxx> wrote:

>
> >
> > 2) A common mechanism for primary-secondary failover in some clouds
> > is to run two VMs that both provide a particular service and have a
> > "fail-over" IP that can float between the two.
> > If the secondary fails to get heartbeat responses from the primary,
> > it will "steal" the fail-over IP address, for example, using a
> > gratuitous ARP with its MAC address for the fail-over IP. This
> > technique is definitely common in clouds with per-tenant private
> > network, but I believe it is also supported in some clouds with a
> > flatter networking model (in fact, I believe that Rackspace provides
> > such a capability:
> >
> http://cloudservers.rackspacecloud.com/index.php/IP_Failover_-_High_Availability_Explained
> ).
>
> As far as I understand these use cases, Rackspace's IP group
> functionality should accommodate both.  Both are essentially about
> allowing more than one VM to transmit packets with a given IP, right?
>
> I think what I'm getting at is that I'd much rather have the API exposed
> by Nova be rich and flexible enough that these sorts of things can be
> exposed there rather than require per-vm tweaking or, even worse,
> changing global configuration.
>
> > In this case, limiting a VM to using a single IP for ARP and IP would
> > be prohibitive, though MAC filters would be fine.
>
> Certainly. Implementing Rackspace's IP groups would certainly require
> changes to the model introduced here.
>


I think we're on the same page here.  All I am proposing is that we be
careful not to "bake" into the platform an assumption, enforced at the
hypervisor layer, that each host can send with only a single source IP
address, since different use cases may require more flexibility than
that.
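
To make the fail-over technique above concrete, here is a rough sketch
(using scapy, purely for illustration; the interface, MAC, and IP are
placeholders) of the gratuitous ARP a secondary might send when it takes
over the fail-over IP:

    from scapy.all import ARP, Ether, sendp

    def claim_failover_ip(iface, my_mac, failover_ip):
        # Gratuitous ARP: sender and target IP are both the fail-over
        # IP, so every host on the segment updates its ARP cache to map
        # that IP to this machine's MAC address.
        pkt = Ether(src=my_mac, dst="ff:ff:ff:ff:ff:ff") / ARP(
            op=2,                      # ARP reply ("is-at")
            hwsrc=my_mac, psrc=failover_ip,
            hwdst="ff:ff:ff:ff:ff:ff", pdst=failover_ip)
        sendp(pkt, iface=iface, verbose=False)

    # e.g. claim_failover_ip("eth0", "52:54:00:aa:bb:cc", "10.0.0.100")

This is exactly the kind of traffic a hypervisor-enforced
single-source-IP filter would drop.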



>
> > 3) The final scenario is a case where customers may want to control
> > their own IP addressing, and thus the cloud provider and hypervisor
> > layer does not know what IP addresses to enforce for IP / ARP
> > traffic.   This applies only in the context of private per-tenant
> > network, for example, if a customer wishes to connect a network in
> the cloud to a network on their own premises, or simply wants to use
> > the same RFC 1918 space they used in the data center before
> > transitioning an app to the cloud.
>
> I'm kind of on the fence on this one. For instance, on the Rackspace
> cloud, you get two interfaces: One with a public IP, one with a private
> (in the RFC 1918 sense) one. The private interface is not separated from
> other users[1], it's simply an extra interface through which you can
> reach (I suppose) some internal Rackspace services.
>
> I think it could make good sense to have an API call to create an extra
> network with a self-chosen IP-range and have another API call to add an
> interface connected to said network to VMs. This part of the API would
> only be exposed if the network model had a way to keep users' networks
> segregated.
>


Agreed, Rackspace's current "private" interface isn't really private in the
sense that I am using here.

I agree with the concept of letting a tenant create one or more isolated
networks.  Were you envisioning that IP + MAC filtering would be a strict
requirement for such networks as well?  I would advocate that it not be.

Here are some of the scenarios where I commonly see isolated tenant networks
in the cloud:

1) the cloud provider uses a NAT router-VM or Load-Balancer to connect
private tenant networks containing host VMs to the public network used for
external access (e.g., the CloudStack approach)

2) there is a multi-tier web application which uses a private backend
network to communicate between the www and DB tiers.

3) a tenant migrates apps from an internal datacenter to the cloud but wants
those VMs to be on a network whose only network access is via a VPN
connection back to the customer premises.

In these scenarios, it can be significantly easier if the customer can just
choose their own private IP addresses without having to check with or
inform the cloud provider.  It may be that the tenant is moving an existing
set of applications from their own datacenter to the cloud and does not
want to reconfigure all of the hosts to use new addresses.  Instead, they
can just use an LB/NAT to map new public IPs to the old private ones.  Or,
in the VPN case, the tenant may want to use the DHCP server running on the
customer premises to assign VM addresses, since those VMs should have IP
addresses that are "internal" to the customer premises network.  It's worth
noting as well that with such private networks, a customer could implement
the primary-secondary HA technique without requiring a special cloud API
for 'IP groups'.
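
To illustrate the LB/NAT mapping just mentioned, here is a minimal
sketch, assuming a Linux-based NAT VM using iptables (the addresses and
helper name are purely illustrative):

    import subprocess

    def map_public_to_private(public_ip, private_ip):
        # Inbound: rewrite the destination of traffic arriving at the
        # new public IP so it reaches the VM's existing private address.
        subprocess.check_call([
            "iptables", "-t", "nat", "-A", "PREROUTING",
            "-d", public_ip, "-j", "DNAT", "--to-destination", private_ip])
        # Outbound: rewrite the source so traffic from that VM leaves
        # with the public IP.
        subprocess.check_call([
            "iptables", "-t", "nat", "-A", "POSTROUTING",
            "-s", private_ip, "-j", "SNAT", "--to-source", public_ip])

    # e.g. map_public_to_private("203.0.113.10", "10.0.0.5")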

To me, the main point is that for a cloud networking model where VMs from
multiple tenants can exist in the same L2 segment (e.g., Amazon, Rackspace),
having a mechanism to limit a VM to using a set of one or more MACs and IPs
is very important.  But if a tenant gets one or more of their own isolated
L2 networks, there are cases where it seems unnecessary and potentially
cumbersome to require that the hypervisor be able to know all of the valid
IPs a host may use.
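
As a sketch of what such a mechanism might look like on the compute
host, assuming ebtables-based filtering in the bridge's FORWARD path
(the helper and rule layout are illustrative, not a proposal for Nova's
actual implementation):

    import subprocess

    def apply_source_filters(vif, allowed_pairs):
        # allowed_pairs is a set of (mac, ip) tuples the VM may send
        # with -- i.e. "a set of one or more MACs and IPs", not just a
        # single address.
        for mac, ip in allowed_pairs:
            # Permit IPv4 traffic only when both the source MAC and
            # source IP are in the allowed set for this interface.
            subprocess.check_call([
                "ebtables", "-A", "FORWARD", "-i", vif, "-s", mac,
                "-p", "IPv4", "--ip-src", ip, "-j", "ACCEPT"])
            # Permit ARP only for allowed MAC/IP pairs, which blocks
            # gratuitous ARPs for addresses the VM does not own.
            subprocess.check_call([
                "ebtables", "-A", "FORWARD", "-i", vif,
                "-p", "ARP", "--arp-mac-src", mac, "--arp-ip-src", ip,
                "-j", "ACCEPT"])
        # Everything else sent from this interface is dropped.
        subprocess.check_call([
            "ebtables", "-A", "FORWARD", "-i", vif, "-j", "DROP"])

Because allowed_pairs is a set rather than a single address, the
fail-over and multi-IP cases above remain possible.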

To make this discussion more concrete, it might be worth enumerating the
goals of preventing MAC + IP spoofing.  Preventing the obvious
man-in-the-middle attacks is a strict requirement where multiple tenants
share an L2 segment, but seems optional when each tenant gets their own
network.  The issue of a VM potentially blasting DoS traffic with a spoofed
IP address seems relevant too.  Scenarios #2 and #3 above do not involve
public network connectivity, and for #1 it seems that all that is necessary
is to enforce the public IP addresses used by the NAT or LB VM.




>
> [1]:
>
> http://www.rackspacecloud.com/blog/2010/08/31/private-network-interfaces-the-forgotten-security-hole/
>
> --
> Soren Hansen
> Ubuntu Developer    http://www.ubuntu.com/
> OpenStack Developer http://www.openstack.org/
>



-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira Networks, Inc.
Sr. Product Manager
cell: 650-906-2650
~~~~~~~~~~~~~~~~~~~~~~~~~~~
