
Re: Decoupling of Network and Compute services for the new Network Service design

 

I agree, this is exactly where we want to take the network services for
OpenStack. The goal should be to decouple Compute from Network, with an eye
toward a project separation post-Cactus (this should have a lot of
discussion at the next design summit). For Cactus we have explicitly kept
the network manager (and the volume manager) inside of Nova in order to
minimize risk to stability for this release. For Diablo I think we need to
identify any of the dependencies and touchpoints that Compute has on Network
and make a clean separation. Ryu Ishimoto has made a good first step, we
need to identify any issues with all the possible network configurations.

 

Following up on the other big networking thread, I would like to see a
project schema that includes the core networking API, network
manager/controller, and plug-in interfaces. Additionally, we should identify
the "sub-projects" that can be optional networking components (such as VPN,
DHCP, etc.).

 

Separate from networking we need to do the same exercise for the volume
manager and block storage systems.

 

Thanks,

 

John

 

From: openstack-bounces+john=openstack.org@xxxxxxxxxxxxxxxxxxx
[mailto:openstack-bounces+john=openstack.org@xxxxxxxxxxxxxxxxxxx] On Behalf
Of Dan Wendlandt
Sent: Wednesday, February 23, 2011 7:49 AM
To: Ishimoto, Ryu
Cc: openstack@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Openstack] Decoupling of Network and Compute services for the
new Network Service design

 

I think this is very much in line with what we've been thinking.  To me,
providing a clean and generic programming interface that decouples the
network functionality from the existing nova stack is a first step in
creating a standalone network service.  

 

Also, I am not sure if this is implied by step #3 below, but it seems that
the compute and network services will need to share some identifier so that
the network entity running on the compute node can "recognize" a VM
interface and associate it with a vPort.  For example, each vNIC has an
identifier assigned by the compute service, a call to the network service
associates that vNIC id with a vPort, and when the compute node creates a
device (e.g., tap0), it tells the network plugin on the host the vNIC id for
that device (there are several other possible variations on this theme...).
In your example below, this may not be strictly required because all vNICs
get connected to the same network, but in a general model for a network
service this will be required.
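
To make that handshake concrete, here is a minimal Python sketch of one
such variation (NetworkService, HostPlugin, and every method name below are
invented purely for illustration; none of this is existing nova code):

    import uuid

    # Hypothetical sketch of the shared vNIC-id handshake: compute
    # mints the id, the network service maps it to a vPort, and the
    # host plugin later matches the local device to that same id.

    class NetworkService(object):
        """Maps compute-assigned vNIC ids to vPorts."""
        def __init__(self):
            self.vnic_to_vport = {}

        def associate(self, vnic_id, vport_id):
            self.vnic_to_vport[vnic_id] = vport_id

    class HostPlugin(object):
        """Network plugin running on the compute node."""
        def __init__(self, network_service):
            self.network_service = network_service

        def device_created(self, device, vnic_id):
            # Compute reports which local device (e.g. tap0) carries
            # which vNIC, so the plugin can wire it to the right vPort.
            vport = self.network_service.vnic_to_vport[vnic_id]
            print('plugging %s (vNIC %s) into vPort %s'
                  % (device, vnic_id, vport))

    net = NetworkService()
    plugin = HostPlugin(net)
    vnic_id = str(uuid.uuid4())        # assigned by the compute service
    net.associate(vnic_id, 'vport-1')  # call to the network service
    plugin.device_created('tap0', vnic_id)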

 

dan

On Wed, Feb 23, 2011 at 5:29 AM, Ishimoto, Ryu <ryu@xxxxxxxxxxx> wrote:

 

Hi everyone,

 

I have been following the discussion regarding the new 'pluggable' network
service design, and wanted to drop in my 2 cents ;-)

 

Looking at the current implementation of Nova, there seems to be a very
strong coupling between compute and network services.  That is, tasks that
are done by the network service are executed at the time of VM
instantiation, making the compute code dependent on the network service, and
vice versa.  This dependency seems undesirable to me, as it restricts the
implementation of 'pluggable' network services, which can vary widely in how
they are built.

 

Would anyone be opposed to completely separating out the network service
logic from compute?  I don't think it's too difficult to accomplish this,
but to do so, it will require that the network service tasks, such as IP
allocation, be executed by the user prior to instantiating the VM.  

 

In the new network design (from what I've read so far), there are the
concepts of vNICs and vPorts, where vNICs are network interfaces that are
associated with VMs, and vPorts are logical ports that vNICs are plugged
into for network connectivity.  If we are to decouple the network and
compute services, the steps required for the FlatManager networking service
would look something like this:

 

1. Create vPorts for a network.  Each vPort is associated with an IP address
in this particular case, since it's an IP-based network.

2. Create a vNIC

3. Plug a vNIC into an available vPort.  In this case it just means mapping
this vNIC to an unused IP address.

4. Start a VM with this vNIC.  The vNIC is already mapped to an IP address,
so compute does not have to ask the network service to do any IP allocation.
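
A minimal sketch of how those four steps could look as code (all class and
method names here are hypothetical, not a proposed nova API; the point is
only that every network task completes before the VM boots):

    import itertools

    class NetworkService(object):
        def __init__(self):
            self.free_vports = []   # (vport_id, ip) pairs not yet used
            self.plugged = {}       # vnic_id -> (vport_id, ip)

        def create_vport(self, vport_id, ip):
            # Step 1: each vPort carries one IP on this flat network.
            self.free_vports.append((vport_id, ip))

        def plug(self, vnic_id):
            # Step 3: bind the vNIC to an unused vPort/IP.
            self.plugged[vnic_id] = self.free_vports.pop(0)

    class ComputeService(object):
        def __init__(self):
            self._counter = itertools.count()

        def create_vnic(self):
            # Step 2: compute mints the vNIC identifier on its own.
            return 'vnic-%d' % next(self._counter)

        def run_instance(self, vnic_id):
            # Step 4: the vNIC already has its IP, so no call back
            # into the network service is needed at boot time.
            print('booting VM with %s' % vnic_id)

    net, compute = NetworkService(), ComputeService()
    net.create_vport('vport-0', '10.0.0.2')
    net.create_vport('vport-1', '10.0.0.3')
    vnic = compute.create_vnic()
    net.plug(vnic)
    compute.run_instance(vnic)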

 

In this simple example, by removing the request for IP allocation from
compute, the network service is no longer needed during VM instantiation.
While more complex cases may require more network setup steps, it would
still hold true that, once the vNIC and vPort are mapped, the compute
service would not need the network service during VM instantiation.

 

If there is still a need for compute to access the network service, there
is another way.  Currently, the setup of the network environment (bridge,
VLAN, etc.) is all done by the compute service.  With the new network model,
these tasks should either be separated out into a standalone service (a
'network agent') or at least be separated out into modules with generic APIs
that the network plugin providers can implement.  By doing so, and if we can
agree on a rule that the compute service must always go through the network
agent to access the network service, we can still achieve the separation of
compute from network services.  Network agents should have full access to
the network service, as both are implemented by the same plugin provider.
Compute would not be aware of the network agent accessing the network
service.
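
As an illustration of that rule, compute could be written against a generic
agent interface like the one below (the NetworkAgent class and its methods
are made up here for the sketch; the real contract would be whatever the
plugin API ends up defining):

    class NetworkAgent(object):
        """Generic host-side interface; compute only ever calls this.

        How an implementation talks to its network service is the
        plugin provider's concern and is invisible to compute.
        """
        def setup(self, vnic_id):
            """Prepare the host network (bridge, VLAN, ...) for a vNIC."""
            raise NotImplementedError

        def teardown(self, vnic_id):
            """Undo whatever setup() created for the vNIC."""
            raise NotImplementedError

    class LinuxBridgeAgent(NetworkAgent):
        """Example provider implementation using plain Linux bridging."""
        def setup(self, vnic_id):
            # A real agent might call its network service here; compute
            # neither knows nor cares that this happens.
            print('creating bridge for %s' % vnic_id)

        def teardown(self, vnic_id):
            print('tearing down bridge for %s' % vnic_id)

    # Compute holds an agent reference and never touches the plugin's
    # network service directly.
    agent = LinuxBridgeAgent()
    agent.setup('vnic-0')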

 

With this design, the network service is only tied to the network REST API
and the network agent, both of which are implemented by the plugin
providers.  This would allow them to implement their network service without
worrying about the details of the compute service.

 

Please let me know if all this made sense. :-)  I would love to get some
feedback.

 

Regards,

Ryu Ishimoto

 


_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp




-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt 
Nicira Networks, Inc. 
www.nicira.com | www.openvswitch.org
Sr. Product Manager 
cell: 650-906-2650
~~~~~~~~~~~~~~~~~~~~~~~~~~~

