

Re: Decoupling of Network and Compute services for the new Network Service design


On 2/23/11 12:26 PM, Vishvananda Ishaya wrote:
Agreed that this is the right way to go.

We need some sort of supervisor to tell the network service to allocate the network before dispatching a message to compute.  I see three possibilities (from easiest to hardest):

1. Make the call in /nova/compute/api.py (this code runs on the api host)
2. Make the call in the scheduler (the scheduler then becomes sort of a supervisor to make sure all setup occurs for a vm to launch)
3. Create a separate compute supervisor that is responsible for managing the calls to different components

The easiest seems to be 1, but unfortunately it forces us to wait for the network allocation to finish before returning to the user, which I dislike.

I think ultimately 3 is probably the best solution, but for now I suggest 2 as a middle ground between easy and best.
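
For concreteness, a rough sketch of where the allocation call would sit under options 1 and 2 (all names here are made up; this is not actual Nova code):

# Toy sketch of options 1 and 2; every name is illustrative.

def allocate_network(request):
    # Stand-in for the call into the network service.
    return {'ip': '10.0.0.5', 'bridge': 'br100'}

def cast(target, method, *args):
    # Stand-in for an async rpc cast; a real cast would not block.
    print('cast %s.%s%r' % (target, method, args))

# Option 1: allocate in nova/compute/api.py (runs on the API host).
# The user waits for the allocation before the API call returns.
def api_create_instance(request):
    network_info = allocate_network(request)      # blocks the API request
    cast('scheduler', 'run_instance', request, network_info)

# Option 2: the scheduler supervises setup; the API has already
# returned, so the allocation happens asynchronously here.
def scheduler_run_instance(request):
    network_info = allocate_network(request)
    cast('compute.host1', 'run_instance', request, network_info)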
Actually, thinking about this...
What if we had some concept of a 'tasklist' of some sort?
The scheduler would handle this, looking at the first non-completed task on the list and dispatching it. Possibly, each worker could push tasks onto or pop them off the list too, and include result data for completed and/or failed tasks.

Possibly this could work like:

- API generates a one-task tasklist: 'gimme an instance w/ flavor x, requirement y ...'
- Scheduler dispatches this to a compute node.
- Compute node does some prep work (maybe allocating an instance_id, or some such),
  and returns the tasklist, now looking kind of like:
  """
  1. gimme an instance w/ flavor x, requirement y ...: [Done, instance_id = 1234]
  2. allocate network for instance1234
  3. build instance1234
  """
- Scheduler looks at next task on list, and dispatches to a network worker.
- Network worker does magic, and returns tasklist:
    """
1. gimme an instance w/ flavor x, requirement y ...: [Done, instance_id = 1234] 2. allocate network for instance1234: [Done network_info=<stuff here...>]
           3. build instance1234
     """
- Scheduler looks at next task, dispatches to compute worker.
- Compute worker actually builds instance, with network info as allocated.
** tasklist done.

(This could also allow for retries: a worker could just return the tasklist with a soft error on that task. The scheduler would see the same task still at the top of the list and would reschedule it, and it could use the fact that the task failed on host x to avoid sending it there again.)
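
In rough Python, the scheduler loop over such a tasklist, including the soft-error retry, might look like this (purely a sketch; all names are hypothetical):

# Toy sketch of the tasklist idea, with soft-error retries.

PENDING, DONE, SOFT_ERROR = 'pending', 'done', 'soft_error'

class Task(object):
    def __init__(self, action, **args):
        self.action = action        # e.g. 'allocate_network'
        self.args = args
        self.state = PENDING
        self.result = {}            # e.g. {'instance_id': 1234}
        self.failed_hosts = []      # don't retry on these hosts

def run_tasklist(tasklist, workers):
    # workers maps an action name to a list of (host, handler) pairs.
    while True:
        task = next((t for t in tasklist if t.state == PENDING), None)
        if task is None:
            return tasklist         # ** tasklist done.
        host, handler = next(
            (h, f) for h, f in workers[task.action]
            if h not in task.failed_hosts)
        # A handler marks its task DONE (filling in result data) and may
        # append follow-up tasks, or mark it SOFT_ERROR to be retried.
        tasklist = handler(tasklist, task)
        if task.state == SOFT_ERROR:
            task.failed_hosts.append(host)
            task.state = PENDING    # reschedule, avoiding the failed host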

Vish

On Feb 23, 2011, at 5:29 AM, Ishimoto, Ryu wrote:

Hi everyone,

I have been following the discussion regarding the new 'pluggable' network service design, and wanted to drop in my 2 cents ;-)

Looking at the current implementation of Nova, there seems to be a very strong coupling between the compute and network services.  That is, tasks that are done by the network service are executed at the time of VM instantiation, making the compute code dependent on the network service, and vice versa.  This dependency seems undesirable to me, as it restricts the implementation of 'pluggable' network services, which can vary widely.

Would anyone be opposed to completely separating out the network service logic from compute?  I don't think it's too difficult to accomplish this, but to do so, it will require that the network service tasks, such as IP allocation, be executed by the user prior to instantiating the VM.

In the new network design (from what I've read so far), there are concepts of vNICs and vPorts, where vNICs are network interfaces that are associated with the VMs, and vPorts are logical ports that vNICs are plugged into for network connectivity.  If we are to decouple the network and compute services, the steps required for the FlatManager networking service would look something like the following (a toy code sketch follows the list):

1. Create vPorts for a network.  Each port is associated with an IP address in this particular case, since it's an IP-based network.
2. Create a vNIC.
3. Plug the vNIC into an available vPort.  In this case this just means mapping the vNIC to an unused IP address.
4. Start the VM with this vNIC.  The vNIC is already mapped to an IP address, so compute does not have to ask the network service to do any IP allocation.
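
Here is that flow as a toy, self-contained sketch.  Everything in it is made up to illustrate the proposed workflow; none of these classes or methods are a real Nova or network-service API:

import uuid

class NetworkService(object):
    def __init__(self):
        self.ports = {}     # port_id -> {'ip': ..., 'vnic': None}
        self.vnics = {}     # vnic_id -> port_id once plugged

    def create_port(self, ip_address):
        # Step 1: each vPort carries one IP on this IP-based network.
        port_id = uuid.uuid4().hex
        self.ports[port_id] = {'ip': ip_address, 'vnic': None}
        return port_id

    def create_vnic(self):
        # Step 2: a vNIC that will later be attached to a VM.
        vnic_id = uuid.uuid4().hex
        self.vnics[vnic_id] = None
        return vnic_id

    def plug(self, vnic_id):
        # Step 3: plug the vNIC into an unused vPort, i.e. map it
        # to an unused IP address.
        port_id = next(p for p, info in self.ports.items()
                       if info['vnic'] is None)
        self.ports[port_id]['vnic'] = vnic_id
        self.vnics[vnic_id] = port_id
        return self.ports[port_id]['ip']

def start_vm(vnic_id, ip):
    # Step 4: compute receives a fully wired vNIC and never calls
    # the network service for IP allocation.
    print('booting VM with vNIC %s at %s' % (vnic_id, ip))

net = NetworkService()
net.create_port('10.0.0.5')
vnic = net.create_vnic()
start_vm(vnic, net.plug(vnic))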

In this simple example, by removing the request for IP allocation from compute, the network service is no longer needed during VM instantiation.  While more complex cases may require more network setup steps, it would still hold true that, once the vNIC and vPort are mapped, the compute service would not require the network service during VM instantiation.

If there is still a need for compute to access the network service, there is another way.  Currently, the setup of the network environment (bridge, VLAN, etc.) is all done by the compute service.  With the new network model, these tasks should either be separated out into a standalone service (a 'network agent') or at least be separated out into modules with generic APIs that the network plugin providers can implement.  By doing so, and if we can agree on a rule that the compute service must always go through the network agent to access the network service, we can still achieve the separation of compute from network services.  Network agents should have full access to the network service, as both are implemented by the same plugin provider.  Compute would not be aware of the network agent accessing the network service.  A rough sketch of what that agent interface could look like follows.
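
For instance (illustrative names only, not a proposed final API): a generic interface that each plugin provider implements, and the only surface compute is allowed to call:

import abc

class NetworkAgent(abc.ABC):
    # Runs alongside compute and owns all host-side network setup.
    # Compute only ever calls these methods; the agent is free to
    # talk to its own network service behind them.

    @abc.abstractmethod
    def setup_host_networking(self, vnic_id):
        # Create bridges, VLANs, etc. for the given vNIC.
        ...

    @abc.abstractmethod
    def teardown_host_networking(self, vnic_id):
        # Undo the setup when the instance is destroyed.
        ...

class FlatBridgeAgent(NetworkAgent):
    # One provider's implementation; compute stays unaware of its details.
    def setup_host_networking(self, vnic_id):
        print('ensuring flat bridge for vNIC %s' % vnic_id)

    def teardown_host_networking(self, vnic_id):
        print('tearing down bridge plumbing for vNIC %s' % vnic_id)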

With this design, the network service is only tied to the network REST API and the network agent, both of which are implemented by the plugin providers.  This would allow them to implement their network service without worrying about the details of the compute service.

Please let me know if all this made any sense. :-)  Would love to get some feedback.

Regards,
Ryu Ishimoto

_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



--
    -Monsyne Dragon
    work:         210-312-4190
    mobile:       210-441-0965
    google voice: 210-338-0336


