
openstack team mailing list archive

Re: Decoupling of Network and Compute services for the new Network Service design


I see, the latency of setting up bridges and vlans could be a problem.

How about the second problem, that of not having enough information to
assign the IP.  Is it really necessary to know what physical node the
VM will run on before assigning the IP?  Shouldn't that be decoupled?
For example, if this eventually supports VM migration, then wouldn't
the physical host be irrelevant in that case?  Probably I'm not
understanding the precise use case.
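The scheduler-as-supervisor idea (option 2 in the quoted thread below) could be sketched roughly as follows. This is a toy illustration only: `NetworkService`, `Scheduler`, and `allocate_ip` are hypothetical names, not actual nova APIs, and the placement and addressing logic is deliberately trivial. The point is just that the IP is allocated only after a host has been chosen.

```python
# Hypothetical sketch: the scheduler picks a host first, and only then
# asks the network service to allocate an IP for that host. All class
# and method names here are illustrative, not real nova interfaces.

class NetworkService:
    """Toy network service handing out IPs from a per-host subnet."""
    def __init__(self, hosts):
        # one toy subnet per host; a real service might also set up
        # bridges/VLANs here, which is the latency concern in the thread
        self._subnet = {h: i for i, h in enumerate(hosts, start=1)}
        self._next = {h: 1 for h in hosts}

    def allocate_ip(self, host):
        n = self._next[host]
        self._next[host] += 1
        return f"10.0.{self._subnet[host]}.{n}"


class Scheduler:
    """Supervisor-style scheduler: choose a host, then allocate network."""
    def __init__(self, network, hosts):
        self.network = network
        self.hosts = hosts

    def run_instance(self, vm_id):
        host = self.hosts[vm_id % len(self.hosts)]  # trivial round-robin placement
        ip = self.network.allocate_ip(host)         # host is known at this point
        # ...an async cast to the chosen compute node would go here...
        return {"vm": vm_id, "host": host, "ip": ip}


hosts = ["node-a", "node-b"]
sched = Scheduler(NetworkService(hosts), hosts)
print(sched.run_instance(0))  # placed on node-a, IP from node-a's subnet
print(sched.run_instance(1))  # placed on node-b, IP from node-b's subnet
```

Note that nothing forces the IP to stay tied to the host: if `allocate_ip` drew from a host-independent pool instead, the assignment would survive a migration, which is essentially the decoupling question raised above.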


On Thu, Feb 24, 2011 at 18:21, Vishvananda Ishaya <vishvananda@xxxxxxxxx> wrote:
> It could be relatively quick, but if the underlying architecture needs to set up bridges and vlans, I can see this taking a second or more.  I like an api that returns in the hundreds of ms.  A greater concern with #1 is that there isn't always enough information to assign the IP at the compute/api.py layer.  Often this decision only makes sense once we know which host the VM will run on.  It therefore really needs to happen in the scheduler or later to have the most flexibility.
> Vish
> On Feb 24, 2011, at 12:16 AM, Dan Mihai Dumitriu wrote:
>> Hi Vish,
>>> We need some sort of supervisor to tell the network to allocate the network before dispatching a message to compute.  I see three possibilities (from easiest to hardest):
>>> 1. Make the call in /nova/compute/api.py (this code runs on the api host)
>>> 2. Make the call in the scheduler (the scheduler then becomes sort of a supervisor to make sure all setup occurs for a vm to launch)
>>> 3. Create a separate compute supervisor that is responsible for managing the calls to different components
>>> The easiest seems to be 1, but unfortunately it forces us to wait for the network allocation to finish before returning to the user, which I dislike.
>> What is the problem with waiting for the network allocation?  I would
>> imagine that will be a quick operation.  (though it would depend on
>> the plugin implementation)
>> Cheers,
>> Dan