Re: Network Service for L2/L3 Network Infrastructure blueprint
On Mon, Feb 21, 2011 at 11:06 AM, Salvatore Orlando <Salvatore.Orlando@xxxxxxxxxxxxx> wrote:
> Dan,
>
>
>
> I agree with you on the notion of a ‘network’. That is exactly the same
> concept I was trying to express in my previous message.
>
>
>
> I also agree that each plugin might respond in a different way to the
> ‘create_network’ call.
>
> However, I believe there is a rather remarkable difference between creating
> a collection of ports with shared connectivity, and creating a network with
> a NAT gateway, a VPN access gateway, a DHCP server, and possibly a DNS
> server, as is the case in the current VLAN manager :)! That is to say, the
> result of an API call, create_network in this case, should always be the
> same regardless of the plugin used. Distinct plugins might use different
> technologies, but the final result for the API user should always be the
> same.
>
>
>
> For instance, in the nova VLAN manager scenario, the create_network
> operation, part of the core API, would set up basic connectivity (i.e. VLAN
> interfaces and bridges on hypervisors), whereas another set of operations,
> part of the extension API, would deal with setting up the IP range, the DHCP
> server, NAT configuration, VPN access, and so on.
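>
> To make the split concrete, here is a rough, purely illustrative Python
> sketch of how a vlan-manager-style plugin might divide its work; none of
> these names exist in nova today:
>
>     # Illustrative sketch only: the core API sets up basic L2
>     # connectivity; extension operations layer IP/DHCP/NAT on top.
>     import itertools
>
>     class VlanManagerStylePlugin(object):
>         _vlan_ids = itertools.count(100)  # stand-in for a real VLAN allocator
>
>         # --- core API: basic connectivity only ---
>         def create_network(self, tenant_id):
>             vlan_id = next(self._vlan_ids)
>             print("set up vlan%d interfaces/bridges for %s" % (vlan_id, tenant_id))
>             return vlan_id  # doubles as the network_id in this sketch
>
>         # --- extension API: higher-level services layered on top ---
>         def create_subnet(self, network_id, cidr):
>             print("configure IP range/DHCP %s on network %s" % (cidr, network_id))
>
>         def enable_nat(self, network_id, floating_range):
>             print("configure NAT gateway %s on network %s" % (floating_range, network_id))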
>
Hi Salvatore,
I think this is exactly the discussion we should be having :) I should have
been clearer in my last email about the distinction between what should
be "possible" and what I think is the "right" way to expose the existing
nova functionality. I actually think we're basically on the same page.
In my view, VPN, DHCP, DNS, etc. are "network-related" functionality, but do
not belong in the core network API. These network-related services
might be configured in one of several ways:
1) a network-related service can be managed by the cloud tenant or provider
via an OpenStack API or API extension. In this case, I feel that the
cleanest way to model this would be to have a separate service (e.g., a DHCP
service) that itself can attach to a network, just like a vNIC from the
compute service can attach itself to a particular network. (Note: when I
say "separate service", I mean that this service is cleanly decoupled from
the core network service with respect to enabling/disabling the
functionality. I am not making a statement about what repository the code
should live in.)
2) a network-related service like DHCP may not be closely integrated with
OpenStack, for example, being completely managed by the cloud provider in a
traditional manner. In this case, the cloud provider is responsible for
making sure the VM connectivity from the "create_network" call provides
access to a physical data center network that hosts this service. It's
pretty common for VMs to have an interface that maps to a physical
"services" network within the data center that hosts things like DNS,
storage, etc.
3) a network service plugin could implement this functionality as part of
its definition of a "create_network".
With respect to "porting" the existing nova functionality to the network
service for common consumption, my feeling is that #1 would be the best
model, which I believe is what you are proposing as well (a rough sketch of
that model follows below). Still, I think something like #3 should be
"possible" within the network service mechanism that we agree on. In fact,
#3 and #2 are really very similar, as they are both cases of the cloud
provider deploying systems to provide network-related functionality without
exposing it via the OpenStack API.
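To make the "attach" model in #1 concrete, here is a rough Python sketch;
all names here are illustrative, not existing code:

    # Illustrative sketch only: a decoupled DHCP service attaches to a
    # network through the same port/plug mechanism a compute vNIC uses.
    class NetworkService(object):
        def __init__(self):
            self.ports = {}       # port_id -> attached endpoint (or None)
            self._next_port = 0

        def create_port(self, network_id):
            self._next_port += 1
            port_id = "%s-port-%d" % (network_id, self._next_port)
            self.ports[port_id] = None
            return port_id

        def plug(self, port_id, endpoint):
            self.ports[port_id] = endpoint

    class DhcpService(object):
        """Separate service: its only coupling to the network service
        is the generic create_port/plug interface."""
        def attach(self, net_svc, network_id):
            port_id = net_svc.create_port(network_id)
            net_svc.plug(port_id, "dhcp-server")
            return port_id

    net = NetworkService()
    DhcpService().attach(net, "tenant-a-net")  # attaches just like a vNIC would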
Dan
>
>
> Salvatore
>
>
>
>
>
> From: Dan Wendlandt [mailto:dan@xxxxxxxxxx]
> Sent: 21 February 2011 18:11
> To: Salvatore Orlando
> Cc: Erik Carlin; Ishimoto, Ryu; openstack@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Openstack] Network Service for L2/L3 Network
> Infrastructure blueprint
>
>
>
>
>
> 2011/2/21 Salvatore Orlando <Salvatore.Orlando@xxxxxxxxxxxxx>
>
>
>
> Note that in create_network there is no assumption on the range of floating
> IPs. IMHO, the above-listed APIs are good enough to provide a generic
> virtual L2 network which can then be assigned exclusively to a project, thus
> making it an isolated L2 network.
>
> I also believe these operations fulfil all the criteria listed by Romain
> for having them in the generic API, whereas create_network(address_prefix)
> clearly violates criterion #4, as we cannot assume every network plugin will
> provide IPv4 NAT.
>
>
>
> Hi folks. While I agree that the current Nova model of a "network"
> requires IPv4 NAT, I feel that the concept is actually quite a bit more
> general and I think the API should reflect this :)
>
>
>
> At the Bexar summit we advocated for an abstract notion of a network that
> was a "collection of ports with shared connectivity". The key part of this
> definition is that it does NOT mandate a particular service model (L2/L3,
> L3-only) or set of available services (e.g., NAT, FW, VPN). Any plugin
> would be free to provide a particular service model, or create networks that
> offer built-in services. For example, a plugin that maps to the existing
> Nova "private" functionality might respond to a "create_network" call by
> assigning an L2 VLAN to a customer and also provisioning a NAT gateway on
> that VLAN. Another plugin might do something entirely different, not using
> NAT at all.
>
>
>
> Romain raised a good question of whether "create_network" makes sense in
> the "core" API. I would argue that having a notion of a "network" that
> joins "ports" will be most intuitive to how people think about networking,
> and that having an intuitive API is a plus. Every virtualization platform
> that I can think of has some notion of a "network" that joins "ports"
> (XenServer networks, VMware port groups, Cisco 1000V port profiles, Hyper-V
> virtual networks), not to mention the clear analogy to the physical world of
> plugging physical wires into ports on the same switch or set of
> interconnected switches.
>
>
>
> Regarding the example where having a notion of a "network" in the API might
> be clunky because it requires one "network" per "port": first, requiring
> two API calls instead of one for a particular scenario of using an API
> doesn't strike me as terrible, since APIs are meant to be automated. More
> importantly though, I don't think one network per port would actually be
> required for this scenario. I would have modeled this as a single "network"
> per tenant that provides the "service" of L2 VPN connectivity back to
> that tenant's environment. I think the reason for our disconnect may be
> that I see the API only as a way of describing the connectivity that the VMs
> see (all VMs get L2 VPN access to this tenant's remote network), not
> necessarily describing the underlying implementation of that connectivity
> (each VM has its own bridge with the VM interface and the openvpn tunnel
> interface). This is why in the network service write-up we presented at
> Bexar we referred to the API networks as "logical networks": they only
> describe what connectivity a VM should get, not necessarily the
> implementation of that connectivity (which will likely vary per plugin).
>
>
>
> Dan
>
>
>
>
>
>
>
> Then, assuming that the above-listed APIs are sufficient to set up a Layer-2
> virtual network, what should we do about IP addressing? (By IP addressing I
> just mean a strategy for associating an IP subnet of private addresses with
> the network.) Do we want to use the create_network API call, adding a
> parameter for the CIDR, or do we want to define a different API, for
> instance create_subnet(network_id, cidr)?
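>
> To make the two alternatives explicit, sketched as hypothetical signatures
> (neither exists today):
>
>     # (a) IP addressing folded into network creation:
>     def create_network(tenant_id, cidr=None):
>         """Create an L2 network; optionally associate an IP subnet."""
>
>     # (b) IP addressing as a separate (possibly extension) call:
>     def create_subnet(network_id, cidr):
>         """Associate an IP subnet with an existing L2 network."""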
>
>
>
> Regards,
>
> Salvatore
>
>
>
> From: openstack-bounces+salvatore.orlando=eu.citrix.com@lists.launchpad.net
> On Behalf Of Erik Carlin
> Sent: 21 February 2011 14:57
> To: Ishimoto, Ryu
> Cc: openstack@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Openstack] Network Service for L2/L3 Network
> Infrastructure blueprint
>
>
>
> From a Rackspace perspective, we definitely want the network service to
> support an L2 service model. I think VLANs should be the default
> out-of-the-box option, but other L2 alternatives that scale better for
> service providers should also be supported. Let's just decide that L2 is in.
>
>
>
> Other thoughts?
>
>
>
> Erik
>
> Sent from my iPhone
>
>
> On Feb 21, 2011, at 6:06 AM, "Ishimoto, Ryu" <ryu@xxxxxxxxxxx> wrote:
>
>
>
> It seems that a decision must be made about what an OpenStack 'network
> service' is expected to support. Should it support L2-only networks? Or
> should it assume ONLY IP-based network services are available? Looking at
> http://wiki.openstack.org/NetworkService, there doesn't seem to be any
> demand for L2 networks, but should we assume there won't be any in the
> future?
>
>
>
> As Romain and Hisaharu-san suggested, if OpenStack is to support L2
> networks, then it would make sense to simply have the Virtual NICs (network
> interfaces associated with VMs) and their mapping to Virtual Ports (logical
> ports to plug VNICs into for network connectivity) be generic. If OpenStack
> is going to support ONLY IP-based networks, and assumes that every VNIC is
> assigned an IP, then the generic API should be defined accordingly. Either
> way, it would be very helpful to decide on this soon.
>
>
>
> What do you think?
>
>
>
> Thanks,
>
> Ryu
>
>
>
>
>
> 2011/2/17 Hisaharu Ishii <ishii.hisaharu@xxxxxxxxxxxxx>
>
> Romain,
>
> Please call me Hisaharu :)
>
> I have read your proposal and felt the point below is important.
>
> > - create_network(address_prefix): network_id
>
> > This cannot be part of the API because it violates criterion 4): a
> > network service plugin may not support NATted IPv4 networks, for instance a
> > plugin that supports only pure L2 private Ethernet networks to interconnect
> > VMs.
>
> On the other hand, this was said in the document Erik referred to:
> > Implementors are only required to implement the core API.
>
> Should we allow a network plugin which has no functions to allocate IP
> addresses to VMs but provides L2 reachability to users?
> If so, the core API should have no operations related to IP addresses.
>
> Thanks,
> Hisaharu Ishii
>
>
>
> (2011/02/16 10:04), Romain Lenglet wrote:
> > Ishii-san,
> >
> > Re-reading the proposal you sent on Feb. 2nd, I realized that you didn't
> > separate generic operations from plugin-specific operations. I thought you
> > did, sorry. I propose to rewrite your spec into one API and two (optional)
> > extensions:
> >
> > - network service API (plugin-agnostic):
> >   - list_vnics(): [vnic_id]
> >     Return the list of vnic_id created by the tenant (project), where
> >     vnic_id is the ID of a VNIC.
> >   - destroy_vnic(vnic_id)
> >     Remove a VNIC from its VM, given its ID, and destroy it.
> >   - plug(vnic_id, port_id)
> >     Plug the VNIC with ID vnic_id into the port with ID port_id, both
> >     managed by this network service.
> >   - unplug(vnic_id)
> >     Unplug the VNIC from its port, previously plugged by calling plug().
> >   - list_ports(): [port_id]
> >     Return the list of IDs of ports created by the tenant (project).
> >   - destroy_port(port_id)
> >     Destroy port with ID port_id.
> >
> > - Ethernet VNIC API extension:
> >   - create_vnic([mac_address]): vnic_id
> >     Create a VNIC and return the ID of the created VNIC. Associate the
> >     given MAC address with the VNIC, or associate a random unique MAC
> >     with it if not given.
> >     This cannot be part of the API because it violates criterion 4): we
> >     plan to implement non-Ethernet virtual devices to connect VMs, so this
> >     operation cannot be implemented in that specific plugin.
> >
> > - NATed IPv4 network API extension:
> >   - create_network(address_prefix): network_id
> >     Create a new logical network with floating addresses in the given
> >     address range, and return the network ID.
> >     This cannot be part of the API because it violates criterion 4): a
> >     network service plugin may not support NATted IPv4 networks, for
> >     instance a plugin that supports only pure L2 private Ethernet networks
> >     to interconnect VMs. Moreover, the notion of "logical network" doesn't
> >     make sense in all cases: one can imagine a network plugin where every
> >     port is implemented by a separate Layer-2 OpenVPN connection to a
> >     tenant's private physical Ethernet network, in which case there is no
> >     notion of "logical network" (or it would require users to create a
> >     separate logical network for every port / VNIC, which would be very
> >     inconvenient).
> >   - list_networks(): [network_id]
> >     Return the list of IDs of logical networks created by the tenant
> >     (project).
> >     This cannot be part of the API because it violates criterion 4):
> >     idem, the notion of "logical network" is not plugin-agnostic.
> >   - destroy_network(network_id)
> >     Destroy the logical network with ID network_id.
> >     This cannot be part of the API because it violates criterion 4):
> >     idem, the notion of "logical network" is not plugin-agnostic.
> >   - create_port(network_id): port_id
> >     Create a port in the logical network with ID network_id, associate a
> >     floating address with it, and return the port's ID.
> >     This cannot be part of the API because it violates criterion 4):
> >     idem, the notion of "logical network" is not plugin-agnostic.
> >
> > What do you think of that new version of the API spec?
> > Do you agree with the split into API+extensions?
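> >
> > A minimal Python sketch of that split, with illustrative names only: the
> > plugin-agnostic core is an interface every plugin implements, while
> > extensions are optional interfaces that callers probe for:
> >
> >     class CoreNetworkApi(object):
> >         # core: implementable by *every* plugin (criterion 3)
> >         def list_vnics(self): raise NotImplementedError
> >         def destroy_vnic(self, vnic_id): raise NotImplementedError
> >         def plug(self, vnic_id, port_id): raise NotImplementedError
> >         def unplug(self, vnic_id): raise NotImplementedError
> >         def list_ports(self): raise NotImplementedError
> >         def destroy_port(self, port_id): raise NotImplementedError
> >
> >     class NatIpv4Extension(object):
> >         # optional: a pure-L2 plugin simply never implements this
> >         def create_network(self, address_prefix): raise NotImplementedError
> >         def list_networks(self): raise NotImplementedError
> >
> >     class PureL2Plugin(CoreNetworkApi):
> >         """Counter-example plugin: core only, no NAT extension."""
> >
> >     class VlanNatPlugin(CoreNetworkApi, NatIpv4Extension):
> >         """Implements the core plus the NATed IPv4 extension."""
> >
> >     def supports_nat(plugin):
> >         # callers probe for the extension instead of assuming it
> >         return isinstance(plugin, NatIpv4Extension)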
> >
> > Regards,
> > --
> > Romain Lenglet
> >
> > 2011/2/16 Romain Lenglet <romain@xxxxxxxxxxx>
> >
> >> Hi Erik,
> >>
> >> Thanks for your comments.
> >>
> >> There doesn't seem to be a consensus yet on "core API + extensions" vs.
> >> multiple APIs.
> >> Anyway, I don't see any issues with specifying a "core API" for network
> >> services, and a "core API" for network agents, corresponding exactly to
> >> NTT's Ishii-san's "generic APIs", and specifying all the non-generic,
> >> plugin-specific operations in extensions.
> >> If the norm becomes to have a core API + extensions, then the network
> >> service spec will be modified to follow that norm. No problem.
> >>
> >> The important point we need to agree on is what goes into the API, and
> >> what goes into extensions.
> >>
> >> Let me rephrase the criteria that I proposed, using the "API" and
> >> "extensions" terms:
> >> 1) any operation called by the compute service (Nova) directly MUST be
> >> specified in the API;
> >> 2) any operation called by users / admin tools MAY be specified in the
> >> API, but not necessarily;
> >> 3) any operation specified in the API MUST be independent from details of
> >> specific network service plugins (e.g. specific network models, specific
> >> supported protocols, etc.), i.e. that operation can be supported by every
> >> network service plugin imaginable, which means that:
> >> 4) any operation that cannot be implemented by all plugins MUST be
> >> specified in an extension, i.e. if one comes up with a counter-example
> >> plugin that cannot implement that operation, then the operation cannot be
> >> specified in the API and MUST be specified in an extension.
> >>
> >> Do we agree on those criteria?
> >>
> >> I think Ishii-san's proposal meets those criteria.
> >> Do you see any issues with Ishii-san's proposal regarding the split
> >> between core operations and extension operations?
> >> If you think that some operations that are currently defined as
> >> extensions in Ishii-san's proposal should be in the API, I'll be happy to
> >> try to give counter-examples of network service plugins that can't
> >> implement them. :)
> >>
> >> Regards,
> >> --
> >> Romain Lenglet
> >>
> >>
> >> 2011/2/16 Erik Carlin <erik.carlin@xxxxxxxxxxxxx>
> >>
> >>> My understanding is that we want a single, canonical OS network service
> >>> API. That API can then be implemented by different "service engines" on
> >>> the back end via a plug-in/driver model. The way additional features are
> >>> added to the canonical API that may not be core or for widespread
> >>> adoption (e.g. something vendor specific) is via extensions. You can
> >>> take a look at the proposed OS compute API spec
> >>> <http://wiki.openstack.org/OpenStackAPI_1-1> to see how extensions are
> >>> implemented there. Also, Jorge Williams has done a good write-up of the
> >>> concept here:
> >>> <http://wiki.openstack.org/JorgeWilliams?action=AttachFile&do=view&target=Extensions.pdf>
> >>>
> >>> Erik
> >>>
> >>> From: Romain Lenglet <romain@xxxxxxxxxxx>
> >>> Date: Tue, 15 Feb 2011 17:03:57 +0900
> >>> To: Hisaharu Ishii <ishii.hisaharu@xxxxxxxxxxxxx>
> >>> Cc: <openstack@xxxxxxxxxxxxxxxxxxx>
> >>>
> >>> Subject: Re: [Openstack] Network Service for L2/L3 Network
> >>> Infrastructure blueprint
> >>>
> >>> Hi Ishii-san,
> >>>
> >>> On Tuesday, February 15, 2011 at 16:28, Hisaharu Ishii wrote:
> >>>
> >>> Hello Hiroshi-san
> >>>
> >>>>> Do you mean that the former API is an interface that is
> >>>>> defined in the OpenStack project, and the latter API is
> >>>>> a vendor-specific API?
> >>>> My understanding is that yes, that's what he means.
> >>>
> >>> I also think so.
> >>>
> >>> In addition, I feel the open issue is which network functions should be
> >>> defined in the generic API, and which network functions should be
> >>> defined in plugin-specific APIs.
> >>> What do you think?
> >>>
> >>> I propose to apply the following criteria to determine which operations
> >>> belong to the generic API:
> >>> - any operation called by the compute service (Nova) directly MUST
> >>> belong to the generic API;
> >>> - any operation called by users (REST API, etc.) MAY belong to the
> >>> generic API;
> >>> - any operation belonging to the generic API MUST be independent from
> >>> details of specific network service plugins (e.g. specific network
> >>> models, specific supported protocols, etc.), i.e. the operation can be
> >>> supported by every network service plugin imaginable, which means that
> >>> if one can come up with a counter-example plugin that cannot implement
> >>> that operation, then the operation cannot belong to the generic API.
> >>>
> >>> How about that?
> >>>
> >>> Regards,
> >>> --
> >>> Romain Lenglet
> >>>
> >>>
> >>>
> >>> Thanks
> >>> Hisaharu Ishii
> >>>
> >>>
> >>> (2011/02/15 16:18), Romain Lenglet wrote:
> >>>
> >>> Hi Hiroshi,
> >>> On Tuesday, February 15, 2011 at 15:47, Hiroshi DEMPO wrote:
> >>> Hello Hisaharu-san
> >>>
> >>>
> >>> I am not sure about the differences between the generic network API and
> >>> the plugin X specific network service API.
> >>>
> >>> Do you mean that the former API is an interface that is
> >>> defined in the OpenStack project, and the latter API is
> >>> a vendor-specific API?
> >>>
> >>>
> >>> My understanding is that yes, that's what he means.
> >>>
> >>> --
> >>> Romain Lenglet
> >>>
> >>>
> >>>
> >>> Thanks
> >>> Hiroshi
> >>>
> >>> -----Original Message-----
> >>> From: openstack-bounces+dem=ah.jp.nec.com@xxxxxxxxxxxxxxxxxxx
> >>> On Behalf Of Hisaharu Ishii
> >>> Sent: Thursday, February 10, 2011 8:48 PM
> >>> To: openstack@xxxxxxxxxxxxxxxxxxx
> >>> Subject: Re: [Openstack] Network Service for L2/L3 Network
> >>> Infrastructure blueprint
> >>>
> >>> Hi, all
> >>>
> >>> As we have said before, we have started designing and writing
> >>> POC code for the network service.
> >>>
> >>> > - I know that there were several documents on the new network
> >>> > service issue that were locally exchanged so far.
> >>> > Why not collect them into one place and share them publicly?
> >>>
> >>> Based on these documents, I created a diagram of the
> >>> implementation (attached). And I propose the following set of
> >>> methods as the generic network service APIs.
> >>> - create_vnic(): vnic_id
> >>>   Create a VNIC and return the ID of the created VNIC.
> >>> - list_vnics(vm_id): [vnic_id]
> >>>   Return the list of vnic_id, where vnic_id is the ID of a VNIC.
> >>> - destroy_vnic(vnic_id)
> >>>   Remove a VNIC from its VM, given its ID, and destroy it.
> >>> - plug(vnic_id, port_id)
> >>>   Plug the VNIC with ID vnic_id into the port with ID port_id managed
> >>>   by this network service.
> >>> - unplug(vnic_id)
> >>>   Unplug the VNIC from its port, previously plugged by calling plug().
> >>> - create_network(): network_id
> >>>   Create a new logical network.
> >>> - list_networks(project_id): [network_id]
> >>>   Return the list of logical networks available for the project with
> >>>   ID project_id.
> >>> - destroy_network(network_id)
> >>>   Destroy the logical network with ID network_id.
> >>> - create_port(network_id): port_id
> >>>   Create a port in the logical network with ID network_id, and return
> >>>   the port's ID.
> >>> - list_ports(network_id): [port_id]
> >>>   Return the list of IDs of ports in a network given its ID.
> >>> - destroy_port(port_id)
> >>>   Destroy port with ID port_id.
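> >>>
> >>> To illustrate how the compute service might drive these generic
> >>> operations when booting a VM (the function names here are hypothetical,
> >>> not part of the proposal):
> >>>
> >>>     def attach_vm_to_network(net_api, network_id):
> >>>         vnic_id = net_api.create_vnic()
> >>>         port_id = net_api.create_port(network_id)
> >>>         net_api.plug(vnic_id, port_id)
> >>>         return vnic_id, port_id
> >>>
> >>>     def detach_vm(net_api, vnic_id, port_id):
> >>>         net_api.unplug(vnic_id)
> >>>         net_api.destroy_port(port_id)
> >>>         net_api.destroy_vnic(vnic_id)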
> >>>
> >>> This design is a first draft,
> >>> so we would appreciate it if you would give us some comments.
> >>>
> >>> In parallel with it, we are writing POC code and uploading
> >>> it to "lp:~ntt-pf-lab/nova/network-service".
> >>>
> >>> Thanks,
> >>> Hisaharu Ishii
> >>>
> >>>
> >>> (2011/02/02 19:02), Koji IIDA wrote:
> >>>
> >>> Hi, all
> >>>
> >>>
> >>> We, NTT PF Lab., also agree to discuss the network service at the
> >>> Diablo DS.
> >>>
> >>> However, we would really like to include the network service in the
> >>> Diablo release because our customers strongly demand this feature. And
> >>> we think that it is quite important to merge the new network service to
> >>> trunk soon after the Diablo DS so that every developer can contribute
> >>> their effort based on the new code.
> >>>
> >>> We are planning to provide source code for the network service in a
> >>> couple of weeks. We would appreciate it if you would review it and give
> >>> us some feedback before the next design summit.
> >>>
> >>> Ewan, thanks for making a new entry on the wiki page (*1). We will also
> >>> post our comments soon.
> >>>
> >>> (*1) http://wiki.openstack.org/NetworkService
> >>>
> >>>
> >>> Thanks,
> >>> Koji Iida
> >>>
> >>>
> >>> (2011/01/31 21:19), Ewan Mellor wrote:
> >>>
> >>> I will collect the documents together as you suggest, and I agree that
> >>> we need to get the requirements laid out again.
> >>>
> >>> Please subscribe to the blueprint on Launchpad -- that way you will be
> >>> notified of updates.
> >>>
> >>> https://blueprints.launchpad.net/nova/+spec/bexar-network-service
> >>>
> >>> Thanks,
> >>>
> >>> Ewan.
> >>>
> >>> -----Original Message-----
> >>> From: openstack-bounces+ewan.mellor=citrix.com@xxxxxxxxxxxxxxxxxxx
> >>> On Behalf Of Masanori ITOH
> >>> Sent: 31 January 2011 10:31
> >>> To: openstack@xxxxxxxxxxxxxxxxxxx
> >>> Subject: Re: [Openstack] Network Service for L2/L3 Network
> >>> Infrastructure blueprint
> >>>
> >>> Hello,
> >>>
> >>> We, NTT DATA, also agree with the majority of folks.
> >>> It's realistic to shoot for the Diablo time frame to have the
> >>> new network service.
> >>>
> >>> Here are my suggestions:
> >>>
> >>> - I know that there were several documents on the new network service
> >>>   issue that were locally exchanged so far.
> >>>   Why not collect them into one place and share them publicly?
> >>>
> >>> - I know that the discussion went into a bit of implementation detail.
> >>>   But now, what about starting the discussion from the higher-level
> >>>   design things (again)? Especially from the requirements level.
> >>>
> >>> Any thoughts?
> >>>
> >>> Masanori
> >>>
> >>>
> >>> From: John Purrier <john@xxxxxxxxxxxxx>
> >>> Subject: Re: [Openstack] Network Service for L2/L3 Network
> >>> Infrastructure blueprint
> >>> Date: Sat, 29 Jan 2011 06:06:26 +0900
> >>>
> >>> You are correct, the networking service will be more complex than the
> >>> volume service. The existing blueprint is pretty comprehensive, not
> >>> only encompassing the functionality that exists in today's network
> >>> service in Nova, but also forward-looking functionality around flexible
> >>> networking/openvswitch and layer 2 network bridging between cloud
> >>> deployments.
> >>>
> >>> This will be a longer-term project and will serve as the bedrock for
> >>> many future OpenStack capabilities.
> >>>
> >>> John
> >>>
> >>> -----Original Message-----
> >>> From: openstack-bounces+john=openstack.org@xxxxxxxxxxxxxxxxxxx
> >>> On Behalf Of Thierry Carrez
> >>> Sent: Friday, January 28, 2011 1:52 PM
> >>> To: openstack@xxxxxxxxxxxxxxxxxxx
> >>> Subject: Re: [Openstack] Network Service for L2/L3 Network
> >>> Infrastructure blueprint
> >>>
> >>> John Purrier wrote:
> >>>
> >>> Here is the suggestion. It is clear from the response on the list that
> >>> refactoring Nova in the Cactus timeframe will be too risky, particularly
> >>> as we are focusing Cactus on Stability, Reliability, and Deployability
> >>> (along with a complete OpenStack API). For Cactus we should leave the
> >>> network and volume services alone in Nova to minimize destabilizing the
> >>> code base. In parallel, we can initiate the Network and Volume Service
> >>> projects in Launchpad and allow the teams that form around these efforts
> >>> to move in parallel, perhaps seeding their projects from the existing
> >>> Nova code.
> >>>
> >>> Once we complete Cactus we can have discussions at the Diablo DS about
> >>> progress these efforts have made and how best to move forward with Nova
> >>> integration and determine release targets.
> >>>
> >>> I agree that there is value in starting the proof-of-concept work
> >>> around the network services, without sacrificing too many developers to
> >>> it, so that a good plan can be presented and discussed at the Diablo
> >>> Summit.
> >>>
> >>> If volume sounds relatively simple to me, network sounds significantly
> >>> more complex (just looking at the code, the network manager code is
> >>> currently used both by nova-compute and nova-network to modify the
> >>> local networking stack, so it's more than just handing out IP addresses
> >>> through an API).
> >>>
> >>> Cheers,
> >>>
> >>> --
> >>> Thierry Carrez (ttx)
> >>> Release Manager, OpenStack
> >>>
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira Networks, Inc.
www.nicira.com | www.openvswitch.org
Sr. Product Manager
cell: 650-906-2650
~~~~~~~~~~~~~~~~~~~~~~~~~~~