Re: Multi Clusters in a Region ...
Hi Sandy,
I think there is some consensus forming around URI names for zones
and objects, so for now I might make sure the zone API rejects names
that are not compliant (i.e., no spaces, ...). This is also easy to
change later as plans firm up.
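
As a rough sketch, that validation might look something like this (a
hypothetical helper, not actual Nova code; the allowed character set
is an assumption, here DNS-style labels joined by dots):

    import re

    # Hypothetical rule: restrict zone names to DNS-style labels
    # (letters, digits, hyphens) separated by dots, so every name is
    # safe to embed in a URI -- no spaces or other odd characters.
    _ZONE_NAME_RE = re.compile(
        r'^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?'
        r'(\.[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?)*$')

    def validate_zone_name(name):
        """Raise if a zone name would not be URI-compliant."""
        if not _ZONE_NAME_RE.match(name):
            raise ValueError("zone name %r is not URI-compliant" % name)
        return name

    validate_zone_name('dc1.north.rackspace.com')  # accepted
    # validate_zone_name('my zone')  # raises ValueError
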
-Eric
On Thu, Feb 10, 2011 at 01:19:57PM +0000, Sandy Walsh wrote:
> Thanks Eric,
>
> As per our Skype conversation yesterday, I agree with your suggestions. Thanks for taking the time!
>
> I've updated the blueprint to reflect these changes.
>
> I didn't get into the URI-for-zone-name question for now since it's more of a scheduler concern, but I think it's a good idea.
>
> Keep it coming!
> -Sandy
>
>
> ________________________________________
> From: Eric Day [eday@xxxxxxxxxxxx]
> Sent: Wednesday, February 09, 2011 1:17 PM
> To: Sandy Walsh
> Cc: openstack@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Openstack] Multi Clusters in a Region ...
>
> Hi Sandy,
>
> Replying via email since not much discussion seems to be happening
> on the etherpad.
>
> I'm glad to see we're going with a REST API for inter-zone
> communication. This API should be the same one we use for connecting
> any two clouds together, for example a customer's private cloud and a
> public cloud (the private cloud would register resource availability
> for just their account). Keep this use case in mind (multi-tenant
> peering) as you're working through things, as it will probably be
> needed sooner rather than later.
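>
> To make the peering case concrete, here is a minimal sketch of a
> private cloud announcing itself to a public cloud over REST (the
> /zones endpoint, the payload shape, and the auth header are all
> assumptions since this API isn't settled; urllib2 is the Python 2
> stdlib we're on today):
>
>     import json
>     import urllib2
>
>     def register_child_zone(parent_api_url, zone_name, api_url, token):
>         """Hypothetical: register this cloud as a child zone so the
>         parent's scheduler can route requests to it."""
>         body = json.dumps({'zone': {'name': zone_name,
>                                     'api_url': api_url}})
>         req = urllib2.Request(parent_api_url + '/zones', body,
>                               {'Content-Type': 'application/json',
>                                'X-Auth-Token': token})
>         return json.loads(urllib2.urlopen(req).read())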
>
> The main concern I have with the current proposal is the hosts
> table. Do we really need it? Instances can belong directly to zones,
> and the capabilities can be live data; there is no reason to store
> this in a DB. The various workers (compute, network, volume) can
> notify the scheduler workers of their current status, either
> periodically or when something changes, and this information can be
> aggregated and pushed to other nodes. We're going to need to do this
> work eventually because we never want the hosts (i.e., compute
> workers) writing directly to the DB, which they would need to do now
> for host stats.
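>
> A minimal sketch of that push model (the class names, the rpc-style
> client, and the capability fields are all assumptions, not the real
> Nova interfaces):
>
>     class SchedulerClient(object):
>         """Stand-in for an RPC cast to the scheduler workers."""
>         def update_host_caps(self, host, caps):
>             print('scheduler <- %s: %s' % (host, caps))
>
>     class ComputeWorker(object):
>         """Pushes live capability data instead of writing host
>         stats to the DB."""
>         def __init__(self, scheduler, host):
>             self.scheduler = scheduler
>             self.host = host
>             self._last_caps = None
>
>         def report_capabilities(self, caps):
>             # Called periodically or whenever something changes;
>             # only bother the scheduler when the data is new.
>             if caps != self._last_caps:
>                 self.scheduler.update_host_caps(self.host, caps)
>                 self._last_caps = caps
>
>     worker = ComputeWorker(SchedulerClient(), 'compute1')
>     worker.report_capabilities({'free_ram_mb': 4096, 'instances': 3})
>     worker.report_capabilities({'free_ram_mb': 4096, 'instances': 3})  # no-op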
>
> As far as zone naming goes, I know we want to keep this free-form,
> but I'm starting to lean towards forcing a URI format. As we start
> integrating existing services and introducing new ones, we need some
> common locality references for objects between systems (more on this
> in another email later). For Nova, this would mean having something
> like:
>
> Example Zone Graph (assume an 'openstack-compute://' or other
> 'openstack-<service>://' prefix for all URIs):
>
>     rackspace.com
>         north.rackspace.com
>             dc1.north.rackspace.com
>                 rack1.dc1.north.rackspace.com
>                 ...
>             dc2.north.rackspace.com
>             ...
>         south.rackspace.com
>             dc1.south.rackspace.com
>             ...
>             dc2.south.rackspace.com
>             ...
>     private.customer1.com
>     private.customer2.com
>     ...
>
> This allows us to make API requests such as "Create an instance
> somewhere under zone 'north.rackspace.com'" or "Create an instance
> under 'existing_instance.zone' because that is where I already have
> another instance". Deployments can choose to be as specific as they
> want and organize names in any way. All URIs will be dynamic and
> could change at any point (migration due to failover, re-balancing,
> ...), so they should always be auto-discovered via another API call
> before being used.
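>
> As a hypothetical request sketch (the 'zone' parameter name is an
> assumption, not part of the current API):
>
>     create_request = {
>         'server': {
>             'name': 'web1',
>             'flavorId': 1,
>             'imageId': 42,
>             # Place this instance anywhere under the given zone.
>             'zone': 'north.rackspace.com',
>         }
>     }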
>
> We can extend this one step further and give every object in
> OpenStack a URI as well (it may or may not be resolvable via DNS),
> for example instanceXXXX.rack1.dc1.north.rackspace.com. These would
> also be dynamic, since a migration may obviously cause an instance
> to move racks (or huddles, or whatever we want to call them). The
> URI would just be instance.name + '.' + current_zone_name.
>
> This type of locality naming gives us *some* structure, so we can
> easily perform suffix matches across services to see if we're in the
> same zone without understanding the full hierarchy, while keeping
> things in a simple format everyone already understands.
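>
> A small sketch of both ideas (the function names are hypothetical):
>
>     def object_uri(name, zone_name):
>         """Build an object's locality URI from its current zone."""
>         return '%s.%s' % (name, zone_name)
>
>     def shares_locality(uri_a, uri_b):
>         """Suffix match: True if one URI sits under (or equals) the
>         other, without knowing the full hierarchy."""
>         a, b = uri_a.split('.'), uri_b.split('.')
>         shorter, longer = sorted((a, b), key=len)
>         return longer[-len(shorter):] == shorter
>
>     uri = object_uri('instanceXXXX', 'rack1.dc1.north.rackspace.com')
>     shares_locality(uri, 'north.rackspace.com')   # True
>     shares_locality(uri, 'south.rackspace.com')   # False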
>
> -Eric
>
> On Mon, Feb 07, 2011 at 12:09:26PM +0000, Sandy Walsh wrote:
> > Hi again,
> > I've made some final changes to the Multi-Cluster spec and hope to start
> > coding this week.
> > In a nutshell:
> > I spent the past week messing with RabbitMQ clusters, including WAN
> > clusters between several Rackspace Data Centers. RabbitMQ doesn't really
> > support inter-cluster communication without a nascent piece of technology
> > called Shovel.
> > Because of this, and because of some concerns voiced by others, I've
> > changed the spec to abstract the inter-zone communication layer: it
> > uses Nova API communications initially but leaves room for AMQP
> > communications down the road.
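> >
> > A minimal sketch of what that abstraction might look like (the
> > class and method names are hypothetical, not from the spec):
> >
> >     class ZoneCommsDriver(object):
> >         """Interface for sending a request to another zone."""
> >         def call(self, zone, method, **kwargs):
> >             raise NotImplementedError
> >
> >     class NovaApiDriver(ZoneCommsDriver):
> >         """Initial implementation: call the child zone's public
> >         Nova REST API."""
> >         def call(self, zone, method, **kwargs):
> >             pass  # HTTP request against the zone's registered URL
> >
> >     class AmqpDriver(ZoneCommsDriver):
> >         """Possible later implementation, once inter-cluster AMQP
> >         (e.g. RabbitMQ's Shovel) matures."""
> >         def call(self, zone, method, **kwargs):
> >             pass  # publish on a bridged exchange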
> > http://wiki.openstack.org/MultiClusterZones now reflects these changes.
> > Specifically: http://wiki.openstack.org/MultiClusterZones#Inter-Zone_Communication_and_Routing
> > (P.S. I have more data on the WAN testing I did this weekend and
> > will put it on the wiki later today.)
> > Once again, I look forward to your feedback
> > here: http://etherpad.openstack.org/multiclusterdiscussion
> > Thanks in advance,
> > Sandy
> >
> > ----------------------------------------------------------------------
> >
> > From: openstack-bounces+sandy.walsh=rackspace.com@xxxxxxxxxxxxxxxxxxx
> > [openstack-bounces+sandy.walsh=rackspace.com@xxxxxxxxxxxxxxxxxxx] on
> > behalf of Sandy Walsh [sandy.walsh@xxxxxxxxxxxxx]
> > Sent: Monday, January 31, 2011 3:26 PM
> > To: openstack@xxxxxxxxxxxxxxxxxxx
> > Subject: [Openstack] Multi Clusters in a Region ...
> > Hi y'all,
> > Now that the Network and API discussions have settled down a little,
> > I thought I'd kick up the dust again.
> > I'm slated to work on the Multi Cluster in a Region blueprint for
> > Cactus. This also touches on Zone/Host Capabilities and the
> > Distributed Scheduler, so feedback is important.
> > https://blueprints.launchpad.net/nova/+spec/multi-cluster-in-a-region
> > Here is my first draft of a spec. I'm putting it out there as a strawman.
> > Please burn as needed. Links to previous spec/notes are at the top of the
> > spec.
> > http://wiki.openstack.org/MultiClusterZones
> > I will adjust as feedback is gathered.
> > We can discuss this in this thread or on the etherpad (I prefer the
> > etherpad since it's linked from the wiki page):
> > http://etherpad.openstack.org/multiclusterdiscussion
> > Thanks in advance,
> > Sandy
> >
>
> > _______________________________________________
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@xxxxxxxxxxxxxxxxxxx
> > Unsubscribe : https://launchpad.net/~openstack
> > More help : https://help.launchpad.net/ListHelp
>