
Re: Multi-Cluster/Zone - Devil in the Details ...

Hi Sandy,

On Wed, Feb 16, 2011 at 06:19:52PM +0000, Sandy Walsh wrote:
>      Hmm..... You wouldn't really need to re-marshal the request. Just
>      copy the needed headers & URL, and pass along the body as you
>      received it. Basically you are just acting as a sort of HTTP proxy.
> 
>    Yup, that's what I was proposing. I guess I wasn't clear.

Agreed, we should never assume an HTTP request outside of
nova/api. Anything could be consuming the internal nova/compute,
nova/volume, ... APIs, so you need to proxy using only the information
given via those API calls. Likewise, down the road, the zones could
be connected in some other way that isn't HTTP, so a given scheduler
may be written to send via some other queue system. Of course, for
now, just use the simplest sane default: build an HTTP request object
in the scheduler and proxy it using the configured zone URIs.
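
Roughly what I mean, as a sketch only (the helper name and the
Content-Type choice are my own, nothing in the tree yet):

    import json
    import urllib.request

    def proxy_to_child_zone(zone_uri, method, path, body, headers):
        """Replay an internal API call against a child zone's URI."""
        data = json.dumps(body).encode('utf-8')
        all_headers = dict(headers)
        all_headers['Content-Type'] = 'application/json'
        req = urllib.request.Request(zone_uri.rstrip('/') + path,
                                     data=data, headers=all_headers,
                                     method=method)
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

The scheduler stays the only place that knows the transport; swap
this helper out and the rest of the flow is untouched.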

>    Yes. The proxying will have to occur in the scheduler where the decision
>    making is done, correct?

Yes. It's more difficult too, and it relates to the other issues. Right
now the compute API writes a DB record and simply passes the scheduler
an ID that's only local to the zone. The prerequisite work I was doing
was aimed at changing this so that the entire request (in the form of
a Python dict) gets passed around until it reaches the final zone. We
still need to do this before going much further; otherwise we'll have
a mess of unused instance records in the top-level zone.
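
To make that concrete, a minimal sketch of the shape I'm after (every
name below is hypothetical, and the forwarding step assumes an HTTP
helper like the one sketched earlier):

    def pick_child_zone(request_spec, child_zones):
        """Scheduler decision: return the child zone that should take
        the request, or None if this zone should build it itself."""
        for zone in child_zones:
            if zone.get('free_vcpus', 0) >= request_spec.get('vcpus', 1):
                return zone
        return None

    def schedule_run_instance(request_spec, child_zones, db_instance_create):
        child = pick_child_zone(request_spec, child_zones)
        if child is not None:
            # Not ours: pass the untouched dict one level down.
            return proxy_to_child_zone(child['uri'], 'POST', '/servers',
                                       request_spec, {})
        # Final zone: only now does an instance record hit a database.
        return db_instance_create(request_spec)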

>      Basically there needs to be some notification (pub/sub, REST call,
>      whatever) that gets passed back up the chain to the 'higher'
>      schedulers. They use this to replicate basic info on the higher
>      zones (possibly as a cache). This could also drive an event feed
>      to the end user.
>      The alternative is to pull from zones and cache that. But the
>      notification approach seems more efficient.
> 
>    I like the notification scheme as well. We may want to revisit once that
>    gets in place.

What are we notifying about exactly? Just status for child zones? I
think this is better off being a poll from the parent zone, so a child
can function without even knowing whether it has a parent zone.
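
Something like this on the parent side would be enough (sketch only;
the /info resource is made up for illustration):

    import json
    import urllib.request

    def poll_child_zones(zone_uris):
        """One polling pass from the parent; run it periodically."""
        status = {}
        for uri in zone_uris:
            try:
                with urllib.request.urlopen(uri + '/info',
                                            timeout=5) as resp:
                    status[uri] = json.loads(resp.read())
            except OSError:
                status[uri] = None  # unreachable; don't block the rest
        return status

A dead child just shows up as None in the cache, and the child itself
never has to know who is polling it.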

>        Seems impractical if *every* command has to do a zone search every
>        time.
> 
>      Besides the zones having replicated info on the instances (I'm
>      assuming each zone has its own DB), the instance_ids could have
>      structure to them (i.e. a URI) which could indicate which zone
>      they live in.

Agreed, and this is something I've been trying to discuss for a
while. :)  All objects (instances, volumes, networks, ...) need
globally unique IDs. I tried pushing for UUIDs but that was rejected. I
think now that we should leave it up to the zone. Each zone name should
be globally unique, and my opinion is we should use a DNS name for it
(although I know some folks disagree on other threads). An instance
name could then be a zone-level unique ID plus the zone name. By using
DNS, we can do simple routing without having to cache where every
instance is, just by doing suffix matching against the list of child
zones.
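
The routing side of that is almost trivial, e.g. (toy example, names
made up):

    def route_by_suffix(object_name, child_zone_names):
        """Return the child zone whose DNS name is a suffix of the
        object name, or None if the object is local or unknown."""
        for zone in child_zone_names:
            if object_name.endswith('.' + zone):
                return zone
        return None

    # route_by_suffix('i-0042.east.dfw.example.com',
    #                 ['east.dfw.example.com', 'west.dfw.example.com'])
    # => 'east.dfw.example.com'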

If we don't want to expose this detail (which I know some folks were
opposed to), we basically need to do the same thing but add a layer
of abstraction around it, so we can still have arbitrary globally
unique IDs for zones and objects. We would then need to consult an
encoding plugin to route them, or simply cache every object/zone
mapping at the higher-level zones.
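
In code form, the abstraction could be as small as this (hypothetical
interface, just to show the shape):

    class ZoneCodec(object):
        """Plugin interface: map an opaque global ID to a child zone
        name, or return None if the codec can't tell."""
        def zone_for(self, object_id):
            raise NotImplementedError

    class SuffixZoneCodec(ZoneCodec):
        """One possible encoding: the DNS-suffix scheme from above."""
        def __init__(self, child_zone_names):
            self.child_zone_names = child_zone_names

        def zone_for(self, object_id):
            for zone in self.child_zone_names:
                if object_id.endswith('.' + zone):
                    return zone
            return None

    def resolve_zone(object_id, codec, mapping_cache):
        """Try the plugin first, then the cached object->zone map."""
        return codec.zone_for(object_id) or mapping_cache.get(object_id)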

-Eric


