openstack team mailing list archive

Re: Allowing clients to pass capability requests through tags?

 

Hi Sandy,

I agree with using tags for full scheduler selection; it's something
I've been pushing for from the start. The request contains any number
of k/v pairs, the services provide any number of k/v pairs, and the
scheduler performs a match (some required, some optional, ...). I see
the URI/zone as one of those tags, not something we need to overload
to contain all of the capabilities. It should only be a hierarchical
"location", which may be geographic location, organizational location
(dept, ...), or some other type (however you decide to construct
your zones).

For example, imagine a dynamic del.icio.us tag that allowed for domain
name filtering on bookmarks (give me all bookmarks with tags [book
review] [domain:slashdot.org]). For Nova, this means issuing requests
like "create instance with [GPU] [Fast disk] [zone:dc1.example.com]".

The important thing is that the zone tag is not specific to a particular
service. For example, Swift would never care about or need to understand a
'GPU' tag, but it can share and understand zone tags.

-Eric

On Fri, Feb 11, 2011 at 12:40:44PM +0000, Sandy Walsh wrote:
> Heh, hate to be the one to bust up the URI love-fest :)
> 
> The issue I have with a single URI being used as the heuristic for node selection is that it is very rigid.
> 
> Different business units have different views on the network:
> * Operations may view it as geography/data centers.
> * Consumers may view it as technical capability (GPUs, fast disk, good inter-server speed, etc.)
> * Sales/marketing may view it as the number of martinis they can buy ;)
> 
> Trees become unmanageable/hard to visualize for users beyond a couple hundred nodes. We are lucky that our geographical/DC-based hierarchy is relatively flat. This is why I was initially pushing for a tag-based system for selection (aka Zone/Host Capabilities).
> 
> Consider the way del.icio.us works. They manage many millions of URLs, and tags are an effective way to slice & dice your way through the data: 
> "Show me all the URLs tagged [OpenStack] [Python] [Zones] [Scheduler]" ... blam.
> 
> This is also the way the old Trader services worked:
> "I want a [wax transfer] [color] printer that can has [30ppm] and [300dpi] on [Floor 2]"
> 
> "Near" simply has to mean the distance in zones from the most-optimal zones, based on the tags.
> 
> "I want a new instance with [GPU] and [Fast Disk] [Good inter-instance network speed] [near] [DRW] [DC1]"
> * where "[near]" implies "as close as possible to" in zone distance.
> 
> Personally I don't like overloading the zone name into a "meaningful" URI when we can get the same functionality with Capabilities/Tags already, especially if it means enforcing a rigid hierarchy. And we already know we need Capability support anyway.
> 
> $0.02
> 
> -S
> 
> 
> 
> ________________________________________
> From: openstack-bounces+sandy.walsh=rackspace.com@xxxxxxxxxxxxxxxxxxx [openstack-bounces+sandy.walsh=rackspace.com@xxxxxxxxxxxxxxxxxxx] on behalf of Eric Day [eday@xxxxxxxxxxxx]
> Sent: Friday, February 11, 2011 4:30 AM
> To: Justin Santa Barbara
> Cc: openstack@xxxxxxxxxxxxxxxxxxx; Devin Carlen
> Subject: Re: [Openstack] Allowing clients to pass capability requests through tags?
> 
> The main reason I was proposing full location/zone of objects is to
> allow this type of 'near' scheduling to happen without understanding
> what the actual object is. For example, imagine we want to start an
> instance near a particular swift object. We could query the swift
> object and in the metadata there could be a 'zone' tag (well, three,
> one for each copy). For example:
> 
> get swift-12345: zone=rack12.room2.dc1.dfw.rackspace.com
> 
> I can now use that zone name to:
> 
> create_instance: openstack:near=rack12.room2.dc1.dfw.rackspace.com
> 
> The deployment can decide what 'near' is (perhaps a measure of link
> speed or latency). This way a particular deployment that uses the
> same URI/zone names across projects can account for locality without
> knowing what the objects from different services actually are. If it were just
> 'near=swift-12345', it would need to understand what a swift object
> was and perform that lookup to find out where it is.
> 
> So you can still grab a zone tag from a volume you created:
> 
> get vol-000001: zone=rack4.room2.dc1.dfw.rackspace.com
> 
> and use the zone to launch an instance with:
> 
> create_instance: openstack:near=rack4.room2.dc1.dfw.rackspace.com
> 
> We can also write schedulers/tools for a particular deployment
> that understand the zones and just say 'always prefer
> dc1.dfw.rackspace.com', because power is cheaper there right now, or
> 'test.dc1.dfw.rackspace.com' because that is my test zone (perhaps
> only enabled for certain accounts in the scheduler too).
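> 
> A toy sketch of that kind of deployment-specific policy (host_zone,
> near and preferred here are made-up inputs, not real scheduler hooks):
> weight hosts by whether their zone sits under a preferred zone, and by
> how deep a common ancestor they share with an optional 'near' hint.
> 
>     # Illustrative weighting only, not actual Nova scheduler code.
>     def weigh_host(host_zone, near=None, preferred="dc1.dfw.rackspace.com"):
>         score = 0
>         if host_zone.endswith(preferred):
>             score += 10                # always prefer the cheap-power DC
>         if near:
>             a = host_zone.split(".")[::-1]
>             b = near.split(".")[::-1]
>             for x, y in zip(a, b):
>                 if x != y:
>                     break
>                 score += 1             # deeper common ancestor == nearer
>         return score
> 
>     # weigh_host("rack12.room2.dc1.dfw.rackspace.com",
>     #            near="rack4.room2.dc1.dfw.rackspace.com")  ->  15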
> 
> -Eric
> 
> On Thu, Feb 10, 2011 at 03:38:42PM -0800, Justin Santa Barbara wrote:
> >    I think the blueprint was largely complementary to the multi-zone stuff;
> >    this is more about how the client _requests_ a particular
> >    location/capability through the API.  The multi-zone blueprint seems to be
> >    more about how nova would satisfy those requests (in a non-trivial zone
> >    structure.)
> >    The root motivator is indeed getting a 'good' connection to a storage
> >    volume.  I'm thinking of iSCSI SAN storage here, so in my case this
> >    probably means the SAN device with the fewest switches in
> >    between.  There could well be SAN devices in each rack (e.g. Solaris
> >    volume nodes), or the devices could even be running on the host nodes, and
> >    I don't believe that zones in the EC2 sense are sufficient here.
> >    But I guess that if the zone hierarchy went all the way down to the rack
> >    (or machine), that would work.  So I could create a volume and it would
> >    come back with a location of "rack4.room2.dc1.dfw.rackspace.com" and I
> >    could then request allocation of machines in that same rack?  Is that the
> >    vision of the nested zones?
> >    I do have a concern that long-term if we _only_ use zones, that's trying
> >    to multiplex a lot of information into the zone hierarchy, and we can
> >    really only put one attribute in there.  I also like the flexibility of
> >    the 'openstack:near=vol-000001' request, because then the cloud can decide
> >    how near to place the instance based on its knowledge of the topology, and
> >    the clients can be oblivious to the storage system and arrangement.  But,
> >    my immediate requirement would indeed be satisfied if the zones went down
> >    to the rack/machine level.
> >    An alternative way to look at zones and instance-types is that they're
> >    actually just fail-if-not-satisfiable tags of the creation request
> >    (openstack:+zone=us-east-1a and openstack:+instancetype=m1.large).  They're
> >    only distinguished attributes because AWS doesn't have an
> >    extensibility mechanism, which this blueprint would give us.
> >    Justin
> >
> >    On Thu, Feb 10, 2011 at 3:12 PM, Devin Carlen <devcamcar@xxxxxx> wrote:
> >
> >      I haven't totally digested this blueprint yet but it seems like there is
> >      some overlap with what is being discussed with the multi zone metadata
> >      stuff.  One approach might be to handle this at the scheduler level
> >      though and try to ensure things are always in the same zone when
> >      appropriate.
> >      I think the bigger question you raise is how to request local volumes
> >      when possible, yes?
> >
> >      Devin
> >      On Feb 10, 2011, at 3:37 PM, Justin Santa Barbara <justin@xxxxxxxxxxxx>
> >      wrote:
> >
> >        Does anyone have any thoughts/objections on the blueprint I posted for
> >        allowing clients to pass capability-requests through tags?  I'm
> >        planning on starting implementation soon, so if people think this is a
> >        bad idea I'd rather know before I start coding!
> >        Blueprint: https://blueprints.launchpad.net/nova/+spec/use-metadata-tags-for-capabilities
> >        Wiki: https://blueprints.launchpad.net/nova/+spec/use-metadata-tags-for-capabilities
> >        And a quick TLDR:
> >        API clients need a way to request e.g. placement of machines near each
> >        other / near volumes, or that a volume be created with a particular
> >        RAID level, or that a machine be created in a HIPAA-compliant
> >        environment.  (This is complementary to the work on hierarchical zones
> >        & URL naming, I believe)
> >        I propose using the instance tags for this, e.g. specifying
> >        openstack:near=vol-000001 when creating an instance to request
> >        locating the instance 'close to' that volume.
> >        By default these requests would be best-effort and ignored-if-unknown;
> >        if the client wants to specify that something is required and should
> >        fail if not understood or not satisfiable, they could use a "+" e.g.
> >        openstack:+location=*.dc1.north.rackspace.com
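> >
> >        As a sketch of the proposed convention (the helper name here is
> >        made up, purely illustrative): split the openstack: tags into
> >        best-effort and required ("+"-prefixed) requests, and fail only
> >        when a required key is not understood or not satisfiable.
> >
> >          # Illustrative parsing of the proposed openstack:* tag convention.
> >          def split_capability_tags(tags):
> >              best_effort, required = {}, {}
> >              for key, value in tags.items():
> >                  if not key.startswith("openstack:"):
> >                      continue
> >                  name = key[len("openstack:"):]
> >                  if name.startswith("+"):
> >                      required[name[1:]] = value   # must be satisfied, else fail
> >                  else:
> >                      best_effort[name] = value    # best effort, ignored if unknown
> >              return best_effort, required
> >
> >          # e.g. split_capability_tags({"openstack:near": "vol-000001",
> >          #          "openstack:+location": "*.dc1.north.rackspace.com"})
> >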
> >        Controversially (?), this would not be supported for clients using the
> >        AWS API, because tags can only be specified once the instance has
> >        already been created.
> >        Feedback appreciated!
> >        Justin
> >


