Re: Proposed OpenStack Service Requirements

On Wed, Feb 9, 2011 at 1:38 PM, Eric Day <eday@xxxxxxxxxxxx> wrote:
> The email thread with the subject "[RFC] OpenStack Programming Model
> Framework" from Jan 3rd covered a few foundational proposals for
> OpenStack projects, mainly with a focus on APIs. I'd like to expand
> on this a bit more.
>
> I think everyone is in agreement that each service should default
> to a REST API except for some end-user communication depending
> on the service (for example, MySQL or PG protocol for database
> services). Admin, provisioning, and any other gaps in functionality
> not encapsulated by these application specific protocols should be
> filled in via REST calls.

++
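
To make the split concrete, a hypothetical example (the host, path, and
fields below are all made up, not a real API): provisioning for a
database service goes over REST, while data access uses the native
protocol:

    import httplib, json

    # REST call to provision a database instance (hypothetical API)
    conn = httplib.HTTPConnection("db.api.example.com")
    body = json.dumps({"name": "mydb", "flavor": "small"})
    conn.request("POST", "/v1/123456/instances", body,
                 {"Content-Type": "application/json"})
    instance = json.loads(conn.getresponse().read())
    # ...then point a normal MySQL or PG client at instance["host"].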

> There is other common functionality we should consider for OpenStack
> services. We don't need anything too formal, just a "best practices"
> document that can change with time. This will hopefully also drive
> openstack-common projects for chosen languages so we can encourage
> code sharing between projects (although not required).

Heh, good luck on this one. It's like:

* herding cats
* pulling teeth
* getting developers to agree on something

All of which are similar to each other.

> The main candidates for discussion are:
>
> * Authc/authz/access - We had a marathon thread about this the other
>  day, not much more to say. I think there is consensus that auth
>  needs to be pluggable for all services and be flexible enough to
>  accommodate various organizational structures.

Sure, though I see this as being very implementation-dependent and not
something that is particularly useful as an OpenStack project or
service.  That said, I still maintain that Nova having the ability to
natively understand organizational structures is useful, but like you
said, it was a marathon thread and no need to rehash here ;)
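
For what it's worth, a rough sketch of what "pluggable" could mean in
practice (every name below is hypothetical, not an existing interface):
each service loads an auth driver from its config and only ever talks
to this interface:

    class AuthDriver(object):
        def authenticate(self, credentials):
            """Return account info for valid credentials, else raise."""
            raise NotImplementedError

        def authorize(self, account, action, resource):
            """Return True if account may perform action on resource."""
            raise NotImplementedError

    class StaticAuthDriver(AuthDriver):
        """Toy in-memory backend; real ones might hit LDAP, a DB, etc."""
        def __init__(self, accounts):
            self.accounts = accounts  # {user: key}

        def authenticate(self, credentials):
            if self.accounts.get(credentials["user"]) == credentials["key"]:
                return {"account_id": credentials["user"]}
            raise ValueError("bad credentials")

        def authorize(self, account, action, resource):
            return True  # everything allowed in this toy backend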

> * Zones and Location URIs for all objects - I think we'll all agree
>  that when working with distributed services, location becomes very
>  important. Issues like bandwidth limitations, latency, and location
>  redundancy are top priorities. Swift is already location-aware, and work
>  is underway in Nova, but I believe there will be a huge value in
>  making a common location format across services. For example,
>  being able to make requests like "launch a nova instance near
>  this swift object", or if we have a queue and database services,
>  "run these queue workers near a copy of this database".
>
>  In an effort to keep things simple, I propose we simply assign URIs
>  to all zones and objects across services. These would be dynamic,
>  as objects can move around due to fail-over, rebalancing, and so
>  forth. It would be up to the API consumer to request locations
>  (with some reasonable TTL) before using them. What does this mean
>  for services? Just have a 'location' attribute for every zone and
>  object (Swift object, nova instances, nova volume, ...).
>
>  The URI does imply a dotted hierarchy in order to perform suffix
>  matching (for example, zone cage1.dc1.north.service.com would
>  match for *.north.service.com requests). We could keep this field
>  completely free-form and make the location generation/matching
>  pluggable, but URIs seem to be pretty ubiquitous. DNS entries for
>  these URIs are possible but not required.

The main thing I feel is problematic with the above: a zone does not
imply any sort of location at all. No geographic location or
inter-datacenter location is implied for a zone. A zone can just as
easily be a set of hosts or a set of geographically-associated zones
that have a specific service-level agreement defined for them.

In other words, a Zone is merely a container of hosts or other zones.
Nothing more :)
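
That said, the suffix matching itself is simple enough to do on
free-form dotted names; a quick sketch (function name made up):

    def location_matches(location, pattern):
        """Match 'cage1.dc1.north.service.com' vs '*.north.service.com'."""
        if pattern.startswith("*."):
            return location.endswith(pattern[1:])  # keeps the leading dot
        return location == pattern

    assert location_matches("cage1.dc1.north.service.com",
                            "*.north.service.com")
    assert not location_matches("cage1.dc1.south.service.com",
                                "*.north.service.com")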

> * Firehouse - So far we've not discussed this too much, but I

Firehose I assume? :)

>  think when we did there was agreement that we need it. As more
>  services come into the picture, we want the ability to combine and
>  multiplex our logs, events, billing information, etc. so we can
>  report per account, per service, and so forth. For example, as
>  a user, I want to be able to see the logs or billing events with
>  all entries from all my services (or filter by service), but as a
>  sysadmin I may want to view per service, or per zone. We may have
>  registered handlers to grab certain events for PuSH notifications
>  too. To maintain maximum flexibility across deployments we need to
>  keep the interface generic; the payload can be a JSON object or some
>  more efficient serialized message (this can be pluggable).

Carrot already lets this be pluggable, AFAIK: JSON, pickle, XML, etc.

>  The only
>  required fields are probably:
>
>  <timestamp> <service> <account_id> <blob>
>
>  Where <blob> is a list of key/value pairs that handlers can
>  perform routing and processing on. For a logging event, blob
>  may be "priority=ERROR, message=oops!" or "priority=information,
>  message=instance X launched". We can keep things really simple and
>  flexible, relying on a set of documented common attributes that
>  common event producers, routers, and handlers can key in on.

Full agreement from me.
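
Just to make it concrete, here's roughly what one of those events could
look like serialized as JSON (field names are hypothetical, and as you
say the serialization itself would be pluggable):

    import json, time

    event = {
        "timestamp": time.time(),
        "service": "nova",
        "account_id": "acct-42",
        "blob": {"priority": "ERROR", "message": "oops!"},
    }
    payload = json.dumps(event)

    # a router or handler could then dispatch on blob attributes:
    if event["blob"].get("priority") == "ERROR":
        pass  # e.g. hand off to an alerting handler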

-jay


