
openstack team mailing list archive

Re: Architecture for Shared Components

 

On Mon, Aug 2, 2010 at 4:31 PM, Jorge Williams <jorge.williams@xxxxxxxxxxxxx> wrote:

> Hmm..  Let me see if I understand what you're saying.  Correct me if I'm
> wrong here.  You're still advocating a proxy approach where an HTTP request
> is sent from one proxy to another... (Pardon my text drawings)
>
> ...but you are proposing that individual proxies can make sideways calls to
> make additional service requests...
>
> If so, that's exactly along the lines of what I was thinking.
>

Yep.  I hadn't argued for putting a cache as its own layer, but I think the
bigger picture is the same -- a stack of proxies that can call out sideways.
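
To make that shape concrete, here's a minimal sketch of the stack as
Python WSGI middleware; the IDM address, the X-Auth-Token header, and
the /tokens validation route are hypothetical placeholders, not a
settled contract:

    import urllib.error
    import urllib.request


    class AuthMiddleware:
        """Auth layer: validates each request with a sideways call to IDM."""

        def __init__(self, app, idm_url):
            self.app = app
            self.idm_url = idm_url

        def __call__(self, environ, start_response):
            token = environ.get('HTTP_X_AUTH_TOKEN', '')
            try:
                # Sideways call: ask the IDM service if the token is valid.
                urllib.request.urlopen('%s/tokens/%s' % (self.idm_url, token))
            except urllib.error.URLError:
                # Treat any failure (bad token, IDM unreachable) as a denial.
                start_response('401 Unauthorized', [('Content-Length', '0')])
                return [b'']
            return self.app(environ, start_response)


    def api_endpoint(environ, start_response):
        """Innermost layer: the API endpoint itself."""
        body = b'ok'
        start_response('200 OK', [('Content-Length', str(len(body)))])
        return [body]


    # Stack the layers: requests flow top-down through the proxies, and
    # any layer can call out sideways to a shared service as needed.
    app = AuthMiddleware(api_endpoint, idm_url='http://idm.example.com')

Each additional layer (cache, rate limit) would wrap the stack the same
way, so the request path stays one-directional while service lookups go
sideways.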


>
>                             [  SSL Term  ]
>                                   |
>                                   v
>               +------------>[   Cache    ]
>               |                   |
>               |                   v
>               |             [    Auth    ]--->[ IDM SERVICE ]
>               |                   |
>               |                   v
>               |             [ Rate Limit ]
>            (Purge)                |
>               |                   v
>               |             [API Endpoint]
>               |                   |
>               |                   v
>               |                  ...
>               |                   |
>               |                   |
>               |                   |
>               |                   |
>               +-------------------+
>
>
> For example, say a user issues a command to delete a server.  We'll need to
> purge every representation (XML, JSON, XML GZip, JSON GZip) of that server
> from the front-end cache.  I suppose we could detect the delete operation at
> the caching stage, but that means having a very customized cache, and I'd
> like to reuse that code for different APIs.  What's more, certain events may
> trigger cache purges from the backend directly, say a server transitioning
> from "RESIZE" to "ACTIVE".  I really don't see how we can avoid these
> downstream communications entirely.
>

If we made the cache a proxy layer, then I would agree that communication
would loop upstream again.  If we made it a service, accessible from any
proxy via an API, then we would avoid that complexity.  So I'd definitely
argue for the latter approach for caching (as does Eric).  As a
counterexample, I'd keep rate limiting as its own layer, since it's needed in
only one place and isn't called regularly by anyone downstream.
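
To make the cache-as-a-service approach concrete, here's a minimal
sketch of a purge call, assuming a hypothetical HTTP interface on the
cache service; the service address, URL layout, and DELETE semantics
are illustrative only:

    import urllib.request

    CACHE_SERVICE = 'http://cache.example.com'  # hypothetical address

    # Every representation the front-end cache may hold for one server.
    VARIANTS = [
        {'Accept': 'application/xml'},
        {'Accept': 'application/json'},
        {'Accept': 'application/xml', 'Accept-Encoding': 'gzip'},
        {'Accept': 'application/json', 'Accept-Encoding': 'gzip'},
    ]


    def purge_server(server_id):
        """Ask the cache service to drop every cached variant of a server."""
        for headers in VARIANTS:
            req = urllib.request.Request(
                '%s/cache/servers/%s' % (CACHE_SERVICE, server_id),
                headers=headers, method='DELETE')
            urllib.request.urlopen(req)

Because the purge is an ordinary API call into a shared service, the API
endpoint (on a server delete) and backend workers (on a RESIZE -> ACTIVE
transition) can both trigger it without sending anything back up through
the proxy stack.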

Michael
