openstack team mailing list archive
Message #00003
Architecture for Shared Components
Hi Everyone,
A number of us have been discussing different ways of sharing
components between different projects in OpenStack, such as auth*,
caching, rate limiting, and so on. There are a few ways to approach
this, and we thought it would be best to put this out on the mailing
list for folks to discuss.
The two approaches proposed so far are a proxy layer that would
sit in front of the service APIs and a library/module that would
encapsulate these shared components and allow the services to consume
it at any layer. The problem we are trying to solve is re-usability
across different services, as well as a layer that can scale along
with the service. I'm leaning towards the library approach so I'll
explain that side.
The basic idea is to provide a set of libraries or modules that can be
reused across projects, each exposing a common API the services consume
(auth, caching, ...). The libraries themselves would be modular, in
that different backends could provide each service. These interfaces
would need to be written for each language we plan to support (or
written once in something like C, with extensions built on top of it).
Tools like SWIG can help in this area.
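To make the idea concrete, here is a minimal sketch of what such a shared library could look like, using auth as the example. All names here (AuthBackend, AuthLibrary, DictBackend) are hypothetical illustrations, not a proposed OpenStack interface; the point is the shape: one common API in front, swappable backends behind it.

```python
# Hypothetical sketch of a shared auth library: a common API each
# service consumes, with pluggable backends underneath.

class AuthBackend:
    """Interface each backend (LDAP, token store, ...) would implement."""
    def validate(self, token):
        raise NotImplementedError

class DictBackend(AuthBackend):
    """Trivial in-memory backend, for demonstration only."""
    def __init__(self, tokens):
        self._tokens = tokens

    def validate(self, token):
        # Return the user a token maps to, or None if unknown.
        return self._tokens.get(token)

class AuthLibrary:
    """The common API a service calls; the backend is pluggable."""
    def __init__(self, backend):
        self._backend = backend

    def authenticate(self, token):
        user = self._backend.validate(token)
        if user is None:
            raise ValueError("invalid token")
        return user

auth = AuthLibrary(DictBackend({"abc123": "alice"}))
print(auth.authenticate("abc123"))  # -> alice
```

A service written in another language would use the same interface via generated bindings, which is where something like SWIG comes in.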
The reasoning behind this approach over the proxy is that you're not
forced to answer questions out of context. Having the appropriate
amount of context, and doing checks at the appropriate layer, are key
in building efficient systems that scale. If we use the proxy model,
we will inevitably need to push more service-specific context up
into that layer to handle requests efficiently (URL structure for the
service, peeking into the JSON/XML request to interpret the request
parameters, and so on). I think questions around authorization and
cached responses can sometimes best be handled deeper in the system.
If we have this functionality wrapped in a library, we can make
calls from the service software at any layer (when the context is
relevant). We still solve the re-usability problem, but in a way that
can both be more efficient and doesn't require context to bubble up
into a generic proxy layer.
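The "call it where the context is relevant" point can be sketched as follows. A generic proxy would have to parse the URL and request body to figure out who is doing what; the service handler already knows, so it can ask the shared rate limiter a precise question. The names (RateLimiter, handle_create_server) are hypothetical, and the limiter is a deliberately simple fixed-quota counter standing in for a real shared backend.

```python
# Hypothetical shared rate limiter, consumed inside a service handler
# where the user/action context is already known -- the question a
# generic proxy could only answer by inspecting the request itself.

class RateLimiter:
    """Fixed-quota limiter; a real one could use a shared network backend."""
    def __init__(self, quota):
        self._quota = quota
        self._counts = {}

    def allow(self, user, action):
        key = (user, action)
        self._counts[key] = self._counts.get(key, 0) + 1
        return self._counts[key] <= self._quota

limiter = RateLimiter(quota=2)

def handle_create_server(request, limiter):
    # Full context (user, action) is available here, deep in the service.
    if not limiter.allow(request["user"], "create_server"):
        return {"status": 413}
    return {"status": 202}

print(handle_create_server({"user": "alice"}, limiter))  # -> {'status': 202}
```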
As for scalability, the libraries provided can use any methods needed
to ensure they scale across projects. For example, if we're talking
about authentication systems, the module can manage caching, either
local or network based, and still perform any optimizations it needs
to. The library may expose a simple API to the applications, but it
can have its own scalable architecture underneath.
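For example, a shared caching module could expose nothing more than get/set while managing a local tier plus a network tier internally. This is a toy sketch under that assumption (the network tier is stubbed with a dict; a real one might be memcached or similar):

```python
# Sketch of a caching library with a simple get/set API that hides a
# two-tier (local + network) implementation underneath. The network
# store is stubbed with a dict here, purely for illustration.

class TieredCache:
    def __init__(self, network_store=None):
        self._local = {}  # fast in-process tier
        self._network = network_store if network_store is not None else {}

    def get(self, key):
        if key in self._local:            # check the local tier first
            return self._local[key]
        value = self._network.get(key)    # fall back to the network tier
        if value is not None:
            self._local[key] = value      # repopulate the local tier
        return value

    def set(self, key, value):
        self._local[key] = value
        self._network[key] = value        # write through to the network tier

cache = TieredCache()
cache.set("limits/alice", 10)
print(cache.get("limits/alice"))  # -> 10
```

The consuming service never sees the tiering; the library can change or tune it without touching the services.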
The service API software will already need the ability to scale out
horizontally, so I don't see this as a potential bottleneck. For
example, in Nova, the API servers essentially act as an HTTP<->message
queue proxy, so you can easily start up as many as needed with some
form of load balancing in front, while workers on the other side of
the queues carry out the bulk of the work. Having the service API
also handle tasks like rate limiting and auth should not be an issue.
You could even write a generic proxy layer for services that need it
based on the set of libraries we would use elsewhere in the system.
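The Nova-style pattern described above can be sketched in a few lines: the API layer is a thin HTTP<->queue shim (so you scale it by adding instances behind a load balancer), and workers do the real work off the queue. This is a single-process toy, not Nova's actual code; the function names are illustrative.

```python
# Toy sketch of the API-server-as-queue-proxy pattern: a thin front
# end that enqueues requests, and a worker that consumes them.

import queue

work_queue = queue.Queue()

def api_server(request):
    """Thin front end: would auth/rate-limit via the shared libraries,
    then hand the request off to the queue."""
    work_queue.put(request)
    return {"status": 202}  # accepted for asynchronous processing

def worker():
    """Back end: pulls a request off the queue and does the real work."""
    request = work_queue.get()
    return {"done": request["action"]}

api_server({"action": "create_server"})
print(worker())  # -> {'done': 'create_server'}
```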
Having worked on systems that took both approaches in the past, I can
say the library approach was both more efficient and more maintainable. I'm
sure we can make either work, but I want to make sure we consider
alternatives and think through the specifics a bit more first.
Thanks,
-Eric