openstack team mailing list archive
Message #23450
Turnstile updates
Greetings. I've been working on some scalability enhancements to
Turnstile[1], and I believe it's about time to announce that work here.
I'm hoping people here will find it useful, and maybe even help with
the final debugging :)
Turnstile is a distributed rate-limiting middleware, which replaces
Nova's built-in RateLimitingMiddleware with a version that can apply
rate limiting across multiple nova-api nodes. (Turnstile itself is
actually more general, and can be used for rate limiting with any WSGI
application.) It uses an external Redis server for storing data about
requests. Using Turnstile with Nova requires the nova_limits[2]
package. (Another such package exists for using Turnstile with Keystone;
I'm hoping the developer of that package will chime in with the
appropriate link, since I've forgotten it…).
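To illustrate the general pattern of rate-limiting WSGI middleware backed by a shared counter store, here is a minimal sketch. This is not Turnstile's actual API or algorithm; the class names are hypothetical, and an in-memory dict stands in for the Redis server (whose INCR/EXPIRE semantics it loosely mimics) so the example is self-contained:

```python
# Hypothetical sketch of rate-limiting WSGI middleware; Turnstile's
# real interface and algorithms differ. FakeCounterStore is an
# in-memory stand-in for Redis (fixed-window INCR/EXPIRE semantics).
import time


class FakeCounterStore:
    """In-memory stand-in for a shared Redis counter store."""

    def __init__(self):
        self._data = {}  # key -> (count, window expiry timestamp)

    def incr(self, key, window):
        """Increment the counter for key, resetting expired windows."""
        now = time.time()
        count, expires = self._data.get(key, (0, now + window))
        if now >= expires:
            count, expires = 0, now + window
        count += 1
        self._data[key] = (count, expires)
        return count


class RateLimitMiddleware:
    """Reject requests beyond `limit` per `window` seconds per client."""

    def __init__(self, app, store, limit=10, window=60):
        self.app = app
        self.store = store
        self.limit = limit
        self.window = window

    def __call__(self, environ, start_response):
        # Key requests by client address; a real deployment would
        # typically key by authenticated tenant or user instead.
        key = 'limit:%s' % environ.get('REMOTE_ADDR', 'unknown')
        if self.store.incr(key, self.window) > self.limit:
            start_response('429 Too Many Requests',
                           [('Content-Type', 'text/plain')])
            return [b'Rate limit exceeded\n']
        return self.app(environ, start_response)
```

Because the counter store is external to the middleware, the same check works unchanged whether one nova-api node or many are handling requests; that shared-store property is what makes the distributed case possible.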
My recent work has focused on enhancing Turnstile's scalability; in
particular, I've been working on sharding the ephemeral request data
across multiple Redis servers. To do that, it will be necessary to use
a Redis proxy called Nutcracker[3]. Turnstile is not 100% compatible
with Nutcracker, but fortunately the incompatible bits can be worked
around easily, and so NutJob[4] was created. The final piece of the
scalability work I have is Subway[5], which allows the rate limit
configuration to be mirrored across multiple Redis servers. (Why not
use Redis's master/slave? Well, Subway also forwards the messages that
are used to notify Turnstile when the limits configuration needs
to be reloaded.)
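For readers unfamiliar with Nutcracker, a pool definition looks roughly like the following. This is only an illustrative sketch; the pool name and server addresses are placeholders, not taken from any Turnstile deployment:

```yaml
# Illustrative nutcracker (twemproxy) pool; names and addresses are
# placeholders. Requests are sharded across the listed Redis servers.
turnstile_pool:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  redis: true
  servers:
    - 127.0.0.1:6379:1
    - 127.0.0.1:6380:1
```

With ketama distribution, keys are spread across the servers by consistent hashing, which is what lets the ephemeral request data scale horizontally while each key still lands on a predictable server.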
Here's hoping y'all find these projects useful!
[1] https://github.com/klmitch/turnstile
[2] https://github.com/klmitch/nova_limits
[3] Also known as twemproxy; https://github.com/twitter/twemproxy
[4] Yeah, I know, bad pun; https://github.com/klmitch/nutjob
[5] Because it carries rate limit configuration from Turnstile to
Turnstile: https://github.com/klmitch/subway
--
Kevin L. Mitchell <kevin.mitchell@xxxxxxxxxxxxx>