
graphite-dev team mailing list archive

Re: request for comments - using amqp and replacing carbon-relay

 


--- On Fri, 2/12/10, Chris Davis <chrismd@xxxxxxxxx> wrote:

> From: Chris Davis <chrismd@xxxxxxxxx>
> Subject: Re: [Graphite-dev] request for comments - using amqp and replacing carbon-relay
> To: "Brinley, Chris" <Chris.Brinley@xxxxxxxxxx>
> Cc: "graphite-dev" <graphite-dev@xxxxxxxxxxxxxxxxxxx>
> Date: Friday, February 12, 2010, 8:57 PM
> We seem to be going down very similar paths. I was actually thinking of
> having all of Graphite's APIs (rendering graphs, fetching data, searching
> the hierarchy, etc.) accessible via AMQP, maybe using something like
> Thrift. It would also be a great way to add administrative controls to
> carbon (graceful shutdown, start/stop listeners, clear cache, reload
> config, etc.). But that's a ways down the road still.
> 
> -Chris
> 


One thing I'm working on is supplementing the Cairo image generation
with something more interactive called Open Flash Chart. To do this
I'll need to create an interface to fetch data from graphite (cache and
whisper file) in JSON format to feed to the OFC widget. A simple REST
interface specifying the metric and start/stop times is my plan.
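
Roughly the shape I have in mind (just a sketch; the view name, URL
parameters, and the fetch_data() helper are placeholders, not existing
graphite code):

    import json
    from django.http import HttpResponse

    # GET /data/?target=<metric>&from=<ts>&until=<ts>
    def fetch_json(request):
        metric = request.GET['target']
        start = int(request.GET['from'])
        end = int(request.GET['until'])
        # fetch_data() stands in for whatever reads the cache and the
        # whisper file and merges the two
        timestamps, values = fetch_data(metric, start, end)
        payload = {'metric': metric,
                   'points': list(zip(timestamps, values))}
        return HttpResponse(json.dumps(payload),
                            content_type='application/json')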

Open Flash Chart 2 is at http://teethgrinder.co.uk/open-flash-chart-2/
for those interested.

-allan



> On Fri, Feb 12, 2010 at 4:11 PM, Brinley, Chris
> <Chris.Brinley@xxxxxxxxxx> wrote:
>
> Internally we are working towards replacing the links between the
> persistence layer and the routing layer with AMQP. The idea here is
> that certain sets of servers filling a logical storage role, such as
> "dev metrics" or "business metrics", would be tied to a message queue
> of the same topic. Adding and removing capacity in each logical domain
> then becomes a matter of connecting to the queue. Between the storage
> layer and the presentation layer I am working on an implementation
> along the same lines: the presentation layer requests data about
> metrics X, Y, Z from the data provider queue. I have some concerns
> about managing response times here, but that's the 20,000-foot view.
>
> This also starts to make graphite more service oriented, in that
> arbitrary apps can consume data about metrics.
>
> Chris, you're familiar with the infrastructure, and I agree that
> sending directly to the storage layer via AMQP would be optimal. That's
> probably not going to happen in our case, so I think adding it in
> between the routing and storage layers now and sidestepping carbon
> entirely in the future may be a more practical path for us.
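>
> (As a sketch of what "connecting to the queue" means for a storage
> node, assuming RabbitMQ and the pika client; the exchange, queue, and
> topic names here are made up for illustration:)
>
>     import pika
>
>     conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
>     ch = conn.channel()
>     ch.exchange_declare(exchange='metrics', exchange_type='topic')
>
>     # A "business metrics" storage node binds a queue to its topic;
>     # adding capacity is just another consumer making the same bind.
>     q = ch.queue_declare(queue='business-metrics').method.queue
>     ch.queue_bind(queue=q, exchange='metrics', routing_key='business.#')
>
>     def on_message(channel, method, properties, body):
>         # hand off to the persistence layer here
>         print(method.routing_key, body)
>
>     ch.basic_consume(queue=q, on_message_callback=on_message,
>                      auto_ack=True)
>     ch.start_consuming()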
>
> Chris Brinley
>
> From: graphite-dev-bounces+chris.brinley=orbitz.com@xxxxxxxxxxxxxxxxxxx
> [graphite-dev-bounces+chris.brinley=orbitz.com@xxxxxxxxxxxxxxxxxxx]
> On Behalf Of Chris Davis [chrismd@xxxxxxxxx]
> Sent: Friday, February 12, 2010 4:00 PM
> To: graphite-dev
> Subject: [Graphite-dev] request for comments - using amqp and replacing
> carbon-relay
>
> Hey everyone, recently we have gotten some really great community
> contributions, one of which adds support for the AMQP messaging
> protocol to carbon. I have started migrating some of my systems to
> using this and I think it is a great fit for graphite. If you aren't
> familiar with it already, I highly recommend reading
> http://blogs.digitar.com/jjww/2009/01/rabbits-and-warrens/ for a brief
> introduction.
>
> One area where I think AMQP can be especially useful is with a
> clustered graphite installation. Most of you are probably not familiar
> with Graphite's clustering capabilities because I have not documented
> them at all (sorry, hope to change that soon). But essentially, you
> just set up N different independent graphite servers and then configure
> the webapps to know about one another. They then share data in a manner
> transparent to the user, making each server look like one big graphite
> installation instead of several independent ones. The tricky part is
> partitioning your metrics across the servers. Thus far I've solved this
> problem with a daemon called carbon-relay.py, which basically acts as
> an application-level load balancer for your metrics. You are supposed
> to send all of your data to carbon-relay, which then looks up the rules
> you've given it to decide which carbon-cache(s) to relay each metric
> on to.
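>
> (In pseudocode, the relay's job amounts to something like the toy
> sketch below; this is illustrative only, not carbon's actual code or
> its rules-file syntax.)
>
>     import re
>
>     # Toy model of relay rules: the first regex that matches a metric
>     # name decides which carbon-cache backend(s) receive it.
>     RULES = [
>         (re.compile(r'^dev\.'),      ['cache-a.example.com:2004']),
>         (re.compile(r'^business\.'), ['cache-b.example.com:2004']),
>         (re.compile(r''),            ['cache-a.example.com:2004']),
>     ]
>
>     def route(metric):
>         for pattern, backends in RULES:
>             if pattern.search(metric):
>                 return backends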
>
> With AMQP, there seems to be a much simpler way to solve this problem.
> Topic exchanges use a dot-delimited naming structure that supports
> pattern matching very similar to graphite's. Basically, you could just
> publish messages containing a value and a timestamp and use the metric
> name as the routing key. Then each carbon-cache can consume from the
> exchange with one or more binding patterns. For the simplest case of
> having only one server, the binding pattern would simply be "#" (which
> is the same as using a fanout exchange). For more complex cases you
> could control what data goes to what server by configuring each
> carbon-cache's binding patterns. This would effectively replace
> carbon-relay and, I believe, solve the problem in a more robust way.
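>
> (For illustration, a minimal publisher along these lines, using the
> pika client; the exchange and metric names are made up, and the exact
> message format is still up for discussion:)
>
>     import pika, time
>
>     conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
>     ch = conn.channel()
>     ch.exchange_declare(exchange='graphite', exchange_type='topic')
>
>     # Metric name as the routing key; value and timestamp in the body.
>     ch.basic_publish(exchange='graphite',
>                      routing_key='servers.web01.loadavg',
>                      body='1.42 %d' % int(time.time()))
>     conn.close()
>
> A carbon-cache that should own only the servers subtree would then bind
> its queue with the pattern "servers.#" instead of "#".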
>
> This is a bit different from the way it currently works in trunk, so I
> wanted to run it by everyone and see what your thoughts are, especially
> if you are already using AMQP or carbon-relay. I am currently in the
> process of testing this configuration, and if it works well I will try
> it in my production system. If that goes well, I would like to include
> the new behavior in this month's release. So please send me any
> comments, questions, concerns, etc.
>
> If you don't plan on using AMQP, that's fine too; the old interface is
> not going away.
>
> -Chris
>



