
graphite-dev team mailing list archive

Re: [Question #59255]: Carbon and the web interface, multiple-server setup

 

Question #59255 on Graphite changed:
https://answers.launchpad.net/graphite/+question/59255

chrismd posted a new comment:
Don't worry, you're not being lame; you've actually got impeccable
timing! I just made a post on the Graphite wiki about this earlier
today: see http://graphite.wikidot.com/

Essentially, your assessment of how using multiple backends works is
correct (with the current stable release). As mentioned in the wiki
post, I am working on a federated storage model that will let you split
your data across multiple machines and have them share data at the
application level rather than at the filesystem level. One thing I would
like to clarify, though: you will still need to decide which servers
will be storing which metrics. You can't send data points to just any
server; they have to go to the server(s) that are supposed to have all
the data for that metric. This will be facilitated by a new daemon I'm
writing as part of the upcoming release called carbon-data-router.py.
This will actually be the daemon your clients send all of their data to,
and it will route the data points to the appropriate backends based on
your configuration.
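
To make the client side concrete, here is a minimal sketch in Python
(the hostname is made up, and I'm assuming the router will accept the
same plaintext protocol that carbon accepts today):

    import socket
    import time

    # Hypothetical router address; 2003 is carbon's default plaintext
    # port. With carbon-data-router.py in place, clients would point
    # here instead of at a specific backend, and the router would
    # forward each data point to whichever backend is configured to
    # hold that metric.
    CARBON_HOST = 'carbon-router.example.com'
    CARBON_PORT = 2003

    def send_metric(path, value, timestamp=None):
        # Carbon's plaintext protocol: "<metric path> <value> <timestamp>\n"
        if timestamp is None:
            timestamp = int(time.time())
        line = '%s %s %d\n' % (path, value, timestamp)
        sock = socket.socket()
        sock.connect((CARBON_HOST, CARBON_PORT))
        sock.sendall(line.encode('ascii'))
        sock.close()

    send_metric('servers.www01.load', 1.42)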

Regarding MEMCACHE_HOSTS, that is actually passed right along to Django,
which does the caching for the webapp. The cached data does get spread
across all the hosts listed, but the list must be the same on all
servers, and in the same order, because cache keys are mapped to hosts
by their position in the list.
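
Concretely, every webapp's local_settings.py would carry the identical
setting (the hostnames below are just placeholders):

    # Same list, same order, on every webapp server -- the memcached
    # client picks a host for each cache key based on the key's hash
    # and the host's position in this list, so a mismatch means the
    # servers will look for cached data in different places.
    MEMCACHE_HOSTS = ['cache01:11211', 'cache02:11211', 'cache03:11211']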

As for the REMOTE_RENDERING feature, I consider this one to be of
questionable value for most people. I implemented it to solve a specific
problem at Orbitz caused by restrictions on the hardware we had
available. Here is how it works:

Say you have server A running carbon, so it has all the actual data. But
perhaps this server is an old Sun machine with lots of cores but a
really low clock speed and a shared FPU, so it is really, really slow at
rendering (this was the case at Orbitz). Now imagine you have server B,
a fast x86 Linux machine that renders very quickly but for some silly
reason isn't allowed to be connected to the fast storage array that the
Sun machine is (ahem, Orbitz). This is where REMOTE_RENDERING is useful.
On server A you put REMOTE_RENDERING = ["serverB"], and this causes
server A to proxy the requests it receives on to serverB instead of
rendering them locally. The key thing is that the proxied requests have
the data bundled along with them, so serverB does not actually need
access to the data itself. This may sound weird, and it is. There is
really no *good* reason to be stuck in this situation; what should have
happened was simply connecting the fast x86 Linux machine to the fast
disk array, but that was impossible for political reasons.

Note that when REMOTE_RENDERING is in use, *all* rendering requests get
proxied; graphs are only rendered locally if the remote servers become
unavailable or the remote request times out. While it might be useful to
modify this functionality to delegate only some requests to the remote
servers (to spread out load), I have never actually run into a situation
in which a fast modern machine couldn't keep up with the rendering
(assuming memcached is in use).

That said, REMOTE_RENDERING will become pretty much useless once
federated storage is finished because you will be able to scale both the
frontend and the backend horizontally by adding servers.
