[Question #119934]: Aggregating metrics?

New question #119934 on Graphite:
https://answers.launchpad.net/graphite/+question/119934

Hello,

we are currently considering using Graphite to build reports on the many metrics in our distributed platform, which is made of many workers running on many different hosts (currently about 30). We want to record atomic events such as user logins, crawls, etc., and build graphs that will allow us to spot errors or problems in our services.

The architecture we came up with is the following:
* our workers will log events using syslog, in a format very close to Graphite's own message format: timestamp + metric + value
* each syslogd is configured to forward those messages to a single syslog server, which maintains a FIFO (named pipe) that a feeder script reads, forwarding each metric directly to the carbon listener (see the sketch after this list)
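
For illustration, here is a minimal sketch of the kind of feeder script we have in mind; the FIFO path, the carbon address, and the reordering of our fields into carbon's "metric value timestamp" plaintext format are all assumptions:

    import socket

    FIFO_PATH = "/var/run/metrics.fifo"   # hypothetical path that syslogd writes to
    CARBON_ADDR = ("127.0.0.1", 2003)     # carbon's plaintext listener (default port)

    def main():
        sock = socket.create_connection(CARBON_ADDR)
        while True:
            # Opening the FIFO blocks until a writer (syslogd) attaches;
            # reopening after EOF lets the writer restart without killing us.
            with open(FIFO_PATH) as fifo:
                for line in fifo:
                    fields = line.split()
                    if len(fields) != 3:
                        continue  # skip malformed lines
                    timestamp, metric, value = fields
                    # carbon's plaintext protocol: "metric value timestamp"
                    sock.sendall(("%s %s %s\n" % (metric, value, timestamp)).encode())

    if __name__ == "__main__":
        main()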

The problem with that architecture is that the same events can occur at the same time on different hosts, so counters should be accumulated up to the minimum data granularity (one minute) before being fed to Graphite.

Looking at carbon's sources, metrics seem to be overwritten rather than added to the previous value each time they are pushed. Is there any easy way (maybe in storage-schemas.conf?) to make particular keys accumulate results (or, even better, apply functions similar to those available for graphs in the webapp) according to the storage schema's granularity?
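
To make the problem concrete, here is what we expect happens today if two hosts report the same counter for the same minute (the metric name and carbon address are made up):

    import socket
    import time

    # carbon's plaintext listener; the address is an assumption
    sock = socket.create_connection(("127.0.0.1", 2003))
    ts = int(time.time()) // 60 * 60  # align both datapoints to the same minute bucket

    # host A saw 3 logins and host B saw 5 during the same minute:
    sock.sendall(("site.logins 3 %d\n" % ts).encode())
    sock.sendall(("site.logins 5 %d\n" % ts).encode())
    sock.close()

    # whisper keeps 5 for that minute (last write wins) instead of the sum, 8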

The other solution would be to make the feeder script accumulate the data for one minute and then post the resulting counters in one shot to carbon's listener. I'm a bit worried about scalability, though, because of the burst of activity this would create on the server every minute.
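
A rough sketch of that aggregating feeder, with the same assumed FIFO path and carbon address as above (a real version would flush on a timer rather than only when a line arrives, and could spread its sends out to soften the burst):

    import socket
    import time
    from collections import defaultdict

    FIFO_PATH = "/var/run/metrics.fifo"   # hypothetical path that syslogd writes to
    CARBON_ADDR = ("127.0.0.1", 2003)
    FLUSH_INTERVAL = 60  # match the minimum whisper granularity (one minute)

    def flush(counters, sock):
        now = int(time.time())
        for metric, total in counters.items():
            sock.sendall(("%s %s %d\n" % (metric, total, now)).encode())
        counters.clear()

    def main():
        sock = socket.create_connection(CARBON_ADDR)
        counters = defaultdict(float)
        next_flush = time.time() + FLUSH_INTERVAL
        with open(FIFO_PATH) as fifo:
            for line in fifo:
                fields = line.split()
                if len(fields) != 3:
                    continue  # skip malformed lines
                _timestamp, metric, value = fields
                counters[metric] += float(value)  # accumulate instead of overwrite
                if time.time() >= next_flush:
                    flush(counters, sock)
                    next_flush = time.time() + FLUSH_INTERVAL

    if __name__ == "__main__":
        main()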

Any other ideas?

Thanks in advance

Erwan

-- 
You received this question notification because you are a member of
graphite-dev, which is an answer contact for Graphite.