graphite-dev team mailing list archive
[Question #180328]: Aggregator patterns
New question #180328 on Graphite:
https://answers.launchpad.net/graphite/+question/180328
Hi,
I'm using Graphite-0.9.9 from the tarballs. I've configured carbon with a 1-second interval and 30-day retention:
[catchall]
priority = 0
pattern = ^.*
retentions = 1s:30d
Our cluster has n machines, all sending events to Graphite with a 4-tuple path. I've configured the aggregator to further roll up events over 1-second windows:
<key0>.<key1>.<key2>.<key3> (1) = sum <key0>.<key1>.<key2>.<key3>
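To make that concrete (the metric names below are made up for illustration): if two machines each send a datapoint for the same path in the same second, e.g.

prod.web.requests.count 3 1320000000
prod.web.requests.count 5 1320000000

the rule emits an aggregate for that second that is also named prod.web.requests.count, with value 8, i.e. the aggregate shares its name with its inputs.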
I have been seeing intermittent dropped values and finally realized it's due to the way I use the patterns and how the aggregator functions. Since I'm basically trying to aggregate all 4-element paths under the same name in the result, I think what's happening is that the aggregate sometimes collides with the original value, because the original is also forwarded in receiver.py (last line):
events.metricGenerated(metric, datapoint)
This makes sense, since I think the intention was for aggregation to be used with an output pattern that distinguishes the aggregate from its inputs.
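Here is a minimal sketch of how the collision can play out at the whisper level (this is not carbon code; it assumes the whisper library is installed and uses a made-up file path). Two writes to the same metric path in the same 1-second slot leave only the later value behind:

import os
import time
import whisper

path = '/tmp/collision_demo.wsp'   # made-up path, just for this demo
if os.path.exists(path):
    os.remove(path)

whisper.create(path, [(1, 30)])    # 1-second resolution, 30 points

now = int(time.time())
whisper.update(path, 5.0, now)     # the forwarded original datapoint
whisper.update(path, 42.0, now)    # the aggregate for the same second

(start, end, step), values = whisper.fetch(path, now - 5)
print(values)                      # only 42.0 survives for that second; the 5.0 is gone

Whichever of the two arrives last wins, which matches the intermittent drops we see.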
Removing the last line in receiver.py "fixes" our issue, but I was wondering if there is a way to essentially override the default metric? I suppose I could change my aggregate pattern to something like:
global.<key0>.<key1>.<key2>.<key3> (1) = sum <key0>.<key1>.<key2>.<key3>
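(With a rule like that, and the same made-up example as above, the aggregate would come out as global.prod.web.requests.count while the per-machine datapoints keep their original names, so nothing collides; the cost is that the rollup has to be queried under a different path.)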
If I could simply enforce overwriting the original non-aggregate metric, that would work too.
Thanks,
-Jeff