graphite-dev team mailing list archive

[Question #237074]: Graphite over spot instances scaling

 

New question #237074 on Graphite:
https://answers.launchpad.net/graphite/+question/237074

I have over a hundred servers sending metrics to my statsd-graphite setup. A leaf in the metric subtree looks something like:

stats.dev.medusa.ip-10-0-30-61.jaguar.v4.outbox.get
stats.dev.medusa.ip-10-0-30-62.jaguar.v4.outbox.get

Most of my crawlers are AWS spot instances, which means that around 20 of them go down and come back up at random, getting a different IP address each time. As a result, the same list becomes:

stats.dev.medusa.ip-10-0-30-6.   <| subtree
stats.dev.medusa.ip-10-0-30-1.   <| subtree
stats.dev.medusa.ip-10-0-30-26.  <| subtree
stats.dev.medusa.ip-10-0-30-21.  <| subtree
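For illustration, the per-IP path segment is what churns; collapsing it into a stable token would make re-spawned instances reuse the same subtree. A minimal sketch (the `all` token and the exact regex are my assumptions, not part of the actual setup):

```python
import re

# The per-IP segment, e.g. "ip-10-0-30-61", is what changes between spawns.
IP_SEGMENT = re.compile(r"\bip-\d+-\d+-\d+-\d+\b")

def normalize(path: str) -> str:
    # Replace the volatile IP segment with a stable (hypothetical) token.
    return IP_SEGMENT.sub("all", path)

paths = [
    "stats.dev.medusa.ip-10-0-30-61.jaguar.v4.outbox.get",
    "stats.dev.medusa.ip-10-0-30-62.jaguar.v4.outbox.get",
]
print({normalize(p) for p in paths})
# Both paths collapse to a single subtree: stats.dev.medusa.all.jaguar.v4.outbox.get
```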

Assuming each IP subtree stores about 4 GB of metrics in total, 20 spot instances going down and 30 later spawning with different IP addresses means my storage suddenly puffs up by 120 GB. Moreover, this is a weekly occurrence.
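The growth figure above is just the per-subtree size times the number of freshly spawned instances:

```python
# Back-of-envelope using the numbers from the post: each IP subtree
# holds ~4 GB, and 30 new instances appear with new IP addresses.
subtree_size_gb = 4
new_instances = 30
growth_gb = subtree_size_gb * new_instances
print(growth_gb)  # 120
```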

While it would be simple and straightforward to delete the older IP subtrees, I really want to retain the metrics. I can have:
3 medusas in week 0,
23 in week 1,
15 in week 2,
40 in week 4.

What are my options? How would you tackle this?

-- 
You received this question notification because you are a member of
graphite-dev, which is an answer contact for Graphite.