[Question #193210]: configuration suggestion for 400k metrics
New question #193210 on Graphite:
https://answers.launchpad.net/graphite/+question/193210
First of all, thank you for creating Graphite; it offers a huge amount of flexibility and makes plotting time-based graphs easy.
Having worked with MRTG, RRD, and Cacti before, I admit I feel more relaxed working with Graphite.
Recently, the number of metrics we feed to a single carbon-cache machine has almost tripled, and it will grow further in the near future. I have noticed that the graphs have started breaking up: the lines no longer look as smooth as they did when I had only 100k metrics.
Currently, the spec of the carbon-cache (0.9.9) machine is:
- Intel Xeon 2.6 GHz, 24 cores
- 24 GB RAM
- 1 x 1.1 TB 7200 RPM SATA
I just got another server with the same spec that I can use alongside the first box, and I hope that once I add it, it will help share the load and the graphs will look nice again.
Question:
1. What would be a good setup for these two servers? I am thinking of running carbon-relay + carbon-cache on the existing box, and one or two carbon-cache instances on the new host (a rough carbon.conf sketch of what I mean appears further below).
2. How fast (in metrics per second) can the carbon-relay listener receive data?
Right now, the poller uses GNU parallel, running every 40 seconds, to collect metrics from nearly 1,000 machines, producing almost 400k metrics that are fed to carbon-cache in one batch. Is injecting 400k metrics into carbon-cache in a single batch considered bad practice? Should I break it into smaller chunks and submit them chunk by chunk, along the lines of the sketch just below?
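To clarify what I mean by chunking in question 2, here is a minimal Python sketch, assuming the plaintext protocol ("metric.path value timestamp\n") and made-up host, port, and chunk-size values:

  import socket
  import time

  RELAY_HOST = 'box1.example.com'   # placeholder: the box running carbon-relay
  RELAY_PORT = 2013                 # placeholder: the relay's LINE_RECEIVER_PORT
  CHUNK_SIZE = 5000                 # lines per send; a value I would have to tune

  def send_metrics(samples):
      # samples: iterable of (metric_path, value) pairs collected by the poller
      now = int(time.time())
      sock = socket.create_connection((RELAY_HOST, RELAY_PORT))
      try:
          batch = []
          for path, value in samples:
              batch.append('%s %s %d\n' % (path, value, now))
              if len(batch) >= CHUNK_SIZE:
                  # flush this chunk instead of accumulating the whole 400k batch
                  sock.sendall(''.join(batch).encode('ascii'))
                  batch = []
          if batch:
              sock.sendall(''.join(batch).encode('ascii'))
      finally:
          sock.close()

I have also read that the pickle listener (port 2014 on the relay by default) is intended for bulk submission; would that be a better fit here than chunked plaintext?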
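Going back to question 1, here is roughly the carbon.conf layout I have in mind. The hostnames, instance names, and port numbers are just placeholders for my setup, and I am not certain that 0.9.9 reads per-instance [cache:x] sections exactly like this, so please correct anything that is wrong:

  # box 1 (existing): carbon-relay + one carbon-cache
  [cache]
  LINE_RECEIVER_PORT = 2003
  PICKLE_RECEIVER_PORT = 2004
  CACHE_QUERY_PORT = 7002

  [relay]
  LINE_RECEIVER_PORT = 2013
  PICKLE_RECEIVER_PORT = 2014
  RELAY_METHOD = consistent-hashing
  # pickle ports of every cache instance, local and on the new box
  DESTINATIONS = 127.0.0.1:2004:a, box2.example.com:2004:a, box2.example.com:2104:b

  # box 2 (new): two carbon-cache instances, started with --instance=a / --instance=b
  [cache:a]
  LINE_RECEIVER_PORT = 2003
  PICKLE_RECEIVER_PORT = 2004
  CACHE_QUERY_PORT = 7002

  [cache:b]
  LINE_RECEIVER_PORT = 2103
  PICKLE_RECEIVER_PORT = 2104
  CACHE_QUERY_PORT = 7102

I assume the poller would then send to port 2013 on box 1 instead of 2003, and that graphite-web's CARBONLINK_HOSTS would have to list the cache instances, but I may be missing something.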
- Patrick