Re: [Question #260936]: Running multiple carbon caches and rabbitmq
Question #260936 on Graphite changed:
https://answers.launchpad.net/graphite/+question/260936
Description changed to:
Hi guys,
I want to run a monitoring stack based on collectd + carbon + rabbitmq; however, I have a few questions.
I will be pulling quite a lot of metrics: about 500 hosts to monitor at a 1-minute interval (some important metrics will be collected more often, but at intervals of no less than 10 seconds), so I estimate about 200k metrics per minute. Do I need multiple carbon caches to handle this amount of traffic, or should a single carbon-cache be enough?
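For context, this is roughly the multi-cache layout I had in mind: a carbon-relay hashing metrics across two cache instances on one box. This is only a sketch based on the carbon.conf.example conventions; the instance names and port numbers are my own choices:

    # carbon.conf (excerpt) - two cache instances behind one relay
    [cache:a]
    LINE_RECEIVER_PORT = 2003
    PICKLE_RECEIVER_PORT = 2004
    CACHE_QUERY_PORT = 7002

    [cache:b]
    LINE_RECEIVER_PORT = 2103
    PICKLE_RECEIVER_PORT = 2104
    CACHE_QUERY_PORT = 7102

    [relay]
    LINE_RECEIVER_PORT = 2013
    PICKLE_RECEIVER_PORT = 2014
    RELAY_METHOD = consistent-hashing
    DESTINATIONS = 127.0.0.1:2004:a, 127.0.0.1:2104:b

    # Started as:
    #   carbon-cache.py --instance=a start
    #   carbon-cache.py --instance=b start
    #   carbon-relay.py start

Is something like that overkill for 200k metrics/minute, or is it the recommended way to go?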
My second question is about RabbitMQ. By default, carbon creates exclusive queues. I want to use RabbitMQ for zero downtime: when the server running carbon dies, RabbitMQ should keep collecting the metrics. But when carbon disconnects, the exclusive queue disappears. Why does carbon create an exclusive queue by default? I could change the code of the AMQP section in the carbon .py files, but is that a good idea?
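To illustrate what I mean: what I want is a durable, non-exclusive queue that keeps buffering messages while the consumer is down. A minimal sketch of the declaration flags I have in mind, written with pika rather than carbon's own txamqp code (the host and queue name are just my examples):

    import pika

    # Connect to the local RabbitMQ broker.
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # A durable, non-exclusive queue keeps accepting and holding messages
    # while the consumer (carbon) is down; an exclusive queue is deleted
    # as soon as its single consumer's connection closes.
    channel.queue_declare(queue="graphite_metrics",
                          durable=True,
                          exclusive=False,
                          auto_delete=False)

    connection.close()

Would patching carbon to declare its queue like that cause any problems I am not seeing?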
There were also performance problems with the txamqp plugin in earlier versions of carbon; are those problems still occurring? And is the pickle protocol still the best option for scalability?
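For completeness, this is how I understand the pickle protocol on the sending side (a minimal sketch; the host, port and metric names are placeholders):

    import pickle
    import socket
    import struct
    import time

    # Each metric is (path, (timestamp, value)); a batch is a plain list of them.
    metrics = [
        ("collectd.host1.load.shortterm", (int(time.time()), 0.42)),
        ("collectd.host1.memory.used", (int(time.time()), 123456789)),
    ]

    # Pickle the batch, prefix it with a 4-byte big-endian length header,
    # and send it to carbon's pickle receiver (port 2004 by default).
    payload = pickle.dumps(metrics, protocol=2)
    header = struct.pack("!L", len(payload))

    sock = socket.create_connection(("127.0.0.1", 2004))
    sock.sendall(header + payload)
    sock.close()

Is batching like this still the recommended way to feed carbon at this volume?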
--
You received this question notification because you are a member of
graphite-dev, which is an answer contact for Graphite.