
graphite-dev team mailing list archive

Re: [Question #219215]: Large install recommendations


Question #219215 on Graphite changed:
https://answers.launchpad.net/graphite/+question/219215

Ben Whaley proposed the following answer:
We are currently handling ~270k metrics/minute spread across 6 carbon-
caches, each a single EC2 m1.large instance. Since storage on EC2 is
notoriously slow, I set MAX_UPDATES_PER_SECOND=30, which significantly
reduced load. Whisper files are on ephemeral disk in RAID 0. I'm still
seeing some high-load issues, but it's tolerable. Even with 7 instances
(1 relay + 6 caches) it's less than 2/3 the cost of an EC2 SSD
instance.
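
For anyone wanting to try the same throttle: MAX_UPDATES_PER_SECOND is a
real carbon.conf setting in the [cache] section. A minimal fragment
(the 30 is the value from this post; treat anything else as an
illustrative default, not a recommendation):

```ini
[cache]
# Cap whisper file writes per second so slow EC2 disks aren't saturated;
# points queue in carbon-cache's memory cache between flushes.
MAX_UPDATES_PER_SECOND = 30
# Let the in-memory cache grow unbounded while writes are throttled
# (watch memory usage if you do this).
MAX_CACHE_SIZE = inf
```

The trade-off is that a lower write rate means more datapoints held in
RAM between flushes, so keep an eye on carbon's cache-size metrics.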

Still, using consistent hashing is painful when I need to scale out. I'm
not looking forward to the next (big) round of metrics I'm anticipating
since I'll need to move loads of whisper files around. I've found and
used whisper-clean.py but I need to build something more complete. From
what I've read, Ceres as a backend doesn't sound very promising.
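
To make the scale-out pain concrete: with consistent hashing, adding one
node remaps roughly 1/N of the keyspace, and every remapped metric means
a whisper file that has to be moved. This is a hypothetical sketch of
the idea (carbon's own ring lives in carbon.hashing and differs in
detail; the node names and metric names here are made up):

```python
import hashlib
from bisect import bisect

def build_ring(nodes, replicas=100):
    # Place many virtual points per node on a hash ring, sorted by hash.
    points = []
    for node in nodes:
        for i in range(replicas):
            h = int(hashlib.md5(f"{node}:{i}".encode()).hexdigest(), 16)
            points.append((h, node))
    points.sort()
    return points

def owner(points, key):
    # A key belongs to the first ring point at or after its hash (wrapping).
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    idx = bisect(points, (h,)) % len(points)
    return points[idx][1]

caches = [f"cache{i}" for i in range(1, 7)]
before = build_ring(caches)
after = build_ring(caches + ["cache7"])  # scale out by one node

metrics = [f"servers.host{i}.cpu" for i in range(10000)]
moved = sum(owner(before, m) != owner(after, m) for m in metrics)
print(f"{moved / len(metrics):.0%} of metrics change owner")
```

Going from 6 to 7 caches, roughly 1/7 (~14%) of metrics land on a new
owner, which is far better than naive modulo hashing (which would remap
almost everything) but still means shuffling a lot of whisper files.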

Or maybe I should just bite the bullet and move to an SSD instance.

-- 
You received this question notification because you are a member of
graphite-dev, which is an answer contact for Graphite.
