graphite-dev team mailing list archive
Message #01454
Re: [Question #170794]: carbon-cache.py at its limit?
Question #170794 on Graphite changed:
https://answers.launchpad.net/graphite/+question/170794
Cody Stevens posted a new comment:
I didn't see as much performance gain as I had hoped from switching to
RAID 10 and ext4. The real problem is that I am badly overallocated as
far as metrics are concerned. I scaled back what I could live without
for the moment, and the two servers are still at nearly full %util almost
constantly. I left one server as a RAID 6 xfs filesystem and the other is
now RAID 10 ext4, on identical hardware. Currently I have about 77k
metrics/min going to the RAID 6 machine and about 91k/min going to the
RAID 10 box. Fortunately, I have some new servers on the way with 15k RPM
drives, which I will configure with RAID 10 and ext4; that should help
immensely.

During my tweaking I noticed that if I switched metrics over to a
different cache server and creates were necessary, the cache would
immediately hit its limit and never recover to a "normal" state
afterwards. At this point I am going to chalk that up to being
overextended on my resources.

One other side effect I noticed involves a script I have. It runs on both
machines and basically creates symbolic links to metrics in one tree so
that we can have another tree that gives us a different way to view the
metrics. On the machine with the xfs filesystem it runs in less than a
minute; on the ext4 box it runs for 15 minutes or more. Do you think this
is due to the journaling differences between the two filesystem types?
--
You received this question notification because you are a member of
graphite-dev, which is an answer contact for Graphite.