Re: Ceph performance as volume & image store?
Thanks for the pointers. I'd recently read Sebastien Han's page, which is
full of good info on what you can get working and points out some of the
bumps along the way.
The Ceph mailing list link was also very interesting; I'll definitely head
over there for tuning advice.
My main purpose in posting here was to see if I could get a "yeah, I've
got that working for a 100 (or 1,000, or 10,000) node OpenStack
implementation" or, conversely, an "oh $DEITY, stay away, that ate my
world".
Not hearing either, and realizing I could repurpose the same hardware
for Swift and/or nova-volume nodes, I'll probably jump in with both
feet while I'm still calling it beta.
Thanks,
-Jon
On Tue, Jul 24, 2012 at 9:15 PM, Anne Gentle <anne@xxxxxxxxxxxxx> wrote:
> I don't know if it will confirm or correlate with your findings, but
> do take a look at this blog post with benchmarks in one of the last
> sections:
>
> http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/
>
> I'm trying to determine what parts should go into the OpenStack
> documentation, so please let me know if the post is useful to you in
> your setting and which sections are most valuable.
> Thanks,
> Anne
>
>
> On Tue, Jul 24, 2012 at 6:08 PM, Josh Durgin <josh.durgin@xxxxxxxxxxx> wrote:
>> On 07/23/2012 08:24 PM, Jonathan Proulx wrote:
>>>
>>> Hi All,
>>>
>>> I've been looking at Ceph as a storage back end. I'm running a
>>> research cluster, and while people need to use it and want it 24x7, I
>>> don't need as many nines as a commercial customer-facing service does,
>>> so I think I'm OK with the current maturity level as far as that goes.
>>> I have less of a sense, though, of how far along performance is.
>>>
>>> My OpenStack deployment is 768 cores across 64 physical hosts, which
>>> I'd like to double in the next 12 months. What it's used for varies
>>> widely and is hard to classify: some uses are hundreds of tiny nodes,
>>> others look to monopolize the biggest physical system they can get. I
>>> think most really heavy IO currently goes to our NAS servers rather
>>> than through nova-volumes, but that could change.
>>>
>>> Is anyone using Ceph at that scale (or preferably larger)? Does it
>>> keep up if you keep throwing hardware at it? My proof-of-concept Ceph
>>> cluster on crappy salvaged hardware has proved the concept to me, but
>>> has (unsurprisingly) crappy salvaged performance. I'm trying to get a
>>> sense of what performance expectations I should have given decent
>>> hardware, before I decide whether I should buy decent hardware for it...
>>>
>>> Thanks,
>>> -Jon
>>
>>
>> Hi Jon,
>>
>> You might be interested in Jim Schutt's numbers on better hardware:
>>
>> http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/7487
>>
>> You'll probably get more response on the ceph mailing list though.
>>
>> Josh
>>
>>