openstack team mailing list archive - Message #21051
Re: Suggestions for shared-storage cluster file system
We had used Gluster only for small deployments, but lately we have changed
our minds. Basically, we have bet on Gluster for 2013 because of:
- 10GbE everywhere, and Gluster must run on 10GbE (or InfiniBand)
- The 3.3 release fixes some issues when locking big files: granular locking
- libgfapi reborn, no more FUSE overhead
- QEMU 1.3 comes with a native GlusterFS block driver:
http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/
- Success cases everywhere
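As an illustration of the native (libgfapi) access path mentioned above, a minimal sketch; the server name "gluster-server" and volume name "gvol" are made up, and exact URI syntax can vary by QEMU version:

```shell
# Create a qcow2 image directly on a Gluster volume via libgfapi
# (no FUSE mount involved)
qemu-img create -f qcow2 gluster://gluster-server/gvol/vm01.qcow2 10G

# Boot a guest straight from the Gluster-backed image
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=gluster://gluster-server/gvol/vm01.qcow2,if=virtio
```

Because QEMU talks to the volume through libgfapi, the I/O path skips the FUSE layer entirely, which is where the overhead mentioned above came from.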
In our tests, NetApp and Nexenta outperform Gluster, but we can now live
with this performance penalty because the cost per bit and horizontal
scalability are really good.
Cheers
Diego
--
Diego Parrilla
CEO, StackOps - http://www.stackops.com/
diego.parrilla@xxxxxxxxxxxx | +34 649 94 43 29 | skype:diegoparrilla
On Tue, Feb 19, 2013 at 9:57 PM, Razique Mahroua
<razique.mahroua@xxxxxxxxx> wrote:
> Hey Marco,
> have you been able to run some performance tests on your Gluster cluster?
>
> Thanks :)
>
> Razique Mahroua - Nuage & Co
> razique.mahroua@xxxxxxxxx
> Tel: +33 9 72 37 94 15
>
>
> On 18 Feb 2013, at 14:20, Marco CONSONNI <mcocmo62@xxxxxxxxx> wrote:
>
> Hello Sam,
>
> I've tried two of them: NFS and Gluster.
>
> I had some problems with the former (migration didn't work properly) and
> no problems with the latter.
> I vote for Gluster.
>
> Hope it helps,
> Marco.
>
>
>
> On Fri, Feb 15, 2013 at 4:40 PM, Samuel Winchenbach <swinchen@xxxxxxxxx> wrote:
>
>> Hi All,
>>
>> Can anyone give me a recommendation for a good shared-storage cluster
>> filesystem? I am running kvm-libvirt and would like to enable live
>> migration.
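The live-migration setup asked about here can be sketched roughly as follows; host names, the domain name "vm01", and the mount path are hypothetical, and this assumes the image directory is shared and mounted at the same path on both hosts:

```shell
# Both hosts mount the same shared volume at the same path, e.g.:
#   mount -t glusterfs gluster-server:/gvol /var/lib/libvirt/images
# With the disk visible from both sides, a running guest can be moved
# without copying its image:
virsh migrate --live vm01 qemu+ssh://dest-host/system
```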
>>
>> I have a number of hosts (up to 16) each with 2xTB drives. These hosts
>> are also my compute/network/controller nodes.
>>
>> The three I am considering are:
>>
>> GlusterFS - I have the most experience with this, and it seems the
>> easiest.
>>
>> CephFS/RADOS - Interesting because Glance supports the rbd backend.
>> Slightly worried, though, by this note: "Important: Mount the CephFS
>> filesystem on the client machine, not the cluster machine."
>> (I wish it said why...) and by "CephFS is not quite as stable as the
>> block device and the object storage gateway."
>>
>> Lustre - A little hesitant now that Oracle is involved with it.
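For the Glance rbd backend mentioned above, the wiring is roughly the following fragment of glance-api.conf (a sketch for the Glance releases of that era; the pool name "images" and user "glance" are made-up examples):

```ini
# glance-api.conf (fragment): store images in a Ceph RADOS pool
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
```

Note this uses only RADOS block storage, so it sidesteps the CephFS stability caveat quoted above.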
>>
>> If anyone has any advice, or can point out another that I should consider
>> it would be greatly appreciated.
>>
>> Thanks!
>>
>> Sam
>>
>>
>> _______________________________________________
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@xxxxxxxxxxxxxxxxxxx
>> Unsubscribe : https://launchpad.net/~openstack
>> More help : https://help.launchpad.net/ListHelp
>>
>>