
openstack team mailing list archive

Re: Suggestions for shared-storage cluster file system

 

On Fri, Feb 15, 2013 at 1:08 PM, JR <botemout@xxxxxxxxx> wrote:

> Is anyone using GPFS (General Parallel File System) from IBM?  It's
> high-performing, POSIX-compliant, can do internal replication, etc.
>
> To make it work, would one simply have to modify the nova-volume (or
> Cinder) code that creates a volume group to use the corresponding GPFS
> commands?  Or are there other complexities?  Which code should I look at
> to see what's involved?
>
> JR
>
>
> On 2/15/2013 2:54 PM, Samuel Winchenbach wrote:
> > Thanks,
> >
> > I think I will go with GlusterFS.   MooseFS looks interesting, but
> > maintaining a package outside the repo/cloud archive is not something I
> > want to deal with.
> >
> > Along the same lines...  is it possible to mount a GlusterFS volume via
> > Pacemaker?  I have tried both ocf:heartbeat:Filesystem and
> > ocf:redhat:netfs.sh without much luck.  I have managed to get the
> > service started with upstart, though.
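For what it's worth, ocf:heartbeat:Filesystem is a generic agent that can usually mount anything the kernel's mount(8) understands, so a crm configuration along these lines sometimes works — the resource name, volume name ("gv0"), and mount paths below are placeholders, not details from this thread:

```shell
# Hypothetical Pacemaker primitive mounting a GlusterFS volume through the
# generic Filesystem resource agent; "gv0" and the paths are made up.
crm configure primitive fs_gluster ocf:heartbeat:Filesystem \
    params device="localhost:/gv0" directory="/mnt/gluster" \
           fstype="glusterfs" options="defaults,_netdev" \
    op start timeout=60s \
    op stop timeout=60s \
    op monitor interval=20s timeout=40s
```

The key detail is fstype="glusterfs", which makes the agent call the GlusterFS FUSE mount helper rather than treating the volume as a local block device.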
> >
> > Thanks,
> > Sam
> >
> >
> > On Fri, Feb 15, 2013 at 2:29 PM, Sébastien Han <han.sebastien@xxxxxxxxx
> > <mailto:han.sebastien@xxxxxxxxx>> wrote:
> >
> >     Hi,
> >
> >
> >         Important: Mount the CephFS filesystem on the client machine,
> >         not the cluster machine.
> >
> >
> >     It's just like NFS: if you mount an NFS export on the NFS server
> >     itself, you get kernel locks.
> >
> >     Unfortunately, even though I love Ceph far more than the others, I
> >     won't go with CephFS, at least not now. But if you are in a hurry
> >     and looking for a DFS, then GlusterFS seems to be a good candidate.
> >     NFS works pretty well too.
> >
> >     Cheers.
> >
> >     --
> >     Regards,
> >     Sébastien Han.
> >
> >
> >     On Fri, Feb 15, 2013 at 4:49 PM, JuanFra Rodriguez Cardoso
> >     <juanfra.rodriguez.cardoso@xxxxxxxxx
> >     <mailto:juanfra.rodriguez.cardoso@xxxxxxxxx>> wrote:
> >
> >         Another one:
> >
> >          - MooseFS
> >         (http://docs.openstack.org/trunk/openstack-compute/admin/content/installing-moosefs-as-backend.html)
> >          - GlusterFS
> >          - Ceph
> >          - Lustre
> >
> >         Regards,
> >         JuanFra
> >
> >
> >         2013/2/15 Samuel Winchenbach <swinchen@xxxxxxxxx
> >         <mailto:swinchen@xxxxxxxxx>>
> >
> >             Hi All,
> >
> >             Can anyone give me a recommendation for a good
> >             shared-storage cluster filesystem?   I am running
> >             kvm-libvirt and would like to enable live migration.
> >
> >             I have a number of hosts (up to 16) each with 2xTB drives.
> >              These hosts are also my compute/network/controller nodes.
> >
> >             The three I am considering are:
> >
> >             GlusterFS - I have the most experience with this, and it
> >             seems the easiest.
> >
> >             CephFS/RADOS - Interesting because glance supports the rbd
> >             backend.  Slightly worried, though, because of "Important:
> >             Mount the CephFS filesystem on the client machine, not the
> >             cluster machine." (I wish it said why...) and "CephFS is not
> >             quite as stable as the block device and the object storage
> >             gateway."
> >
> >             Lustre - A little hesitant now that Oracle is involved with it.
> >
> >
> >             If anyone has any advice, or can point out another that I
> >             should consider it would be greatly appreciated.
> >
> >             Thanks!
> >
> >             Sam
> >
> >
> >             _______________________________________________
> >             Mailing list: https://launchpad.net/~openstack
> >             Post to     : openstack@xxxxxxxxxxxxxxxxxxx
> >             <mailto:openstack@xxxxxxxxxxxxxxxxxxx>
> >             Unsubscribe : https://launchpad.net/~openstack
> >             More help   : https://help.launchpad.net/ListHelp
> >
> >
> >
>
I don't know about folks who might have their own implementations, but
currently there is nothing in Cinder.  The closest thing to look at to get
an idea of the driver is the pending Gluster work:
https://review.openstack.org/#/c/21342/

John
