
openstack team mailing list archive

Re: two or more NFS / gluster mounts

 

I may of course be entirely wrong :) but it would be cool if this
is achievable / on the roadmap.

At the very least, if this is not already in discussion, I'd raise it on
Launchpad as a potential feature.




On Thu, Dec 20, 2012 at 3:19 PM, Andrew Holway <a.holway@xxxxxxxxxxxx> wrote:

> Ah shame. You can specify different storage domains in oVirt.
>
> On Dec 20, 2012, at 4:16 PM, David Busby wrote:
>
> > Hi Andrew,
> >
> > An interesting idea, but I'm not aware of nova supporting storage
> > affinity in any way; it does support host affinity, IIRC. As a kludge
> > you could have some nova compute nodes using your "slow mount" and
> > reserve the "fast mount" nodes as required, perhaps even defining
> > separate zones for deployment?
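The zone kludge above could be sketched roughly as follows. This is a
hedged sketch, not a confirmed recipe: it assumes the era's
`node_availability_zone` nova.conf option and the `nova` CLI, and the
zone names `fast` and `slow` are made up; option and flag spellings may
differ between OpenStack releases.

```shell
# Hypothetical sketch: split compute nodes into availability zones by
# which instance store they mount, then target a zone at boot time.

# In nova.conf on compute nodes whose instance store is the fast mount:
#   node_availability_zone=fast
# ...and on nodes using the slow mount:
#   node_availability_zone=slow

# Boot an instance onto the fast-storage nodes:
nova boot --availability-zone fast --image some-image --flavor m1.small instance-a

# Boot another onto the slow-storage nodes:
nova boot --availability-zone slow --image some-image --flavor m1.small instance-b
```

This only gives per-host placement (each host sees exactly one mount),
which is why it is a kludge rather than real storage affinity.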
> >
> > Cheers
> >
> > David
> >
> >
> >
> >
> >
> > On Thu, Dec 20, 2012 at 2:53 PM, Andrew Holway <a.holway@xxxxxxxxxxxx>
> wrote:
> > Hi David,
> >
> > It is for nova.
> >
> > I'm not sure I understand. I want to be able to say to openstack:
> > "openstack, please install this instance (A) on this mountpoint, and
> > please install this instance (B) on this other mountpoint." I am
> > planning on having two NFS / Gluster based stores, a fast one and a
> > slow one.
> >
> > I probably will not want to say please every time :)
> >
> > Thanks,
> >
> > Andrew
> >
> > On Dec 20, 2012, at 3:42 PM, David Busby wrote:
> >
> > > Hi Andrew,
> > >
> > > Is this for glance or nova?
> > >
> > > For nova change:
> > >
> > > state_path = /var/lib/nova
> > > lock_path = /var/lib/nova/tmp
> > >
> > > in your nova.conf
> > >
> > > For glance I'm unsure; it may be easier to just mount gluster right
> > > onto /var/lib/glance (similarly, you could do the same for
> > > /var/lib/nova).
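The mount-it-underneath approach can be sketched like this. The volume
name `vmstore` and server name `gluster1` are made up, and the paths
assume a typical Linux nova install; run as root with nova-compute
stopped.

```shell
# Sketch: back nova's instance directory with a gluster volume so disk
# files land on shared storage rather than the local disk.
mkdir -p /var/lib/nova/instances
mount -t glusterfs gluster1:/vmstore /var/lib/nova/instances
chown nova:nova /var/lib/nova/instances

# Persist the mount across reboots:
echo 'gluster1:/vmstore /var/lib/nova/instances glusterfs defaults,_netdev 0 0' >> /etc/fstab
```

The same pattern applies to /var/lib/glance for image storage.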
> > >
> > > And just my £0.02: I've had no end of problems getting gluster to
> > > "play nice" on small POC clusters (3-5 nodes; I've tried NFS, tried
> > > glusterfs, tried 2-replica N-distribute setups, with many a random
> > > glusterfs death), so I have opted for using ceph instead.
> > >
> > > From the brief reading I've been doing, ceph's RADOS can also be
> > > used with cinder.
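For reference, the cinder/RBD wiring mentioned above is a small config
change. This is a sketch only: the pool name `volumes` is made up, and
the driver's import path has varied between OpenStack releases, so check
the docs for your version.

```
# /etc/cinder/cinder.conf (fragment) -- hypothetical pool name
volume_driver=cinder.volume.driver.RBDDriver
rbd_pool=volumes
```

With this in place, cinder creates each volume as an RBD image in that
ceph pool.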
> > >
> > >
> > > Cheers
> > >
> > > David
> > >
> > >
> > >
> > >
> > >
> > > On Thu, Dec 20, 2012 at 1:53 PM, Andrew Holway <a.holway@xxxxxxxxxxxx>
> wrote:
> > > Hi,
> > >
> > > If I have /nfs1mount and /nfs2mount, or /nfs1mount and /glustermount,
> > > can I control where openstack puts the disk files?
> > >
> > > Thanks,
> > >
> > > Andrew
> > >
> > > _______________________________________________
> > > Mailing list: https://launchpad.net/~openstack
> > > Post to     : openstack@xxxxxxxxxxxxxxxxxxx
> > > Unsubscribe : https://launchpad.net/~openstack
> > > More help   : https://help.launchpad.net/ListHelp
> > >
> >
> >
> >
>
>
>
