openstack team mailing list archive

Re: Ceph + Nova

 

JuanFra, 

I do use CephFS in production, but not for the /var/lib/nova/instances directory. I host the OpenStack database and the OpenStack configuration files on it for an HA cloud controller cluster, but I am probably crazier than most people, and I have a very small deployment. So far I have not had any problems with it, and given the size of my cloud, I can afford to be very hands-on with it. 
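
For what it's worth, mounting CephFS for a shared directory like that is a one-liner with the kernel client. A minimal sketch, assuming a hypothetical monitor at mon1:6789 and the admin secret stored locally on the client (adjust names and paths to your setup):

  # Kernel-client CephFS mount (monitor host and paths are examples).
  mount -t ceph mon1:6789:/ /srv/shared \
      -o name=admin,secretfile=/etc/ceph/admin.secret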

The reason I have not hosted the /var/lib/nova/instances directory there is that it gets far more I/O activity than my small database does. Instead, I prefer to perform block migrations rather than live ones until CephFS becomes more stable. 
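
A block migration can be kicked off from the nova client. A minimal sketch, assuming a hypothetical instance vm1 and target host compute-02 (the flag has also been spelled --block_migrate in some client versions):

  # Block migration copies the disk over the network, so no shared
  # /var/lib/nova/instances is needed (instance/host names are examples).
  nova live-migration --block-migrate vm1 compute-02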


Dave Spano 
Optogenics 
Systems Administrator 


----- Original Message -----

From: "Sébastien Han" <han.sebastien@xxxxxxxxx> 
To: "JuanFra Rodríguez Cardoso" <juanfra.rodriguez.cardoso@xxxxxxxxx> 
Cc: "Openstack" <openstack@xxxxxxxxxxxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx> 
Sent: Wednesday, November 21, 2012 4:03:48 AM 
Subject: Re: [Openstack] Ceph + Nova 

Hi, 

I don't think this is the best place to ask your question, since it's 
not directly related to OpenStack but more to Ceph, so I've CC'd the 
ceph ML. Anyway, CephFS is not yet ready for production, although I've 
heard that some people do use it. The people from Inktank (the company 
behind Ceph) don't recommend it; AFAIR they expect it to be 
production-ready around Q2 2013. You can use it (I did, for testing 
purposes), but at your own risk. 
Besides this, RBD and RADOS are robust and stable now, so you can go 
with the Cinder and Glance integration without any problems. 
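
For reference, the wiring is only a few config options. A minimal sketch, assuming the usual 'volumes' and 'images' pools and a 'glance' cephx user (pool and user names are examples, and the Cinder driver path has moved between releases):

  # /etc/cinder/cinder.conf -- have Cinder create volumes as RBD images
  # (Folsom-era driver path; later releases use
  # cinder.volume.drivers.rbd.RBDDriver)
  volume_driver = cinder.volume.driver.RBDDriver
  rbd_pool = volumes

  # /etc/glance/glance-api.conf -- store images in RADOS via RBD
  default_store = rbd
  rbd_store_pool = images
  rbd_store_user = glance
  rbd_store_ceph_conf = /etc/ceph/ceph.conf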

Cheers! 

On Wed, Nov 21, 2012 at 9:37 AM, JuanFra Rodríguez Cardoso 
<juanfra.rodriguez.cardoso@xxxxxxxxx> wrote: 
> Hi everyone: 
> 
> I'd like to know your opinion as nova experts: 
> 
> Would you recommend CephFS as shared storage in /var/lib/nova/instances? 
> Another option would be to use GlusterFS or MooseFS for the 
> /var/lib/nova/instances directory and Ceph RBD for Glance and Nova volumes, 
> don't you think? 
> 
> Thanks for your attention. 
> 
> Best regards, 
> JuanFra 
> 

