Re: [ceph-users] Openstack with Ceph, boot from volume

Hi Weiguo,

My answers are inline.

-martin

On 30.05.2013 21:20, w sun wrote:
> I would suggest on nova compute host (particularly if you have
> separate compute nodes),
>
> (1) make sure "rbd ls -l -p " works and /etc/ceph/ceph.conf is
> readable by user nova!!
yes to both
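For reference, I verified both as the nova user (the pool name is a
placeholder):

    sudo -u nova rbd ls -l -p <pool>
    sudo -u nova cat /etc/ceph/ceph.conf
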
> (2) make sure you can start up a regular ephemeral instance on the
> same nova node (i.e., nova-compute is working correctly)
an ephemeral instance is working
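For completeness, a plain boot like this works on the same node (flavor
and image are examples, not my exact values):

    nova boot --flavor m1.tiny --image <image-id> test-ephemeral
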
> (3) if you are using cephx, make sure the libvirt secret is set up
> correctly per the instructions at ceph.com
I do not use cephx
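For anyone following along with cephx enabled, the ceph.com procedure is
roughly this sketch (the client.volumes user is the name used in the
guide, not something from my setup):

    # secret.xml:
    # <secret ephemeral='no' private='no'>
    #   <usage type='ceph'><name>client.volumes secret</name></usage>
    # </secret>
    virsh secret-define --file secret.xml
    virsh secret-set-value --secret <uuid printed by secret-define> \
        --base64 $(ceph auth get-key client.volumes)
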
> (4) look at /var/lib/nova/instances/xxxxxxxxxxxxx/libvirt.xml and check
> that the disk entry points to the rbd volume
For an ephemeral instance the folder is created; for a volume-based
instance the folder is not created.
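For comparison, on a working RBD-backed instance I would expect the disk
element in libvirt.xml to look roughly like this (pool and volume names
taken from the kvm example below; no <auth> element since I do not use
cephx):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='ceph-openstack-volumes/volume-3f964f79-febe-4251-b2ba-ac9423af419f'/>
      <target dev='vda' bus='virtio'/>
    </disk>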

> (5) If all of the above look fine and you still can't perform nova boot
> with the volume, as a last resort you can manually start a kvm
> session with the volume, similar to below. At least this will tell you
> whether your qemu has the correct rbd enablement.
>
>               /usr/bin/kvm -m 2048 -drive
> file=rbd:ceph-openstack-volumes/volume-3f964f79-febe-4251-b2ba-ac9423af419f,index=0,if=none,id=drive-virtio-disk0
> -boot c -net nic -net user -nographic  -vnc :1000 -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>
If I start kvm by hand, it works.
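Since kvm works by hand, qemu's rbd support looks fine. Judging from the
trace below, the ConnectionError is raised inside cinderclient, so it
seems nova-compute cannot reach the cinder API endpoint at all. I will
check along these lines from the compute node:

    keystone endpoint-list | grep 8776      # 8776 = default cinder API port
    curl http://<cinder-api-host>:8776/     # host/IP from the endpoint list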

> --weiguo
>
> > Date: Thu, 30 May 2013 16:37:40 +0200
> > From: martin@xxxxxxxxxxxx
> > To: ceph-users@xxxxxxxx
> > CC: openstack@xxxxxxxxxxxxxxxxxxx
> > Subject: [ceph-users] Openstack with Ceph, boot from volume
> >
> > Hi Josh,
> >
> > I am trying to use ceph with openstack (grizzly); I have a multi-host
> > setup.
> > I followed the instructions at
> > http://ceph.com/docs/master/rbd/rbd-openstack/.
> > Glance is working without a problem.
> > With cinder I can create and delete volumes without a problem.
> >
> > But I cannot boot from volumes.
> > It doesn't matter whether I use horizon or the CLI; the VM goes to the
> > error state.
> >
> > From the nova-compute.log I get this:
> >
> > 2013-05-30 16:08:45.224 ERROR nova.compute.manager
> > [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> > 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
> > device setup
> > .....
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101]
> > ENETUNREACH
> >
> > What is nova trying to reach? How can I debug this further?
> >
> > Full log included below.
> >
> > -martin
> >
> > Log:
> >
> > ceph --version
> > ceph version 0.61 (237f3f1e8d8c3b85666529860285dcdffdeda4c5)
> >
> > root@compute1:~# dpkg -l|grep -e ceph-common -e cinder
> > ii ceph-common 0.61-1precise
> > common utilities to mount and interact with a ceph storage
> > cluster
> > ii python-cinderclient 1:1.0.3-0ubuntu1~cloud0
> > python bindings to the OpenStack Volume API
> >
> >
> > nova-compute.log
> >
> > 2013-05-30 16:08:45.224 ERROR nova.compute.manager
> > [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> > 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
> > device setup
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] Traceback (most recent call last):
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1071,
> > in _prep_block_device
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] return
> > self._setup_block_device_mapping(context, instance, bdms)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 721, in
> > _setup_block_device_mapping
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] volume =
> > self.volume_api.get(context, bdm['volume_id'])
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 193,
> in get
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc]
> > self._reraise_translated_volume_exception(volume_id)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 190,
> in get
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] item =
> > cinderclient(context).volumes.get(volume_id)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/cinderclient/v1/volumes.py", line 180,
> > in get
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] return self._get("/volumes/%s"
> > % volume_id, "volume")
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/cinderclient/base.py", line 141,
> in _get
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] resp, body =
> > self.api.client.get(url)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 185,
> in get
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] return self._cs_request(url,
> > 'GET', **kwargs)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 153, in
> > _cs_request
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] **kwargs)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 123, in
> > request
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] **kwargs)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] return
> > session.request(method=method, url=url, **kwargs)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 279, in
> > request
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] resp = self.send(prep,
> > stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 374,
> in send
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] r = adapter.send(request,
> > **kwargs)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 206,
> in send
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] raise ConnectionError(sockerr)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101]
> > ENETUNREACH
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc]
> > 2013-05-30 16:08:45.329 AUDIT nova.compute.manager
> > [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> > 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Terminating instance

