
yahoo-eng-team team mailing list archive

[Bug 1255536] Re: Havana-ephemeral-rbd not working for non-admin ceph user


** Changed in: nova
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255536

Title:
  Havana-ephemeral-rbd not working for non-admin ceph user

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  We are using Havana with Ubuntu distribution with the following
  version,

         nova-compute                      1:2013.2-0ubuntu1~cloud0

  It seems that booting Nova instances from an image with ceph/rbd as the
  image backend store does not work when the cephx auth user is not
  admin. Please see the log attached at the end of this report.

  Looking at the code at
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/utils.py,
  it seems that two rbd functions are implemented for the default
  ceph/rbd admin user only; the rbd_user parameter in nova.conf is not
  honored:

         list_rbd_volumes(pool)
         remove_rbd_volumes(pool, *names)
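
  As a minimal sketch of the kind of change needed (the helper name
  rbd_list_args is hypothetical, not the actual upstream fix), the rbd
  command line would have to include the configured cephx user:

  ```python
  # Sketch only: build the rbd CLI argument list so a non-admin cephx
  # user (nova.conf's rbd_user option) is honored. Without --id, the
  # rbd CLI defaults to client.admin, which is what fails below.
  def rbd_list_args(pool, rbd_user=None):
      """Return the argument list for listing volumes in an rbd pool."""
      args = ['rbd', '-p', pool]
      if rbd_user:
          args += ['--id', rbd_user]
      args.append('ls')
      return args
  ```

  The callers would then use utils.execute(*rbd_list_args(pool,
  CONF.rbd_user)) instead of the hard-coded utils.execute('rbd', '-p',
  pool, 'ls').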

  ---------------------------------------------------------------------
  2013-11-27 11:01:17.374 9105 DEBUG nova.block_device [req-c25a18df-8365-4804-8009-09eafc647c16 b893a14998d44dd1be27117ed47954ac e5ba1977bc1849179c7e7430be7496d2] block_device_list [] volume_in_mapping /usr/lib/python2.7/dist-packages/nova/block_device.py:496
  2013-11-27 11:01:17.375 9105 INFO nova.virt.libvirt.driver [req-c25a18df-8365-4804-8009-09eafc647c16 b893a14998d44dd1be27117ed47954ac e5ba1977bc1849179c7e7430be7496d2] [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0] Creating image
  2013-11-27 11:01:17.376 9105 DEBUG nova.openstack.common.processutils [req-c25a18df-8365-4804-8009-09eafc647c16 b893a14998d44dd1be27117ed47954ac e5ba1977bc1849179c7e7430be7496d2] Running cmd (subprocess): rbd -p svl-stack-mgmt-openstack-volumes ls execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:147
  2013-11-27 11:01:17.399 9105 DEBUG nova.openstack.common.processutils [req-c25a18df-8365-4804-8009-09eafc647c16 b893a14998d44dd1be27117ed47954ac e5ba1977bc1849179c7e7430be7496d2] Result was 1 execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:172
  2013-11-27 11:01:17.401 9105 ERROR nova.compute.manager [req-c25a18df-8365-4804-8009-09eafc647c16 b893a14998d44dd1be27117ed47954ac e5ba1977bc1849179c7e7430be7496d2] [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0] Instance failed to spawn
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0] Traceback (most recent call last):
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1407, in _spawn
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]     block_device_info)
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2063, in spawn
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]     admin_pass=admin_password)
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2353, in _create_image
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]     project_id=instance['project_id'])
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 172, in cache
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]     if not self.check_image_exists() or not os.path.exists(base):
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 499, in check_image_exists
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]     rbd_volumes = libvirt_utils.list_rbd_volumes(self.pool)
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 270, in list_rbd_volumes
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]     out, err = utils.execute('rbd', '-p', pool, 'ls')
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 177, in execute
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0]     return processutils.execute(*cmd, **kwargs)
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0] Command: rbd -p svl-stack-mgmt-openstack-volumes ls
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0] Exit code: 1
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0] Stdout: ''
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0] Stderr: "2013-11-27 11:01:17.397471 7fd488fd2780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication\n2013-11-27 11:01:17.397490 7fd488fd2780  0 librados: client.admin authentication error (95) Operation not supported\nrbd: couldn't connect to the cluster!\n"
  2013-11-27 11:01:17.401 9105 TRACE nova.compute.manager [instance: bf9351c7-5cb6-41e7-bd82-6d89f703dda0] 

  --------------------------

  I have the following in nova.conf on the nova compute node,

           # storage configuration for default nova boot 
           libvirt_images_type=rbd
           libvirt_images_rbd_pool=svl-stack-mgmt-openstack-volumes
           libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
           rbd_user=svl-stack-mgmt-openstack-volumes
           rbd_secret_uuid=e7724cad-32f8-d4ce-6c68-5f40491b15dd

  I have the following in /etc/ceph/

        -rw-r--r--  1 root root  266 Nov 27 01:09 ceph.client.svl-stack-mgmt-openstack-volumes.keyring
        -rw-r--r--  1 root root  494 Nov 25 23:28 ceph.conf

  For virsh secrets, I have

  > root@svl-cc-nova1-002:/var/log/nova# virsh secret-list
        UUID                                 Usage
        -----------------------------------------------------------
        e7724cad-32f8-d4ce-6c68-5f40491b15dd Unused

  > root@svl-cc-nova1-002:/var/log/nova# virsh secret-get-value e7724cad-32f8-d4ce-6c68-5f40491b15dd
            AQDSQZBR6CDwLhAAt99+BdfsdffMQw63dBId7A==

  I am able to execute the following command without error on the nova
  compute node,

  > rbd ls -p svl-stack-mgmt-openstack-volumes --id svl-stack-mgmt-openstack-volumes

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255536/+subscriptions