[Bug 1671422] Re: charms: nova/cinder/ceph rbd integration broken on Ocata
** No longer affects: nova
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1671422
Title:
charms: nova/cinder/ceph rbd integration broken on Ocata
Status in OpenStack cinder-ceph charm:
Fix Released
Status in OpenStack Charm Guide:
Fix Released
Status in OpenStack nova-compute charm:
Fix Released
Bug description:
https://github.com/openstack/nova/commit/b89efa3ef611a1932df0c2d6e6f30315b5111a57
introduced a change in Ocata whereby the connection data provided by
cinder for rbd block devices is preferred over any local libvirt
sectional configuration for rbd (which previously took precedence).
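For reference, the two places this data can come from in a typical
charm deployment look roughly like the following; the section and
value names are illustrative assumptions about a standard
nova-compute/cinder-ceph setup, not copied from an affected unit:

    # nova.conf on a nova-compute unit ([libvirt] section, previously authoritative)
    [libvirt]
    rbd_user = nova-compute
    rbd_secret_uuid = <uuid of the libvirt secret holding the compute key>

    # cinder.conf on a cinder unit (backend section written by the cinder-ceph
    # charm, now preferred by nova when attaching)
    [cinder-ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = cinder-ceph
    rbd_user = cinder-ceph
    # rbd_secret_uuid is not set here, which is the first half of the problem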
As a result, it's not possible to attach Ceph block devices to
instances in a charm-deployed Ocata cloud: the secret_uuid
configuration is not populated in the cinder configuration file, and
in any case the Ceph username on the compute units won't match the
username being used on the cinder units (compute and cinder units
each get their own keys created), so I don't think the key held on
the compute units will actually work with the username provided by
cinder.
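To make the mismatch concrete, the two Ceph clients involved would
look something like this (output trimmed; client.nova-compute and
client.cinder-ceph are assumed names for the keys the respective
charms create):

    $ sudo ceph auth get client.nova-compute      # key the compute units hold
    [client.nova-compute]
        key = <compute key>
        caps mon = "allow r"
        caps osd = "allow rwx pool=images, allow rwx pool=volumes, allow rwx pool=vms"

    $ sudo ceph auth get client.cinder-ceph       # user cinder hands back for the attach
    [client.cinder-ceph]
        key = <different key, held only on the cinder units>
        caps mon = "allow r"
        caps osd = "allow rwx pool=images, allow rwx pool=volumes, allow rwx pool=vms"

libvirt on the compute host only has a secret defined for the compute
key, so an attach using rbd_user=cinder-ceph with that secret (or
with no secret_uuid at all) cannot authenticate.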
I'm not 100% convinced this is a great change in behaviour; the
cinder and nova keys have much the same permissions for correct
operation (rwx on the images, volumes and vms pools), but it does
mean that the nova-compute units have to hold the same key as the
cinder units. A key disclosure/compromise on a cinder unit would then
require a revoke and re-issue across a large number of units (compute
units are likely to number in the 100s-1000s, whereas the number of
cinder units will be minimal).
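As a rough sketch of that blast radius, rotating a shared key would
mean re-issuing it on the Ceph side and then touching every compute
unit that holds it; the commands below are standard ceph/virsh ones,
but the key names, paths and placeholders are assumptions:

    # on a ceph-mon unit: revoke the compromised key and issue a new one
    $ sudo ceph auth del client.cinder-ceph
    $ sudo ceph auth get-or-create client.cinder-ceph \
          mon 'allow r' \
          osd 'allow rwx pool=volumes, allow rwx pool=images, allow rwx pool=vms'

    # on every nova-compute unit (potentially hundreds): update the libvirt secret
    $ sudo virsh secret-set-value --secret <secret uuid> --base64 <new key>

    # on the cinder units: update the keyring and restart cinder-volume
    $ sudo ceph-authtool /etc/ceph/ceph.client.cinder-ceph.keyring \
          --name client.cinder-ceph --add-key <new key>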
To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-cinder-ceph/+bug/1671422/+subscriptions