
openstack team mailing list archive

Re: Glance with Swift backend auth failure using Keystone

 

OK, having changed my swift_store_auth_address to point to my Keystone URL, I now receive the following error when trying to upload an image to Swift with glance:

root@nova:~/images/ubuntu-11.10# glance -v -d -A glance111213141516171819 add name="Ubuntu 11.10 ramdisk" disk_format=ari container_format=ari is_public=true < initrd.img-3.0.0-12-server
Failed to add image. Got error:
400 Bad Request

The server could not comply with the request since it is either malformed or otherwise incorrect.

 Error uploading image: (AttributeError): 'NoneType' object has no attribute 'find'
Note: Your image metadata may still be in the registry, but the image's status will likely be 'killed'.
Completed in 0.3898 sec.

This is identical to the error received when using the swift CLI to interact with the object store without forcing version 2 authentication, e.g.

root@nova:~/images/ubuntu-11.10# swift -A http://173.23.181.1:5000/v2.0 -U glance:glance -K glance stat -v
Traceback (most recent call last):
  File "/usr/bin/swift", line 1939, in <module>
    error_queue)
  File "/usr/bin/swift", line 1446, in st_stat
    headers = conn.head_account()
  File "/usr/bin/swift", line 904, in head_account
    return self._retry(None, head_account)
  File "/usr/bin/swift", line 876, in _retry
    self.http_conn = self.http_connection()
  File "/usr/bin/swift", line 864, in http_connection
    return http_connection(self.url)
  File "/usr/bin/swift", line 165, in http_connection
    parsed = urlparse(url)
  File "/usr/lib/python2.7/urlparse.py", line 135, in urlparse
    tuple = urlsplit(url, scheme, allow_fragments)
  File "/usr/lib/python2.7/urlparse.py", line 174, in urlsplit
    i = url.find(':')
AttributeError: 'NoneType' object has no attribute 'find'

Indeed, the stack trace in the api.log file (from the glance command) is identical, presumably because the v1-style auth GET against Keystone returns no storage URL and the client ends up calling urlparse(None). Forcing swift to use version 2 authentication, by contrast, works fine, e.g.

root@nova:~/images/ubuntu-11.10# swift -V 2 -A http://173.23.181.1:5000/v2.0 -U glance:glance -K glance stat -v
StorageURL: http://173.23.181.2:8080/v1/AUTH_4
Auth Token: glance111213141516171819
   Account: AUTH_4
Containers: 1
   Objects: 0
     Bytes: 0
Accept-Ranges: bytes
X-Trans-Id: tx4c0be8b526434ee5b0d07c6cfa8ddb8f

As a reminder, my OpenStack installation is based on the PPA packages maintained by ManagedIT.

I seem to remember reading in someone's blog that Swift is pretty dumb about negotiating the authentication version.  That being the case, how do you force version 2 auth in the glance/swift config files?  Or am I completely in the weeds right now?
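Concretely, is it something like the following in glance-api.conf?  I'm guessing at the swift_store_auth_version option here and haven't been able to confirm it exists in the packaged glance, so treat this as a sketch of what I'm after rather than a known-good config:

# Keystone v2.0 endpoint (already changed as Tom suggested)
swift_store_auth_address = http://173.23.181.1:5000/v2.0

# Guessed option to pin the auth version, analogous to 'swift -V 2'
swift_store_auth_version = 2

# Unchanged user and key from my existing config
swift_store_user = glance:glance
swift_store_key = glance111213141516171819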

Thanks in advance,
Ross



On Feb 13, 2012, at 6:44 PM, Hancock, Tom (HP Cloud Services) wrote:

Hi Ross,
        Try this: use http://173.23.181.1:5000/v2.0 for your swift_store_auth_address.
Your swift client (glance) needs to do just what the swift command is doing,
i.e. communicate with Keystone.
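In glance-api.conf terms, that is:

# Point the Swift store at Keystone rather than at the Swift proxy
swift_store_auth_address = http://173.23.181.1:5000/v2.0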

I hope this helps,
Tom

---

Tomas Hancock, HP Cloud Services, Hewlett Packard, Galway. Ireland +353-91-754765


From: openstack-bounces+tom.hancock=hp.com@xxxxxxxxxxxxxxxxxxx [mailto:openstack-bounces+tom.hancock=hp.com@xxxxxxxxxxxxxxxxxxx] On Behalf Of Lillie Ross-CDSR11
Sent: 13 February 2012 22:03
To: openstack@xxxxxxxxxxxxxxxxxxx
Subject: [Openstack] Glance with Swift backend auth failure using Keystone

As one of the last steps in bringing up a multi-node OpenStack testbed, I'm trying to integrate a Swift backend with Glance, all of which uses Keystone for authorization. Unfortunately, when trying to upload images using glance I receive an authorization error.  For background: Glance and Keystone are running on the node at 173.23.181.1, and the Swift proxy is running on 173.23.181.2.
When configured with the file backend, glance works fine using Keystone authentication.

The Glance storage backend is configured as follows (from glance-api.conf):

# Address where the Swift authentication service lives
swift_store_auth_address = http://173.23.181.2:8080/v1.0

# User to authenticate against the Swift authentication service
swift_store_user = glance:glance

# Auth key for the user authenticating against the
# Swift authentication service
swift_store_key = glance111213141516171819

# Container within the account that the account should use
# for storing images in Swift
swift_store_container = images

For debugging, I've verified that my Swift installation is working with Keystone.  For example:

root@nova:~/images/ubuntu-11.10# swift -V 2 -A http://173.23.181.1:5000/v2.0 -U glance:glance -K glance stat -v
StorageURL: http://173.23.181.2:8080/v1/AUTH_4
Auth Token: glance111213141516171819
   Account: AUTH_4
Containers: 1
   Objects: 0
     Bytes: 0
Accept-Ranges: bytes
X-Trans-Id: tx0f4a557d0e3046f1a4f8d10180d55e0b

and I'm able to create/delete buckets and files with no problems.  However, when attempting to upload an image file using glance, I receive the following error:

root@nova:~/images/ubuntu-11.10# glance -A glance111213141516171819 add name="Ubuntu 11.10 ramdisk" disk_format=ari container_format=ari is_public=true < initrd.img-3.0.0-12-server
Failed to add image. Got error:
400 Bad Request

The server could not comply with the request since it is either malformed or otherwise incorrect.

 Error uploading image: (ClientException): Auth GET failed: http://173.23.181.2:8080/v1.0 401 Unauthorized
Note: Your image metadata may still be in the registry, but the image's status will likely be 'killed'.

and the output from the log files is shown below:

root@nova:~/images/ubuntu-11.10# more /var/log/glance/api.log
2012-02-13 15:46:55    DEBUG [glance.api.middleware.version_negotiation] Processing request: POST /v1/images Accept:
2012-02-13 15:46:55    DEBUG [glance.api.middleware.version_negotiation] Matched versioned URI. Version: 1.0
2012-02-13 15:46:55    DEBUG [root] HTTP PERF: 0.02184 seconds to GET 173.23.181.1:35357 /v2.0/tokens/glance111213141516171819)
2012-02-13 15:46:55    DEBUG [root] HTTP PERF: 0.01876 seconds to GET 173.23.181.1:35357 /v2.0/tokens/glance111213141516171819)
2012-02-13 15:46:55    DEBUG [routes.middleware] Matched POST /images
2012-02-13 15:46:55    DEBUG [routes.middleware] Route path: '/images', defaults: {'action': u'create', 'controller': <glance.common.wsgi.Resource object at 0x1d9ce50>}
2012-02-13 15:46:55    DEBUG [routes.middleware] Match dict: {'action': u'create', 'controller': <glance.common.wsgi.Resource object at 0x1d9ce50>}
2012-02-13 15:46:55    DEBUG [glance.registry] Adding image metadata...
2012-02-13 15:46:55    DEBUG [glance.registry]      container_format: ari
2012-02-13 15:46:55    DEBUG [glance.registry]           disk_format: ari
2012-02-13 15:46:55    DEBUG [glance.registry]             is_public: True
2012-02-13 15:46:55    DEBUG [glance.registry]              min_disk: 0
2012-02-13 15:46:55    DEBUG [glance.registry]               min_ram: 0
2012-02-13 15:46:55    DEBUG [glance.registry]                  name: Ubuntu 11.10 ramdisk
2012-02-13 15:46:55    DEBUG [glance.registry]                  size: 13638383
2012-02-13 15:46:55    DEBUG [glance.registry]                status: queued
2012-02-13 15:46:55    DEBUG [glance.registry] Returned image metadata from call to RegistryClient.add_image():
2012-02-13 15:46:55    DEBUG [glance.registry]              checksum: None
2012-02-13 15:46:55    DEBUG [glance.registry]      container_format: ari
2012-02-13 15:46:55    DEBUG [glance.registry]            created_at: 2012-02-13T21:46:55
2012-02-13 15:46:55    DEBUG [glance.registry]               deleted: False
2012-02-13 15:46:55    DEBUG [glance.registry]            deleted_at: None
2012-02-13 15:46:55    DEBUG [glance.registry]           disk_format: ari
2012-02-13 15:46:55    DEBUG [glance.registry]                    id: 28
2012-02-13 15:46:55    DEBUG [glance.registry]             is_public: True
2012-02-13 15:46:55    DEBUG [glance.registry]              location: None
2012-02-13 15:46:55    DEBUG [glance.registry]              min_disk: 0
2012-02-13 15:46:55    DEBUG [glance.registry]               min_ram: 0
2012-02-13 15:46:55    DEBUG [glance.registry]                  name: Ubuntu 11.10 ramdisk
2012-02-13 15:46:55    DEBUG [glance.registry]                 owner: 4
2012-02-13 15:46:55    DEBUG [glance.registry]                  size: 13638383
2012-02-13 15:46:55    DEBUG [glance.registry]                status: queued
2012-02-13 15:46:55    DEBUG [glance.registry]            updated_at: None
2012-02-13 15:46:55    DEBUG [glance.api.v1.images] Setting image 28 to status 'saving'
2012-02-13 15:46:55    DEBUG [glance.registry] Updating image metadata for image 28...
2012-02-13 15:46:55    DEBUG [glance.registry]                status: saving
2012-02-13 15:46:55    DEBUG [glance.registry] Returned image metadata from call to RegistryClient.update_image():
2012-02-13 15:46:55    DEBUG [glance.registry]              checksum: None
2012-02-13 15:46:55    DEBUG [glance.registry]      container_format: ari
2012-02-13 15:46:55    DEBUG [glance.registry]            created_at: 2012-02-13T21:46:55
2012-02-13 15:46:55    DEBUG [glance.registry]               deleted: False
2012-02-13 15:46:55    DEBUG [glance.registry]            deleted_at: None
2012-02-13 15:46:55    DEBUG [glance.registry]           disk_format: ari
2012-02-13 15:46:55    DEBUG [glance.registry]                    id: 28
2012-02-13 15:46:55    DEBUG [glance.registry]             is_public: True
2012-02-13 15:46:55    DEBUG [glance.registry]              location: None
2012-02-13 15:46:55    DEBUG [glance.registry]              min_disk: 0
2012-02-13 15:46:55    DEBUG [glance.registry]               min_ram: 0
2012-02-13 15:46:55    DEBUG [glance.registry]                  name: Ubuntu 11.10 ramdisk
2012-02-13 15:46:55    DEBUG [glance.registry]                 owner: 4
2012-02-13 15:46:55    DEBUG [glance.registry]                  size: 13638383
2012-02-13 15:46:55    DEBUG [glance.registry]                status: saving
2012-02-13 15:46:55    DEBUG [glance.registry]            updated_at: 2012-02-13T21:46:55
2012-02-13 15:46:55    DEBUG [glance.api.v1.images] Uploading image data for image 28 to swift store
2012-02-13 15:46:55    DEBUG [glance.store.swift] Creating Swift connection with (auth_address=http://173.23.181.2:8080/v1.0, user=glance:glance, snet=False)
2012-02-13 15:46:55    DEBUG [root] HTTP PERF: 0.00160 seconds to GET 173.23.181.2:8080 /v1.0)
2012-02-13 15:46:56    DEBUG [root] HTTP PERF: 0.00198 seconds to GET 173.23.181.2:8080 /v1.0)
2012-02-13 15:46:56    ERROR [glance.api.v1.images] Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/glance/api/v1/images.py", line 372, in _upload
    image_size)
  File "/usr/lib/python2.7/dist-packages/glance/store/swift.py", line 321, in add
    create_container_if_missing(self.container, swift_conn, self.options)
  File "/usr/lib/python2.7/dist-packages/glance/store/swift.py", line 478, in create_container_if_missing
    swift_conn.head_container(container)
  File "/usr/lib/python2.7/dist-packages/swift/common/client.py", line 822, in head_container
    return self._retry(None, head_container, container)
  File "/usr/lib/python2.7/dist-packages/swift/common/client.py", line 774, in _retry
    self.url, self.token = self.get_auth()
  File "/usr/lib/python2.7/dist-packages/swift/common/client.py", line 762, in get_auth
    return get_auth(self.authurl, self.user, self.key, snet=self.snet)
  File "/usr/lib/python2.7/dist-packages/swift/common/client.py", line 190, in get_auth
    http_reason=resp.reason)
ClientException: Auth GET failed: http://173.23.181.2:8080/v1.0 401 Unauthorized

2012-02-13 15:46:56    DEBUG [glance.registry] Updating image metadata for image 28...
2012-02-13 15:46:56    DEBUG [glance.registry]                status: killed
2012-02-13 15:46:56    DEBUG [glance.registry] Returned image metadata from call to RegistryClient.update_image():
2012-02-13 15:46:56    DEBUG [glance.registry]              checksum: None
2012-02-13 15:46:56    DEBUG [glance.registry]      container_format: ari
2012-02-13 15:46:56    DEBUG [glance.registry]            created_at: 2012-02-13T21:46:55
2012-02-13 15:46:56    DEBUG [glance.registry]               deleted: False
2012-02-13 15:46:56    DEBUG [glance.registry]            deleted_at: None
2012-02-13 15:46:56    DEBUG [glance.registry]           disk_format: ari
2012-02-13 15:46:56    DEBUG [glance.registry]                    id: 28
2012-02-13 15:46:56    DEBUG [glance.registry]             is_public: True
2012-02-13 15:46:56    DEBUG [glance.registry]              location: None
2012-02-13 15:46:56    DEBUG [glance.registry]              min_disk: 0
2012-02-13 15:46:56    DEBUG [glance.registry]               min_ram: 0
2012-02-13 15:46:56    DEBUG [glance.registry]                  name: Ubuntu 11.10 ramdisk
2012-02-13 15:46:56    DEBUG [glance.registry]                 owner: 4
2012-02-13 15:46:56    DEBUG [glance.registry]                  size: 13638383
2012-02-13 15:46:56    DEBUG [glance.registry]                status: killed
2012-02-13 15:46:56    DEBUG [glance.registry]            updated_at: 2012-02-13T21:46:56
2012-02-13 15:46:56    DEBUG [eventlet.wsgi.server] 127.0.0.1 - - [13/Feb/2012 15:46:56] "POST /v1/images HTTP/1.1" 400 351 1.515732

As always, I'm sure this is just a subtle config error on my part.  Note that in my setup, I've created a separate tenant and user (glance) to be used for image storage.  My Keystone auth setup uses a separate long-lived admin token for authentication.  If needed, I can post my configuration files.
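For completeness, this is roughly how that tenant and user were created. I'm recalling the keystone-manage invocations from memory, and the exact subcommand syntax may differ in the ManagedIT packaging, so treat this as a sketch rather than the literal commands I ran:

# Sketch from memory; subcommand syntax may not match this
# keystone-manage version exactly
keystone-manage tenant add glance
keystone-manage user add glance glance
keystone-manage role grant Member glance glance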

Any insight will be appreciated.  Thanks in advance and regards,

Ross



