Re: glance_api_servers vs. glance_host vs. keystone?

Hey Lars,

Sadly I don't have much in the way of a solution for you, but I do have
some suggestions.  Comments inline.

On 6/15/12 4:17 PM, "Lars Kellogg-Stedman" <lars@xxxxxxxxxxxxxxxx> wrote:

>Howdy all,
>
>I've spent the past few days slogging through the initial steps of
>getting OpenStack up and running and I seem to have hit a wall.  If this
>isn't the right list for this question, please feel free to direct me
>elsewhere.
>
>I have two systems running the OpenStack components right now.  The
>"controller" runs nova-api, nova-volume, nova-objectstore, glance, and
>keystone.  The "compute host" runs nova-compute.  All of the parts seem
>to talk to each other successfully.  For example, I can run 'nova
>image-list' on either system and get a list of available images:
>
># nova image-list
>+--------------------------------------+------------+--------+--------+
>|                  ID                  |    Name    | Status | Server |
>+--------------------------------------+------------+--------+--------+
>| 383bbfab-01db-4089-a8fd-1a2735040af5 | DSL 4.4.10 | ACTIVE |        |
>+--------------------------------------+------------+--------+--------+
>
>When I try to deploy a new guest using the 'nova boot' command:
>
># nova boot --flavor m1.small --image 383bbfab-01db-4089-a8fd-1a2735040af5 lars0
>
>The guest ends up permanently stuck in the BUILD state:
>
># nova list
>+--------------------------------------+-------+--------+----------+
>|                  ID                  |  Name | Status | Networks |
>+--------------------------------------+-------+--------+----------+
>| 06b343e6-bc6b-4e0b-baed-cda55cb85695 | lars0 | BUILD  |          |
>+--------------------------------------+-------+--------+----------+
>
>This is a surprisingly permanent condition.  The server will never move
>out of the BUILD state, and it's not possible to delete it using
>'nova delete', either.
>
>Looking at /var/log/nova/compute.log on the compute host, I don't see
>anything specific.  I do see this:
>
>ERROR nova.rpc.impl_qpid [-] Timed out waiting for RPC response: None
>TRACE nova.rpc.impl_qpid Traceback (most recent call last):
>TRACE nova.rpc.impl_qpid   File
>"/usr/lib/python2.6/site-packages/nova/rpc/impl_qpid.py", line 359, in
>ensure
>TRACE nova.rpc.impl_qpid     return method(*args, **kwargs)
>TRACE nova.rpc.impl_qpid   File
>"/usr/lib/python2.6/site-packages/nova/rpc/impl_qpid.py", line 408, in
>_consume
>TRACE nova.rpc.impl_qpid     nxt_receiver =
>self.session.next_receiver(timeout=timeout)
>TRACE nova.rpc.impl_qpid   File "<string>", line 6, in next_receiver
>TRACE nova.rpc.impl_qpid   File
>"/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 651,
>in next_receiver
>TRACE nova.rpc.impl_qpid     raise Empty
>TRACE nova.rpc.impl_qpid Empty: None

I'm used to using rabbit, but I did notice you didn't include a
nova-scheduler in your list above, and this message seems to be saying it
can't find an endpoint for qpid... possibly related?  Again, I know nothing
about qpid, but is there some way to see if the message is hitting qpid
and getting stuck there?
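
If it helps, here's roughly where I'd poke first (this assumes qpid-tools
is installed for qpid-stat and that you run these on the controller;
adjust for your setup):

# nova-manage service list
# qpid-stat -q

The first should show whether nova-scheduler is actually registered and
checking in alongside nova-compute; the second should show whether
messages are piling up unconsumed in one of the qpid queues.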

>
>On the controller, I'm seeing a lot of this in /var/log/nova/api.log:
>
>DEBUG nova.api.openstack.wsgi [req-5e4dc971-cb14-469a-9239-080a8c551b65
>  22bb8e502d3944ad953e72fc77879c2f 76e2726cacca4be0bde6d8840f88c136]
>Unrecognized
>  Content-Type provided in request from (pid=1044) get_body
>  /usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:697

We should probably figure out if anyone actually cares about this.  It
litters our logs but seems to have no effect on anything.  In any case,
you can ignore this error.

One final piece of info that would be interesting to know is the vm_state
and task_state from the db for the instances stuck in build.  That would
let us know just how far the instance got in the building process.  My
guess is that it is stuck in "scheduling".
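
Something like this should pull those out (a rough sketch assuming MySQL
and the default 'nova' schema; adjust the credentials, database name, and
UUID for your install):

# mysql -u nova -p nova -e "SELECT uuid, vm_state, task_state, host \
    FROM instances WHERE uuid='06b343e6-bc6b-4e0b-baed-cda55cb85695';"

If task_state comes back as 'scheduling' with no host set, the request
probably never made it past the scheduler, which would fit with the qpid
timeout above.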

Gabe

>
>None of the command-line tools are producing any kind of visible
>error.  I'm not really sure where I should be looking for problems at
>this point.
>
>Following is TMI:
>
>I'm running the Essex release under CentOS 6.2:
>
># rpm -qa 'openstack*'
>openstack-utils-2012.1-1.el6.noarch
>openstack-glance-2012.1-5.el6.noarch
>openstack-dashboard-2012.1-4.el6.noarch
>openstack-quantum-2012.1-5.el6.noarch
>openstack-nova-2012.1-11.el6.noarch
>openstack-keystone-2012.1-3.el6.noarch
>openstack-swift-1.4.8-2.el6.noarch
>openstack-quantum-linuxbridge-2012.1-5.el6.noarch
>
>And the available endpoints are:
>
>+-------------+------------------------------------------------------------------------------------+
>|     nova    |                                        Value                                       |
>+-------------+------------------------------------------------------------------------------------+
>| adminURL    | http://os-controller.int.seas.harvard.edu:8774/v2/76e2726cacca4be0bde6d8840f88c136 |
>| internalURL | http://os-controller.int.seas.harvard.edu:8774/v2/76e2726cacca4be0bde6d8840f88c136 |
>| publicURL   | http://os-controller.int.seas.harvard.edu:8774/v2/76e2726cacca4be0bde6d8840f88c136 |
>| region      | SEAS                                                                                |
>| serviceName | nova                                                                                |
>+-------------+------------------------------------------------------------------------------------+
>+-------------+---------------------------------------------------+
>|    glance   |                       Value                       |
>+-------------+---------------------------------------------------+
>| adminURL    | http://os-controller.int.seas.harvard.edu:9292/v1 |
>| internalURL | http://os-controller.int.seas.harvard.edu:9292/v1 |
>| publicURL   | http://os-controller.int.seas.harvard.edu:9292/v1 |
>| region      | SEAS                                              |
>+-------------+---------------------------------------------------+
>+-------------+------------------------------------------------------------------------------------+
>|    volume   |                                        Value                                       |
>+-------------+------------------------------------------------------------------------------------+
>| adminURL    | http://os-controller.int.seas.harvard.edu:8776/v1/76e2726cacca4be0bde6d8840f88c136 |
>| internalURL | http://os-controller.int.seas.harvard.edu:8776/v1/76e2726cacca4be0bde6d8840f88c136 |
>| publicURL   | http://os-controller.int.seas.harvard.edu:8776/v1/76e2726cacca4be0bde6d8840f88c136 |
>| region      | SEAS                                                                                |
>+-------------+------------------------------------------------------------------------------------+
>+-------------+---------------------------------------------------------------+
>|     ec2     |                             Value                             |
>+-------------+---------------------------------------------------------------+
>| adminURL    | http://os-controller.int.seas.harvard.edu:8773/services/Admin |
>| internalURL | http://os-controller.int.seas.harvard.edu:8773/services/Cloud |
>| publicURL   | http://os-controller.int.seas.harvard.edu:8773/services/Cloud |
>| region      | SEAS                                                          |
>+-------------+---------------------------------------------------------------+
>+-------------+-----------------------------------------------------------------------------------------+
>|    swift    |                                          Value                                          |
>+-------------+-----------------------------------------------------------------------------------------+
>| adminURL    | http://os-controller.int.seas.harvard.edu:8888/                                         |
>| internalURL | http://os-controller.int.seas.harvard.edu:8888/v1/AUTH_76e2726cacca4be0bde6d8840f88c136 |
>| publicURL   | http://os-controller.int.seas.harvard.edu:8888/v1/AUTH_76e2726cacca4be0bde6d8840f88c136 |
>| region      | SEAS                                                                                    |
>+-------------+-----------------------------------------------------------------------------------------+
>+-------------+-------------------------------------------------------+
>|   keystone  |                         Value                         |
>+-------------+-------------------------------------------------------+
>| adminURL    | http://os-controller.int.seas.harvard.edu:35357/v2.0/ |
>| internalURL | http://os-controller.int.seas.harvard.edu:5000/v2.0/  |
>| publicURL   | http://os-controller.int.seas.harvard.edu:5000/v2.0/  |
>| region      | SEAS                                                  |
>+-------------+-------------------------------------------------------+
>
>-- 
>Lars Kellogg-Stedman <lars@xxxxxxxxxxxxxxxx>       |
>Senior Technologist                                | http://ac.seas.harvard.edu/
>Academic Computing                                 | http://code.seas.harvard.edu/
>Harvard School of Engineering and Applied Sciences |


