yahoo-eng-team team mailing list archive

[Bug 1575661] Re: can not deploy a partition image to Ironic node

 

** Also affects: ironic
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1575661

Title:
  can not deploy a partition image to Ironic node

Status in Ironic:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Using a fresh master of DevStack, I cannot deploy partition images to
  Ironic nodes via Nova.

  I have two images in Glance: a kernel image and a partition image with
  the kernel_id property set.

  I have configured the Ironic nodes and the Nova flavor with the
  capability "boot_option: local" as described in [0].
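
  For reference, the capability setup from [0] amounts to commands along
  these lines (the node UUID and flavor name here are placeholders):

```shell
# Advertise local boot support on each Ironic node:
ironic node-update $NODE_UUID add properties/capabilities="boot_option:local"

# Require local boot on the matching Nova flavor:
nova flavor-key baremetal set capabilities:boot_option="local"
```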

  When I try to boot a Nova instance with the partition image and the
  configured flavor, the instance goes to ERROR:

  $ openstack server list
  +--------------------------------------+--------+--------+----------+
  | ID                                   | Name   | Status | Networks |
  +--------------------------------------+--------+--------+----------+
  | 6cde85d2-47ad-446b-9a1f-960dbcca5199 | parted | ERROR  |          |
  +--------------------------------------+--------+--------+----------+
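
  The boot command was along these lines (image, flavor, and network
  names are placeholders for my local setup):

```shell
openstack server create --image partition-image --flavor baremetal \
    --nic net-id=$NET_ID parted
```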

  The instance is assigned to an Ironic node, but the node is not moved
  to the deploying state:

  $ openstack baremetal list
  +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
  | UUID                                 | Name   | Instance UUID                        | Power State | Provisioning State | Maintenance |
  +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
  | 95d3353f-61a6-44ba-8485-2881d1138ce1 | node-0 | None                                 | power off   | available          | False       |
  | 48112a56-8f8b-42fc-b143-742cf4856e78 | node-1 | 6cde85d2-47ad-446b-9a1f-960dbcca5199 | power off   | available          | False       |
  | c66a1035-5edf-434b-9d09-39ecc9069e02 | node-2 | None                                 | power off   | available          | False       |
  +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+

  In n-cpu.log I see the following errors:

  2016-04-27 15:26:13.190 ERROR ironicclient.common.http [req-077efca4-1776-443b-bd70-0769c09a0e54 demo demo] Error contacting Ironic server: Instance 6cde85d2-47ad-446b-9a1f-960dbcca5199 is already associated with a node, it cannot be associated with this other node c66a1035-5edf-434b-9d09-39ecc9069e02 (HTTP 409). Attempt 2 of 2
  2016-04-27 15:26:13.190 ERROR nova.compute.manager [req-077efca4-1776-443b-bd70-0769c09a0e54 demo demo] [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] Instance failed to spawn
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] Traceback (most recent call last):
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]   File "/opt/stack/nova/nova/compute/manager.py", line 2209, in _build_resources
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]     yield resources
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]   File "/opt/stack/nova/nova/compute/manager.py", line 2055, in _build_and_run_instance
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]     block_device_info=block_device_info)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]   File "/opt/stack/nova/nova/virt/ironic/driver.py", line 698, in spawn
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]     self._add_driver_fields(node, instance, image_meta, flavor)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]   File "/opt/stack/nova/nova/virt/ironic/driver.py", line 366, in _add_driver_fields
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]     retry_on_conflict=False)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]   File "/opt/stack/nova/nova/virt/ironic/client_wrapper.py", line 139, in call
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]     return self._multi_getattr(client, method)(*args, **kwargs)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]   File "/opt/stack/python-ironicclient/ironicclient/v1/node.py", line 198, in update
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]     method=http_method)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]   File "/opt/stack/python-ironicclient/ironicclient/common/base.py", line 171, in _update
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]     resp, body = self.api.json_request(method, url, body=patch)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]   File "/opt/stack/python-ironicclient/ironicclient/common/http.py", line 552, in json_request
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]     resp = self._http_request(url, method, **kwargs)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]   File "/opt/stack/python-ironicclient/ironicclient/common/http.py", line 189, in wrapper
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]     return func(self, url, method, **kwargs)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]   File "/opt/stack/python-ironicclient/ironicclient/common/http.py", line 534, in _http_request
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199]     error_json.get('debuginfo'), method, url)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] Conflict: Instance 6cde85d2-47ad-446b-9a1f-960dbcca5199 is already associated with a node, it cannot be associated with this other node c66a1035-5edf-434b-9d09-39ecc9069e02 (HTTP 409)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 6cde85d2-47ad-446b-9a1f-960dbcca5199] 

  In ir-cond.log the error is as follows:

  2016-04-27 15:26:13.183 ERROR oslo_messaging.rpc.dispatcher [req-ec2f0a30-13a8-4029-ac0b-e2852c0c67c9 None None] Exception during message handling: Instance 6cde85d2-47ad-446b-9a1f-960dbcca5199 is already associated with a node, it cannot be associated with this other node c66a1035-5edf-434b-9d09-39ecc9069e02
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, in _dispatch_and_reply
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher     incoming.message))
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 185, in _dispatch
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 127, in _do_dispatch
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 150, in inner
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher     return func(*args, **kwargs)
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/ironic/ironic/conductor/manager.py", line 228, in update_node
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher     node_obj.save()
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/ironic/ironic/objects/node.py", line 340, in save
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher     self.dbapi.update_node(self.uuid, updates)
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/ironic/ironic/db/sqlalchemy/api.py", line 399, in update_node
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher     node=node_id)
  2016-04-27 15:26:13.183 TRACE oslo_messaging.rpc.dispatcher InstanceAssociated: Instance 6cde85d2-47ad-446b-9a1f-960dbcca5199 is already associated with a node, it cannot be associated with this other node c66a1035-5edf-434b-9d09-39ecc9069e02

  What's more, when I delete the failed server from Nova, the Ironic
  node is left with an orphaned instance assignment, which can only be
  deleted with a node-update removing instance_uuid.
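
  The orphaned assignment can be cleared manually along these lines (the
  node UUID is a placeholder):

```shell
# Remove the stale instance_uuid field from the affected node:
ironic node-update $NODE_UUID remove instance_uuid
```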

  [0] http://docs.openstack.org/developer/ironic/deploy/install-guide.html?highlight=local%20boot#enabling-local-boot-with-compute-service

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1575661/+subscriptions

