[Bug 1410739] [NEW] Networks are not cleaned up when create fails

 

Public bug reported:

While attempting to create a VM in an environment running Neutron
networking, a failure was encountered while fetching the image from
Glance. At that point the networking had already been allocated, but
the cleanup code does not deallocate the networks. This behavior
appears to have been deliberately introduced in
https://review.openstack.org/#/c/101335 which is a fix for
https://bugs.launchpad.net/nova/+bug/1332198
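
For context, a minimal, self-contained sketch of the decision this
describes is below. It is an illustration, not nova code:
handle_build_failure and FakeDriver are hypothetical stand-ins, while
deallocate_networks_on_reschedule and the network-cleanup hook are
intended to mirror the reschedule path in nova/compute/manager.py.

class FakeDriver(object):
    """Stand-in for a nova virt driver. Most drivers answer False
    here, which is why ports survive a reschedule after the change
    in review 101335."""
    def deallocate_networks_on_reschedule(self, instance):
        return False


def handle_build_failure(driver, instance, cleanup_networks, reschedule):
    # The guest and its local files are torn down elsewhere; networks
    # are only deallocated if the virt driver opts in.
    if driver.deallocate_networks_on_reschedule(instance):
        cleanup_networks(instance)
    # Otherwise the Neutron ports are kept, still carrying the
    # binding:host_id of the failed host, and the instance goes back
    # to the scheduler for another build attempt.
    reschedule(instance)


handle_build_failure(
    FakeDriver(),
    instance={'uuid': '25263c4a-ade3-41f1-9c38-2a8812cdbbc8'},
    cleanup_networks=lambda inst: None,
    reschedule=lambda inst: None)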

The result of the failure was that the instance was rescheduled to
another host. When the instance is then built on the second host, the
ports are still bound to the first host, so they cannot be bound to
the new one.
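
To illustrate the conflict: the ports' binding:host_id still names the
first host, and before the second host could use them that attribute
would have to be re-pointed (an admin-only port update). A hedged
sketch with python-neutronclient follows; the credentials, endpoint
and target host are placeholders, not values from this environment:

from neutronclient.v2_0 import client

# Placeholder admin credentials and Keystone endpoint.
neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://127.0.0.1:5000/v2.0')

# Re-point the stale binding at the host the instance was
# rescheduled to (hypothetical host name).
neutron.update_port('b4bbd25a-419e-4d0b-87f1-1786b004b441',
                    {'port': {'binding:host_id': 'novacompute6'}})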

The original error from Glance was:

2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8] Traceback (most recent call last):
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 2231, in _build_resources
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     yield resources
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 2101, in _build_and_run_instance
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     block_device_info=block_device_info)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2616, in spawn
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     admin_pass=admin_password)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3018, in _create_image
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     project_id=instance['project_id'])
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 208, in cache
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     *args, **kwargs)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 451, in create_image
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     prepare_template(target=base, max_size=size, *args, **kwargs)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 272, in inner
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     return f(*args, **kwargs)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 198, in fetch_func_sync
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     fetch_func(target=target, *args, **kwargs)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/utils.py", line 452, in fetch_image
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     max_size=max_size)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/images.py", line 79, in fetch_to_raw
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     max_size=max_size)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/virt/images.py", line 73, in fetch
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     IMAGE_API.download(context, image_href, dest_path=path)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/image/api.py", line 178, in download
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     dst_path=dest_path)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/image/glance.py", line 359, in download
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     _reraise_translated_image_exception(image_id)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/image/glance.py", line 357, in download
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     image_chunks = self._client.call(context, 1, 'data', image_id)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/image/glance.py", line 232, in call
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     return getattr(client.images, method)(*args, **kwargs)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/glanceclient/v1/images.py", line 142, in data
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     % urlparse.quote(str(image_id)))
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/glanceclient/common/http.py", line 253, in get
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     return self._request('GET', url, **kwargs)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/glanceclient/common/http.py", line 221, in _request
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8]     raise exc.from_response(resp, resp.content)
2015-01-08 22:39:14.251 14855 TRACE nova.compute.manager [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8] HTTPInternalServerError: HTTPInternalServerError (HTTP 500)


The nova-compute log showing the cleanup is as follows (some lines relating to messaging, etc. have been removed):

2015-01-08 22:39:15 14855 AUDIT nova.compute.manager [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8] Terminating instance

2015-01-08 22:39:15.361 14855 WARNING nova.virt.libvirt.driver [-] [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8] During wait destroy, instance disappeared.

2015-01-08 22:39:15 14855 DEBUG nova.objects.instance [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] Lazy-loading `system_metadata' on Instance uuid 25263c4a-ade3-41f1-9c38-2a8812cdbbc8 obj_load_attr /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/objects/instance.py:572

2015-01-08 22:39:15 14855 DEBUG nova.openstack.common.processutils [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] Running cmd (subprocess): mv /mnt/state/var/lib/nova/instances/25263c4a-ade3-41f1-9c38-2a8812cdbbc8 /mnt/state/var/lib/nova/instances/25263c4a-ade3-41f1-9c38-2a8812cdbbc8_del execute /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/processutils.py:161
2015-01-08 22:39:15 14855 DEBUG nova.openstack.common.processutils [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] Result was 0 execute /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/processutils.py:195
2015-01-08 22:39:15 14855 INFO nova.virt.libvirt.driver [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8] Deleting instance files /mnt/state/var/lib/nova/instances/25263c4a-ade3-41f1-9c38-2a8812cdbbc8_del
2015-01-08 22:39:15 14855 INFO nova.virt.libvirt.driver [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8] Deletion of /mnt/state/var/lib/nova/instances/25263c4a-ade3-41f1-9c38-2a8812cdbbc8_del complete


2015-01-08 22:39:15 14855 DEBUG nova.compute.claims [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8] Aborting claim: [Claim: 512 MB memory, 2 GB disk] abort /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/claims.py:127
2015-01-08 22:39:15 14855 DEBUG nova.openstack.common.lockutils [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] Created new semaphore "compute_resources" internal_lock /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:206
2015-01-08 22:39:15 14855 DEBUG nova.openstack.common.lockutils [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] Acquired semaphore "compute_resources" lock /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:229
2015-01-08 22:39:15 14855 DEBUG nova.openstack.common.lockutils [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] Got semaphore / lock "abort_instance_claim" inner /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:271

2015-01-08 22:39:15 14855 INFO nova.scheduler.client.report [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] Compute_service record updated for ('overcloud-ce-novacompute5-novacompute5-73kkw7k6ose3', 'overcloud-ce-novacompute5-novacompute5-73kkw7k6ose3.novalocal')
2015-01-08 22:39:15 14855 DEBUG nova.openstack.common.lockutils [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] Releasing semaphore "compute_resources" lock /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:238
2015-01-08 22:39:15 14855 DEBUG nova.openstack.common.lockutils [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] Semaphore / lock released "abort_instance_claim" inner /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:275
2015-01-08 22:39:15 14855 DEBUG nova.compute.utils [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8] HTTPInternalServerError (HTTP 500) notify_about_instance_usage /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/utils.py:307

2015-01-08 22:39:15 14855 DEBUG nova.compute.manager [req-fe4615a4-8aa0-4bc5-a377-6530da5a7468 d3c81059cc6e46a7b53c33c0f1d1bc92 76aac8b81d664000a08cbdec3fa52e5a] [instance: 25263c4a-ade3-41f1-9c38-2a8812cdbbc8] Build of instance 25263c4a-ade3-41f1-9c38-2a8812cdbbc8 was re-scheduled: HTTPInternalServerError (HTTP 500) do_build_and_run_instance /opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py:2016

After the cleanup, the port still shows as bound to the first host:

root@my-server:# neutron port-show b4bbd25a-419e-4d0b-87f1-1786b004b441
+-----------------------+-------------------------------------------------------------------------------------+
| Field                 | Value                                                                               |
+-----------------------+-------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                |
| allowed_address_pairs |                                                                                     |
| binding:host_id       | novacompute5-73kkw7k6ose3                                                           |
| binding:profile       | {}                                                                                  |
| binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                      |
| binding:vif_type      | ovs                                                                                 |
| binding:vnic_type     | normal                                                                              |
| device_id             | 25263c4a-ade3-41f1-9c38-2a8812cdbbc8                                                |
| device_owner          | compute:None                                                                        |
| extra_dhcp_opts       |                                                                                     |
| fixed_ips             | {"subnet_id": "f88386ff-3a5d-41be-b84b-3076100cdcf0", "ip_address": "71.181.229.3"} |
| id                    | b4bbd25a-419e-4d0b-87f1-1786b004b441                                                |
| mac_address           | fa:16:3e:9a:33:d9                                                                   |
| name                  |                                                                                     |
| network_id            | 1ec2a8dd-19db-4ac9-926f-79d501546915                                                |
| security_groups       | 973724f5-df43-4b29-a6ef-ef0619a05f49                                                |
| status                | BUILD                                                                               |
| tenant_id             | 76aac8b81d664000a08cbdec3fa52e5a                                                    |
+-----------------------+-------------------------------------------------------------------------------------+
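
Until the cleanup path handles this, one manual workaround is to
remove the leaked port once the build has failed for good. A hedged
sketch, again with python-neutronclient and placeholder credentials;
list_ports and delete_port are standard v2 client calls:

from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://127.0.0.1:5000/v2.0')

# Find any ports still attached to the failed instance and delete
# them so their fixed IPs and MAC addresses are released.
ports = neutron.list_ports(
    device_id='25263c4a-ade3-41f1-9c38-2a8812cdbbc8')['ports']
for port in ports:
    neutron.delete_port(port['id'])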

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1410739


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1410739/+subscriptions

