yahoo-eng-team team mailing list archive
Message #21072
[Bug 1055399] Re: windows image upload on xen host results in sr scan failure
No response for almost two years. Closing.
** Changed in: glance
Status: Incomplete => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1055399
Title:
windows image upload on xen host results in sr scan failure
Status in OpenStack Image Registry and Delivery Service (Glance):
Invalid
Bug description:
Using the OpenStack Essex stable release.
Created a 20 GB Windows VHD file by installing Windows 7 from an ISO using
VirtualBox. This Windows VHD has the Xen guest utilities (Xen tools) and the
OpenStack guest agents from Mirantis installed (the basic requirements for
running a VM under the Xen hypervisor).
After that, I added the image to the Glance server with:
$ tar -cv image.vhd | gzip > image_vhd.tar.bz
$ glance add name="My VHD image" is_public=True disk_format=vhd container_format=ovf image_state=available < image_vhd.tar.bz
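(As a side note, the archive produced above is gzip-compressed even though it is named .tar.bz. Before uploading, it may be worth sanity-checking the tarball and the VHD itself; a rough sketch, assuming GNU tar and that vhd-util is available, e.g. in the XenServer dom0:)
$ # list the tarball contents; it should contain just the VHD
$ tar -tzf image_vhd.tar.bz
$ # extract the VHD and check its metadata for corruption (vhd-util ships with XenServer/blktap)
$ tar -xzf image_vhd.tar.bz image.vhd
$ vhd-util check -n image.vhd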
The image uploads to the Glance server successfully. When I try to launch
this image, my understanding is that it first gets downloaded into the Xen
host's (dom0) storage repository before it can be launched. The VHD image
does land in the Xen storage repository, but for some reason the storage
repository scan fails after the image transfer completes.
The error that I see in nova-compute (running on the domU) is:
2012-09-20 18:29:13 INFO nova.compute.manager [-] Updating host status
2012-09-20 18:29:13 DEBUG nova.virt.xenapi.host [-] Updating host stats from (pid=9241) update_status /usr/lib/python2.7/dist-packages/nova/virt/xenapi/host.py:129
2012-09-20 18:29:24 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=9241) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-09-20 18:30:14 DEBUG nova.virt.xenapi.vm_utils [req-3ca4a669-5676-4b13-b480-1aa932c4c48a 5bc95ec2661443f2a071a1a56583db0a fbd086a6dc8f40a9bc988076825b297d] xapi 'download_vhd' returned VDI of type 'os' with UUID 'b6259d09-4ce8-4677-a64d-4f09b10d9707' from (pid=9241) _fetch_image_glance_vhd /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:748
2012-09-20 18:30:14 DEBUG nova.virt.xenapi.vm_utils [req-3ca4a669-5676-4b13-b480-1aa932c4c48a 5bc95ec2661443f2a071a1a56583db0a fbd086a6dc8f40a9bc988076825b297d] Re-scanning SR OpaqueRef:b4d052cd-8a50-5fb9-94af-8ffe332ddad6 from (pid=9241) scan_sr /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1097
2012-09-20 18:30:16 ERROR nova.utils [req-3ca4a669-5676-4b13-b480-1aa932c4c48a 5bc95ec2661443f2a071a1a56583db0a fbd086a6dc8f40a9bc988076825b297d] Instance 5031c11b-02fc-4de9-9516-fad63435a784: Failed to spawn, rolling back.
2012-09-20 18:30:16 TRACE nova.utils Traceback (most recent call last):
2012-09-20 18:30:16 TRACE nova.utils File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 346, in spawn
2012-09-20 18:30:16 TRACE nova.utils vdis = create_disks_step(undo_mgr)
2012-09-20 18:30:16 TRACE nova.utils File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 138, in inner
2012-09-20 18:30:16 TRACE nova.utils rv = f(*args, **kwargs)
2012-09-20 18:30:16 TRACE nova.utils File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 265, in create_disks_step
2012-09-20 18:30:16 TRACE nova.utils vdis = self._create_disks(context, instance, image_meta)
2012-09-20 18:30:16 TRACE nova.utils File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 242, in _create_disks
2012-09-20 18:30:16 TRACE nova.utils disk_image_type)
2012-09-20 18:30:16 TRACE nova.utils File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py", line 626, in create_image
2012-09-20 18:30:16 TRACE nova.utils project_id, image_type)
2012-09-20 18:30:16 TRACE nova.utils File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py", line 684, in fetch_image
2012-09-20 18:30:16 TRACE nova.utils session, instance, image, image_type)
2012-09-20 18:30:16 TRACE nova.utils File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py", line 750, in _fetch_image_glance_vhd
2012-09-20 18:30:16 TRACE nova.utils cls.scan_sr(session, sr_ref)
2012-09-20 18:30:16 TRACE nova.utils File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py", line 1098, in scan_sr
2012-09-20 18:30:16 TRACE nova.utils session.call_xenapi('SR.scan', sr_ref)
2012-09-20 18:30:16 TRACE nova.utils File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi_conn.py", line 574, in call_xenapi
2012-09-20 18:30:16 TRACE nova.utils return tpool.execute(f, *args)
2012-09-20 18:30:16 TRACE nova.utils File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker
2012-09-20 18:30:16 TRACE nova.utils rv = meth(*args,**kwargs)
2012-09-20 18:30:16 TRACE nova.utils File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 229, in __call__
2012-09-20 18:30:16 TRACE nova.utils return self.__send(self.__name, args)
2012-09-20 18:30:16 TRACE nova.utils File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in xenapi_request
2012-09-20 18:30:16 TRACE nova.utils result = _parse_result(getattr(self, methodname)(*full_params))
2012-09-20 18:30:16 TRACE nova.utils File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 203, in _parse_result
2012-09-20 18:30:16 TRACE nova.utils raise Failure(result['ErrorDescription'])
2012-09-20 18:30:16 TRACE nova.utils Failure: ['INTERNAL_ERROR', 'SR.scan', 'OpaqueRef:b4d052cd-8a50-5fb9-94af-8ffe332ddad6', '["Failure", ["Internal_error", "SR_BACKEND_FAILURE_40: [ ; The SR scan failed [opterr=uuid=b6259d09-4ce8-4677-a64d-4f09b10d9707]; ]"]]']
2012-09-20 18:30:16 TRACE nova.utils
2012-09-20 18:30:16 ERROR nova.compute.manager [req-3ca4a669-5676-4b13-b480-1aa932c4c48a 5bc95ec2661443f2a071a1a56583db0a fbd086a6dc8f40a9bc988076825b297d] [instance: 5031c11b-02fc-4de9-9516-fad63435a784] Instance failed to spawn
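(For anyone debugging the same failure: the scan can be reproduced directly in dom0, outside of nova, and the storage-manager log usually records why it failed. A rough sketch, with <SR-UUID> as a placeholder for the storage repository's UUID and the VDI UUID taken from the opterr in the trace above:)
$ # in dom0: rerun the failing scan by hand
$ xe sr-scan uuid=<SR-UUID>
$ # the storage manager log normally names the offending VHD and the reason
$ grep -A 5 b6259d09-4ce8-4677-a64d-4f09b10d9707 /var/log/SMlog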
Once I terminate the instance from the dashboard, the instance is removed from OpenStack but not from Xen's storage repository, and the SR scan keeps reporting the same error even after the OpenStack instance has been deleted. The workaround I am using to get the storage repository working again is to go into
cd /run/sr-mount/SR-UUID/
on dom0 and delete the VHD file corresponding to the Windows image (20 GB).
After that the storage repository is back to normal. Ideally this cleanup
should happen automatically when the instance is terminated.
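(For reference, a minimal sketch of that manual cleanup in dom0, assuming a file-based SR where VHD files are named after their VDI UUID; <SR-UUID> is a placeholder, and the VDI UUID is the one from the opterr above. Be careful to remove only the orphaned VHD belonging to this image:)
$ # check whether a VDI record still exists for the stuck VHD
$ xe vdi-list uuid=b6259d09-4ce8-4677-a64d-4f09b10d9707
$ # delete the orphaned VHD file, forget the stale VDI record if present, then rescan
$ rm /run/sr-mount/<SR-UUID>/b6259d09-4ce8-4677-a64d-4f09b10d9707.vhd
$ xe vdi-forget uuid=b6259d09-4ce8-4677-a64d-4f09b10d9707
$ xe sr-scan uuid=<SR-UUID>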
To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1055399/+subscriptions