yahoo-eng-team team mailing list archive
Message #11948
[Bug 1171226] Re: VMwareVCDriver: Sparse disk copy error on spawn
** Changed in: nova/grizzly
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1171226
Title:
VMwareVCDriver: Sparse disk copy error on spawn
Status in OpenStack Compute (Nova):
Fix Released
Status in OpenStack Compute (nova) grizzly series:
Fix Released
Status in The OpenStack VMwareAPI subTeam:
Fix Committed
Bug description:
Not sure if this is a real bug or just a case of inadequate
documentation combined with bad error reporting. I get an exception
(below) when booting a VM. The exception happens after glance has
finished streaming the disk image to VC (i.e., I see the image in the
vmware_source folder in the datastore), and it prevents the VM from
actually booting.
I tried two different ways of adding the image to glance (both as
'ovf' and as 'bare'), neither of which made a difference:
glance add name="Ubuntu-ovf" disk_format=vmdk container_format=ovf \
    is_public=true vmware_adaptertype="lsiLogic" \
    vmware_ostype="ubuntuGuest" vmware_disktype="sparse" \
    < ~/ubuntu12.04-sparse.vmdk

glance add name="Ubuntu-bare" disk_format=vmdk container_format=bare \
    is_public=true vmware_adaptertype="lsiLogic" \
    vmware_ostype="ubuntuGuest" vmware_disktype="sparse" \
    < ~/ubuntu12.04-sparse.vmdk
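(For reference, roughly the same upload as the two commands above can be scripted against the Images API. The snippet below is only a sketch: it assumes the python-glanceclient v1 bindings, and GLANCE_ENDPOINT / AUTH_TOKEN are placeholders; none of this comes from the bug report itself.)

    # Sketch: upload the sparse VMDK with the same VMware properties via
    # the (assumed) python-glanceclient v1 API instead of the glance CLI.
    import os
    from glanceclient import Client

    GLANCE_ENDPOINT = 'http://127.0.0.1:9292'   # placeholder glance endpoint
    AUTH_TOKEN = 'REPLACE_WITH_TOKEN'           # placeholder keystone token

    glance = Client('1', endpoint=GLANCE_ENDPOINT, token=AUTH_TOKEN)
    with open(os.path.expanduser('~/ubuntu12.04-sparse.vmdk'), 'rb') as image_data:
        glance.images.create(
            name='Ubuntu-bare',
            disk_format='vmdk',
            container_format='bare',
            is_public=True,
            properties={'vmware_adaptertype': 'lsiLogic',
                        'vmware_ostype': 'ubuntuGuest',
                        'vmware_disktype': 'sparse'},
            data=image_data)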
In both cases, I see this exception (note: there actually seems to be
a second exception too, perhaps due to improper error handling of the
first):
2013-04-21 11:35:07 ERROR [nova.compute.manager] Error: ['Traceback (most recent call last):\n',
' File "/opt/stack/nova/nova/compute/manager.py", line 905, in _run_instance\n set_access_ip=set_access_ip)\n',
' File "/opt/stack/nova/nova/compute/manager.py", line 1165, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n',
' File "/opt/stack/nova/nova/compute/manager.py", line 1161, in _spawn\n block_device_info)\n',
' File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 176, in spawn\n block_device_info)\n',
' File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 398, in spawn\n _copy_virtual_disk()\n',
' File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 340, in _copy_virtual_disk\n self._session._wait_for_task(instance[\'uuid\'], vmdk_copy_task)\n',
' File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 558, in _wait_for_task\n ret_val = done.wait()\n',
' File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait\n return hubs.get_hub().switch()\n',
' File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch\n return self.greenlet.switch()\n',
'NovaException: The requested operation is not implemented by the server.\n']
2013-04-21 11:35:07 DEBUG [nova.openstack.common.rpc.amqp] Making synchronous call on conductor ...
2013-04-21 11:35:07 DEBUG [nova.openstack.common.rpc.amqp] MSG_ID is 2318255c5a4f4e5783cefb3cfde9e563
2013-04-21 11:35:07 DEBUG [nova.openstack.common.rpc.amqp] UNIQUE_ID is f710f7acfd774af3ba1aa91515b1fd05.
2013-04-21 11:35:10 WARNING [nova.virt.vmwareapi.driver] Task [CopyVirtualDisk_Task] (returnval){
value = "task-925"
_type = "Task"
} status: error The requested operation is not implemented by the server.
2013-04-21 11:35:10 WARNING [nova.virt.vmwareapi.driver] In vmwareapi:_poll_task, Got this error Trying to re-send() an already-triggered event.
2013-04-21 11:35:10 ERROR [nova.utils] in fixed duration looping call
Traceback (most recent call last):
  File "/opt/stack/nova/nova/utils.py", line 595, in _inner
    self.f(*self.args, **self.kw)
  File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 584, in _poll_task
    done.send_exception(excep)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 208, in send_exception
    return self.send(None, args)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 150, in send
    assert self._result is NOT_USED, 'Trying to re-send() an already-triggered event.'
AssertionError: Trying to re-send() an already-triggered event.
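(Both tracebacks involve the same eventlet Event: driver.py blocks in done.wait() while the fixed-interval poller watches the CopyVirtualDisk_Task; the poller delivers the task error via done.send_exception(), and the AssertionError above is eventlet refusing to trigger the same Event a second time. The snippet below is only a self-contained sketch of that pattern, not Nova's code; wait_for_task, poll_task and failing_task are made-up names for illustration.)

    # Minimal sketch (not Nova code) of the Event-based wait pattern seen in
    # the tracebacks: the caller blocks on an eventlet Event, a background
    # poller checks the task and delivers the result or error, and a second
    # send on the same Event raises the AssertionError shown above.
    import eventlet
    from eventlet import event

    def wait_for_task(poll_task, interval=0.1):
        done = event.Event()

        def _poller():
            while True:
                state, payload = poll_task()      # hypothetical: (state, result or error)
                if state == 'success':
                    done.send(payload)            # wakes wait() with the result
                    return
                if state == 'error':
                    done.send_exception(payload)  # wait() re-raises this exception
                    try:
                        # Sending again on the already-triggered Event is what
                        # produces "Trying to re-send() an already-triggered event."
                        done.send_exception(payload)
                    except AssertionError as exc:
                        print('re-send rejected: %s' % exc)
                    return
                eventlet.sleep(interval)

        eventlet.spawn(_poller)
        return done.wait()                        # blocks; raises on task error

    # A task that immediately reports the error from the CopyVirtualDisk log.
    def failing_task():
        return 'error', RuntimeError(
            'The requested operation is not implemented by the server.')

    try:
        wait_for_task(failing_task)
    except RuntimeError as exc:
        print('task failed: %s' % exc)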
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1171226/+subscriptions