openstack team mailing list archive

Re: Error while resizing. Nova compute on XenServer 5.6

 

Matt,

I followed your advice and reinstalled nova-compute on the domU nodes.
This solved the problem.
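For anyone hitting the same trace: "reinstalled" here means something along
the lines of "apt-get install --reinstall nova-compute" plus a restart of the
service on each domU; adjust for however nova was installed on your nodes.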

Thanks a lot,
Giuseppe




2011/9/21 Giuseppe Civitella <giuseppe.civitella@xxxxxxxxx>:
> Matt,
>
> I did check the xenapi connection to dom0 as described here
> http://wiki.openstack.org/XenServerDevelopment -> Test XenAPI
> and it worked (the check I ran is pasted below).
> Anyway, I'm able to deploy VMs directly on the node that receives the VHD.
> That said, is there something else I should check?
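>
> For reference, the check is along the lines of the wiki's "Test XenAPI"
> snippet; the dom0 address and root password below are placeholders:
>
> import XenAPI
>
> # placeholders: the dom0 management address and its real root password
> session = XenAPI.Session('http://<dom0-address>')
> session.xenapi.login_with_password('root', '<password>')
> # listing the VMs is enough to prove that domU -> dom0 XenAPI calls work
> print session.xenapi.VM.get_all()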
>
> Thanks
> Giuseppe
>
>
> On Wednesday 21 September 2011 00:05:19 Matt Dietz wrote:
>> Giuseppe,
>>
>>       I'm not sure that error is what's actually causing the problem, but it
>> seems indicative of a larger issue: namely, that the other compute node (the
>> one that received the VHD in the migration) perhaps cannot talk to the dom0?
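>>
>> (If it helps narrow that down, the first things I'd double-check on that
>> node are the xenapi flags in its nova.conf, along the lines of:
>>
>>   --connection_type=xenapi
>>   --xenapi_connection_url=https://<dom0 of the destination host>
>>   --xenapi_connection_username=root
>>   --xenapi_connection_password=<password>
>>
>> with the values here being placeholders, obviously.)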
>>
>> Matt
>>
>> On 9/20/11 11:29 AM, "Giuseppe Civitella" <giuseppe.civitella@xxxxxxxxx>
>> wrote:
>> >Hi all,
>> >
>> >I'm trying to resize an instance between two XenServers, each managed
>> >by a domU running Natty and the current Diablo milestone.
>> >When I schedule the resize, the instance gets a snapshot and its VHD
>> >file gets copied by rsync to the other host. At this point the task
>> >fails.
>> >At this time there is a record in the migrations table of the cloud
>> >controller's MySQL database.
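>> >(e.g. a plain "select * from migrations order by id desc limit 1;" on the
>> >nova database is enough to see the new row.)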
>> >In the source host's nova-compute log I can see:
>> >
>> >2011-09-20 18:11:34,446 DEBUG nova.rpc [-] received
>> >{u'_context_roles': [], u'_context_request_id':
>> >u'da2ac014-1600-4fa4-9be7-4c53add5caa4', u'_context_read_deleted':
>> >False, u'args': {u'instance_id':
>> >u'4548a3ed-2989-4fe7-bca9-bdf55bc9635a', u'migration_id': 12},
>> >u'_context_auth_token': None, u'_context_is_admin': True,
>> >u'_context_project_id': u'gcivitella_proj', u'_context_timestamp':
>> >u'2011-09-20T16:11:33.724787', u'_context_user_id':
>> >u'gcivitella_name', u'method': u'resize_instance',
>> >u'_context_remote_address': u'127.0.0.1'} from (pid=20478)
>> >process_data /usr/lib/pymodules/python2.7/nova/rpc/amqp.py:200
>> >2011-09-20 18:11:34,447 DEBUG nova.rpc [-] unpacked context:
>> >{'user_id': u'gcivitella_name', 'roles': [], 'timestamp':
>> >u'2011-09-20T16:11:33.724787', 'auth_token': None, 'msg_id': None,
>> >'remote_address': u'127.0.0.1', 'is_admin': True, 'request_id':
>> >u'da2ac014-1600-4fa4-9be7-4c53add5caa4', 'project_id':
>> >u'gcivitella_proj', 'read_deleted': False} from (pid=20478)
>> >_unpack_context /usr/lib/pymodules/python2.7/nova/rpc/amqp.py:432
>> >2011-09-20 18:11:34,447 INFO nova.compute.manager
>> >[da2ac014-1600-4fa4-9be7-4c53add5caa4 gcivitella_name gcivitella_proj]
>> >check_instance_lock: decorating: |<function resize_instance at
>> >0x362d758>|
>> >2011-09-20 18:11:34,448 INFO nova.compute.manager
>> >[da2ac014-1600-4fa4-9be7-4c53add5caa4 gcivitella_name gcivitella_proj]
>> >check_instance_lock: arguments: |<nova.compute.manager.ComputeManager
>> >object at 0x2f56810>| |<nova.rpc.amqp.RpcContext object at 0x4b46e50>|
>> >
>> >|4548a3ed-2989-4fe7-bca9-bdf55bc9635a|
>> >
>> >2011-09-20 18:11:34,448 DEBUG nova.compute.manager
>> >[da2ac014-1600-4fa4-9be7-4c53add5caa4 gcivitella_name gcivitella_proj]
>> >instance 4548a3ed-2989-4fe7-bca9-bdf55bc9635a: getting locked state
>> >from (pid=20478) get_lock
>> >/usr/lib/pymodules/python2.7/nova/compute/manager.py:1121
>> >2011-09-20 18:11:34,522 INFO nova.compute.manager
>> >[da2ac014-1600-4fa4-9be7-4c53add5caa4 gcivitella_name gcivitella_proj]
>> >check_instance_lock: locked: |False|
>> >2011-09-20 18:11:34,522 INFO nova.compute.manager
>> >[da2ac014-1600-4fa4-9be7-4c53add5caa4 gcivitella_name gcivitella_proj]
>> >check_instance_lock: admin: |True|
>> >2011-09-20 18:11:34,523 INFO nova.compute.manager
>> >[da2ac014-1600-4fa4-9be7-4c53add5caa4 gcivitella_name gcivitella_proj]
>> >check_instance_lock: executing: |<function resize_instance at
>> >0x362d758>|
>> >2011-09-20 18:11:34,716 DEBUG nova [-] Starting snapshot for VM
>> ><nova.db.sqlalchemy.models.Instance object at 0x496eb10> from
>> >(pid=20478) _get_snapshot
>> >/usr/lib/pymodules/python2.7/nova/virt/xenapi/vmops.py:515
>> >2011-09-20 18:11:34,723 DEBUG nova.virt.xenapi.vm_utils [-]
>> >Snapshotting VM OpaqueRef:defbf1c9-f3e6-3822-f110-7b61a35b7c18 with
>> >label 'instance-00000048-snapshot'... from (pid=20478) create_snapshot
>> >/usr/lib/pymodules/python2.7/nova/virt/xenapi/vm_utils.py:338
>> >2011-09-20 18:11:36,337 INFO nova.virt.xenapi [-] Task
>> >[Async.VM.snapshot] OpaqueRef:7ca66e5a-261e-5e7b-441e-dca99ab2a1a8
>> >status: success
>> ><value>OpaqueRef:05b68453-bee2-9c9b-db33-2bebe97cbdf7</value>
>> >2011-09-20 18:11:36,361 DEBUG nova.virt.xenapi.vm_utils [-] Created
>> >snapshot OpaqueRef:05b68453-bee2-9c9b-db33-2bebe97cbdf7 from VM
>> >OpaqueRef:defbf1c9-f3e6-3822-f110-7b61a35b7c18. from (pid=20478)
>> >create_snapshot
>> >/usr/lib/pymodules/python2.7/nova/virt/xenapi/vm_utils.py:352
>> >2011-09-20 18:11:36,361 DEBUG nova.virt.xenapi.vm_utils [-]
>> >Re-scanning SR OpaqueRef:d02abfe8-c781-f49f-81b2-6b981ed98097 from
>> >(pid=20478) scan_sr
>> >/usr/lib/pymodules/python2.7/nova/virt/xenapi/vm_utils.py:804
>> >2011-09-20 18:11:36,916 INFO nova.virt.xenapi [-] Task [Async.SR.scan]
>> >OpaqueRef:9514755f-8b94-dea1-5c0b-689bbe282804 status: success
>> >2011-09-20 18:11:36,940 DEBUG nova.virt.xenapi.vm_utils [-] VHD
>> >9d2ec65d-aeb4-4e6c-811f-78111d5835fc has parent
>> >OpaqueRef:1f83df6e-d9aa-90cb-e2d9-b3c672c7ec4d from (pid=20478)
>> >get_vhd_parent
>> >/usr/lib/pymodules/python2.7/nova/virt/xenapi/vm_utils.py:840
>> >
>> >and then the error that stops the task:
>> >
>> >2011-09-20 18:12:28,311 DEBUG nova.manager [-] Notifying Schedulers of
>> >capabilities ... from (pid=20478) periodic_tasks
>> >/usr/lib/pymodules/python2.7/nova/manager.py:111
>> >2011-09-20 18:12:28,312 DEBUG nova.rpc [-] Making asynchronous fanout
>> >cast... from (pid=20478) fanout_cast
>> >/usr/lib/pymodules/python2.7/nova/rpc/amqp.py:545
>> >2011-09-20 18:12:28,312 INFO nova.rpc [-] Creating "scheduler_fanout"
>> >fanout exchange
>> >2011-09-20 18:12:28,737 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS)
>> >xenserver vm state -> |Running|
>> >2011-09-20 18:12:28,737 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS)
>> >xenapi power_state -> |1|
>> >2011-09-20 18:12:28,811 ERROR nova [-] in looping call
>> >(nova): TRACE: Traceback (most recent call last):
>> >(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/utils.py",
>> >line 487, in _inner
>> >(nova): TRACE:     self.f(*self.args, **self.kw)
>> >(nova): TRACE:   File
>> >"/usr/lib/pymodules/python2.7/nova/virt/xenapi_conn.py", line 408, in
>> >_poll_task
>> >(nova): TRACE:     name = self._session.xenapi.task.get_name_label(task)
>> >(nova): TRACE:   File
>> >"/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 229, in
>> >__call__
>> >(nova): TRACE:     return self.__send(self.__name, args)
>> >(nova): TRACE:   File
>> >"/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in
>> >xenapi_request
>> >(nova): TRACE:     result = _parse_result(getattr(self,
>> >methodname)(*full_params))
>> >(nova): TRACE:   File "/usr/lib/python2.7/xmlrpclib.py", line 1224, in
>> >__call__
>> >(nova): TRACE:     return self.__send(self.__name, args)
>> >(nova): TRACE:   File "/usr/lib/python2.7/xmlrpclib.py", line 1575, in
>> >__request
>> >(nova): TRACE:     verbose=self.__verbose
>> >(nova): TRACE:   File "/usr/lib/python2.7/xmlrpclib.py", line 1264, in
>> >request
>> >(nova): TRACE:     return self.single_request(host, handler,
>> >request_body, verbose)
>> >(nova): TRACE:   File "/usr/lib/python2.7/xmlrpclib.py", line 1289, in
>> >single_request
>> >(nova): TRACE:     self.send_request(h, handler, request_body)
>> >(nova): TRACE:   File "/usr/lib/python2.7/xmlrpclib.py", line 1391, in
>> >send_request
>> >(nova): TRACE:     connection.putrequest("POST", handler,
>> >skip_accept_encoding=True)
>> >(nova): TRACE:   File "/usr/lib/python2.7/httplib.py", line 853, in
>> >putrequest
>> >(nova): TRACE:     raise CannotSendRequest()
>> >(nova): TRACE: CannotSendRequest
>> >(nova): TRACE:
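>> >
>> >For what it's worth, CannotSendRequest is httplib refusing to start a new
>> >request on a connection that is still waiting for the response to the
>> >previous one. A tiny example, unrelated to nova, that triggers the same
>> >exception:
>> >
>> >import httplib
>> ># assumes some HTTP server is reachable at this address
>> >conn = httplib.HTTPConnection('example.com')
>> >conn.request('GET', '/')   # the response is never read here...
>> >conn.request('GET', '/')   # ...so this raises httplib.CannotSendRequest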
>> >
>> >So this is where I'm stuck at the moment.
>> >Any idea about how to debug this error?
>> >
>> >Thanks a lot
>> >Giuseppe
>> >
>>
>

