Re: NovaCompute on XenServer6

Hi all,

I did some further testing and found something that works for me as a
workaround.
Since I was getting an error like this in xensource.log on XS6:
[20111003T08:57:36.775Z|debug|xe1|398170 inet-RPC|VDI.resize_online
R:3fa63f98fab0|audit] VDI.resize_online: VDI =
'2e79ace3-5cb3-418c-9aad-58faf218c09e'; size = 42949672960
I had a look at the XenServer Administrator Guide
(http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#cli-xe-commands_vdi)
and found vdi-resize to be the only documented way to resize a VDI.
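
For reference, the CLI equivalent seems to be something like this, using
the VDI uuid and size from the log above (42949672960 bytes = 40GiB):

    xe vdi-resize uuid=2e79ace3-5cb3-418c-9aad-58faf218c09e disk-size=40GiB
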
At row 625 of nova/virt/xenapi/vmops.py I changed:

    self._session.call_xenapi('VDI.resize_online', vdi_ref,

to:

    self._session.call_xenapi('VDI.resize', vdi_ref,
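
If it's useful, the change could also be written as a fallback so it
keeps working on hosts that still support the online call. Just a
sketch (untested; it assumes call_xenapi surfaces XenAPI.Failure like
the rest of the driver, and that new_disk_size is the target size in
bytes):

    import XenAPI

    def _resize_vdi(session, vdi_ref, new_disk_size):
        # Try the online resize first; the XS6 SR rejects it with
        # SR_OPERATION_NOT_SUPPORTED, in which case fall back to the
        # offline VDI.resize (the VDI must not be attached to a
        # running VM at that point).
        try:
            session.call_xenapi('VDI.resize_online', vdi_ref,
                                str(new_disk_size))
        except XenAPI.Failure as exc:
            if exc.details and exc.details[0] == 'SR_OPERATION_NOT_SUPPORTED':
                session.call_xenapi('VDI.resize', vdi_ref,
                                    str(new_disk_size))
            else:
                raise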

Now I'm able to resize a VM on XS6 but, since I'm not an XS expert,
I'd appreciate it if someone could confirm that the second xenapi call
is a valid solution.
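
By the way, the storage drivers' capabilities can be listed with:

    xe sm-list params=type,capabilities

I'd guess the driver behind my SR advertises VDI_RESIZE but not
VDI_RESIZE_ONLINE on XS6, which would explain the exception.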

Thanks a lot
Giuseppe



2011/9/30 Giuseppe Civitella <giuseppe.civitella@xxxxxxxxx>:
> Hi Paul,
>
> I just reinstalled the servers and did some quick testing.
> Image deployment did not show any problems: I deployed a win2k8 and a
> CentOS 5.7 image (they still have xentools 5.6 installed).
> The new VMs get placed on the right VLAN by Open vSwitch.
> Unfortunately I ran into a problem at my first resize, while trying to
> resize the CentOS VM. The whole snapshot, rsync and shutdown-source
> process worked as I expected.
> At the moment of the disk resize on the destination host I got an
> SR_OPERATION_NOT_SUPPORTED exception.
> On destination host's side I got the following lines in nova-compute.log:
>
> 2011-09-30 12:15:28,647 DEBUG nova.virt.xenapi.vm_utils [-] Detected
> DISK_VHD format for image 58, instance 29 from (pid=7268)
> log_disk_format /root/nova-2011.3/nova/virt/xenapi/vm_utils.py:627
> 2011-09-30 12:15:28,712 DEBUG nova.virt.xenapi.vm_utils [-] Created VM
> instance-0000001d... from (pid=7268) create_vm
> /root/nova-2011.3/nova/virt/xenapi/vm_utils.py:188
> 2011-09-30 12:15:28,770 DEBUG nova.virt.xenapi.vm_utils [-] Created VM
> instance-0000001d as OpaqueRef:b2d1340a-a766-2373-0c46-abdbbaa506f1.
> from (pid=7268) create_vm
> /root/nova-2011.3/nova/virt/xenapi/vm_utils.py:191
> 2011-09-30 12:15:28,771 DEBUG nova.virt.xenapi.vm_utils [-] Creating
> VBD for VM OpaqueRef:b2d1340a-a766-2373-0c46-abdbbaa506f1, VDI
> OpaqueRef:1d6bd89f-8f18-0301-0610-f943dfbaf310 ...  from (pid=7268)
> create_vbd /root/nova-2011.3/nova/virt/xenapi/vm_utils.py:223
> 2011-09-30 12:15:28,857 DEBUG nova.virt.xenapi.vm_utils [-] Created
> VBD OpaqueRef:93b43f3c-3378-aa18-6870-f20c6a985b8f for VM
> OpaqueRef:b2d1340a-a766-2373-0c46-abdbbaa506f1, VDI
> OpaqueRef:1d6bd89f-8f18-0301-0610-f943dfbaf310. from (pid=7268)
> create_vbd /root/nova-2011.3/nova/virt/xenapi/vm_utils.py:226
> 2011-09-30 12:15:28,857 DEBUG nova [-] creating vif(s) for vm:
> |OpaqueRef:b2d1340a-a766-2373-0c46-abdbbaa506f1| from (pid=7268)
> create_vifs /root/nova-2011.3/nova/virt/xenapi/vmops.py:1128
> 2011-09-30 12:15:29,071 DEBUG nova.virt.xenapi.vmops [-] Creating VIF
> for VM OpaqueRef:b2d1340a-a766-2373-0c46-abdbbaa506f1, network
> OpaqueRef:e116be7e-cb19-73a1-a0c9-46cc28f03ee6. from (pid=7268)
> create_vifs /root/nova-2011.3/nova/virt/xenapi/vmops.py:1138
> 2011-09-30 12:15:29,169 DEBUG nova.virt.xenapi.vmops [-] Created VIF
> OpaqueRef:261bd276-beaf-83f9-037a-bdcbb152b9b8 for VM
> OpaqueRef:b2d1340a-a766-2373-0c46-abdbbaa506f1, network
> OpaqueRef:e116be7e-cb19-73a1-a0c9-46cc28f03ee6. from (pid=7268)
> create_vifs /root/nova-2011.3/nova/virt/xenapi/vmops.py:1141
> 2011-09-30 12:15:29,241 DEBUG nova [-] injecting network info to xs
> for vm: |OpaqueRef:b2d1340a-a766-2373-0c46-abdbbaa506f1| from
> (pid=7268) inject_network_info
> /root/nova-2011.3/nova/virt/xenapi/vmops.py:1110
> 2011-09-30 12:15:30,389 INFO nova.virt.xenapi [-] Task
> [Async.host.call_plugin]
> OpaqueRef:b5c054fc-5c6d-ed89-d732-1d6d2238a443 status: success
> <value>{"should_create_bridge": true, "dns": [],
> "vif_uuid": "5e485511-1a43-4958-b3ca-a83443788d48", "label":
> "gcivitella_proj_net", "broadcast": "10.12.2.255", "ips": [{"ip":
> "10.12.2.8", "netmask": "255.255.255.0", "enabled": "1"}], "mac":
> "02:16:3e:06:41:c1", "should_create_vlan": true,
> "dhcp_server": "10.12.2.1", "gateway": "10.12.2.1"}</value>
> 2011-09-30 12:15:30,391 DEBUG nova [-] injecting hostname to xs for
> vm: |OpaqueRef:b2d1340a-a766-2373-0c46-abdbbaa506f1| from (pid=7268)
> inject_hostname /root/nova-2011.3/nova/virt/xenapi/vmops.py:1164
> 2011-09-30 12:15:30,454 DEBUG nova.virt.xenapi.vmops [-] Resizing VDI
> a230026a-6791-47db-b5be-4bd85bff9b14 for instanceinstance-0000001d.
> Expanding to 21 GB from (pid=7268) resize_instance
> /root/nova-2011.3/nova/virt/xenapi/vmops.py:622
> 2011-09-30 12:15:30,660 ERROR nova.exception [-] Uncaught exception
> (nova.exception): TRACE: None
> (nova.exception): TRACE:
> 2011-09-30 12:15:30,660 ERROR nova.rpc [-] Exception during message handling
> (nova.rpc): TRACE: Traceback (most recent call last):
> (nova.rpc): TRACE:   File "/root/nova-2011.3/nova/rpc/impl_kombu.py",
> line 620, in _process_data
> (nova.rpc): TRACE:     rval = node_func(context=ctxt, **node_args)
> (nova.rpc): TRACE:   File "/root/nova-2011.3/nova/exception.py", line
> 129, in wrapped
> (nova.rpc): TRACE:     raise Error(str(e))
> (nova.rpc): TRACE: Error: ['SR_OPERATION_NOT_SUPPORTED',
> 'OpaqueRef:7a1fcca3-d9b5-914d-60f2-8d3ea370433d']
> (nova.rpc): TRACE:
>
>
> In xenserver's xensource.log I can see:
>
> [20110930T10:15:29.900Z|debug|xe1|85929
> inet-RPC|host.compute_free_memory R:901dddbfe852|audit]
> Host.compute_free_memory: host = 'b4e0f6c9-7494-44a7-98b1-b2c3f7aeb1cc
> (xe1)'
> [20110930T10:15:30.031Z|debug|xe1|85931 inet-RPC|VM.create
> R:ff960d9aa286|audit] VM.create: name_label = 'instance-0000001d'
> name_description = ''
> [20110930T10:15:30.133Z|debug|xe1|85932 inet-RPC|VBD.create
> R:4b5bf93b6afc|audit] VBD.create: VM =
> '05ab9a68-5feb-52f3-6150-3b120438e50e (instance-0000001d)'; VDI =
> 'a230026a-6791-47db-b5be-4bd85bff9b14'
> [20110930T10:15:30.135Z|debug|xe1|85932 inet-RPC|VBD.create
> R:4b5bf93b6afc|xapi] VBD.create (device = 0; uuid =
> ec318785-3525-4463-7133-d34092b72f55; ref =
> OpaqueRef:93b43f3c-3378-aa18-6870-f20c6a985b8f)
> [20110930T10:15:30.436Z|debug|xe1|85937 inet-RPC|VIF.create
> R:0a9e51e31e93|audit] VIF.create: VM =
> '05ab9a68-5feb-52f3-6150-3b120438e50e (instance-0000001d)'; network =
> 'b65b9615-b1d1-bdea-b0a2-6ae7a5fc95d9'
> [20110930T10:15:30.437Z|debug|xe1|85937 inet-RPC|VIF.create
> R:0a9e51e31e93|xapi] VIF.create running
> [20110930T10:15:30.440Z|debug|xe1|85937 inet-RPC|VIF.create
> R:0a9e51e31e93|xapi] Found mac_seed on VM: supplied MAC parameter =
> '02:16:3e:06:41:c1'
> [20110930T10:15:30.456Z|debug|xe1|85937 inet-RPC|VIF.create
> R:0a9e51e31e93|xapi] VIF
> ref='OpaqueRef:261bd276-beaf-83f9-037a-bdcbb152b9b8' created (VM =
> 'OpaqueRef:b2d1340a-a766-2373-0c46-abdbbaa506f1'; MAC address =
> '02:16:3e:06:41:c1')
> [20110930T10:15:30.690Z| info|xe1|85941
> inet-RPC|dispatch:VM.remove_from_xenstore_data
> D:cc06dcdd3697|api_effect] VM.remove_from_xenstore_data
> [20110930T10:15:30.745Z| info|xe1|85942
> inet-RPC|dispatch:VM.add_to_xenstore_data D:47b320eba5b3|api_effect]
> VM.add_to_xenstore_data
> [20110930T10:15:30.900Z| info|xe1|85946|Async.host.call_plugin
> R:b5c054fc5c6d|dispatcher] spawning a new thread to handle the current
> task (trackid=a1b71470b94e5aa1592f85a58162baab)
> [20110930T10:15:30.901Z|debug|xe1|85946|Async.host.call_plugin
> R:b5c054fc5c6d|audit] Host.call_plugin host =
> 'b4e0f6c9-7494-44a7-98b1-b2c3f7aeb1cc (xe1)'; plugin = 'xenstore.py';
> fn = 'write_record'; args = [ path: vm-data/networking/02163e0641c1;
> dom_id: -1; value: {"should_create_bridge": true, "dns": [],
> "vif_uuid": "5e485511-1a43-4958-b3ca-a83443788d48", "label":
> "gcivitella_proj_net", "broadcast": "10.12.2.255", "ips": [{"ip":
> "10.12.2.8", "netmask": "255.255.255.0", "enabled": "1"}], "mac":
> "02:16:3e:06:41:c1", "should_create_vlan": true, "dhcp_server":
> "10.12.2.1", "gateway": "10.12.2.1"} ]
> [20110930T10:15:31.745Z| info|xe1|85952
> inet-RPC|dispatch:VM.add_to_xenstore_data D:643662c2c578|api_effect]
> VM.add_to_xenstore_data
> [20110930T10:15:31.859Z|debug|xe1|85954 inet-RPC|VDI.resize_online
> R:9267b132d2e2|audit] VDI.resize_online: VDI =
> 'a230026a-6791-47db-b5be-4bd85bff9b14'; size = 22548578304
> [20110930T10:15:31.861Z|debug|xe1|85954 inet-RPC|VDI.resize_online
> R:9267b132d2e2|xapi] Marking SR for VDI.resize_online
> (task=OpaqueRef:9267b132-d2e2-8d8e-6307-807b2f242059)
> [20110930T10:15:31.869Z|debug|xe1|85954 inet-RPC|VDI.resize_online
> R:9267b132d2e2|backtrace] Raised at xapi_vdi.ml:128.26-57 ->
> message_forwarding.ml:2870.4-56 -> message_forwarding.ml:281.6-9
> [20110930T10:15:31.878Z|debug|xe1|85954 inet-RPC|VDI.resize_online
> R:9267b132d2e2|xapi] Caught exception while
> SR_OPERATION_NOT_SUPPORTED: [
> OpaqueRef:7a1fcca3-d9b5-914d-60f2-8d3ea370433d ] in message forwarder:
> marking VDI for VDI.resize_online
> [20110930T10:15:31.878Z|debug|xe1|85954 inet-RPC|VDI.resize_online
> R:9267b132d2e2|xapi] Unmarking SR after VDI.resize_online
> (task=OpaqueRef:9267b132-d2e2-8d8e-6307-807b2f242059)
> [20110930T10:15:31.890Z|debug|xe1|85954 inet-RPC|VDI.resize_online
> R:9267b132d2e2|backtrace] Raised at message_forwarding.ml:2884.12-13
> -> threadext.ml:20.20-24 -> threadext.ml:20.62-65 ->
> message_forwarding.ml:149.16-22 -> message_forwarding.ml:2876.6-362 ->
> rbac.ml:229.16-23
> [20110930T10:15:31.890Z|debug|xe1|85954 inet-RPC|VDI.resize_online
> R:9267b132d2e2|backtrace] Raised at rbac.ml:238.10-15 ->
> server_helpers.ml:78.11-41
> [20110930T10:15:31.890Z|debug|xe1|85954 inet-RPC|VDI.resize_online
> R:9267b132d2e2|dispatcher] Server_helpers.exec exception_handler: Got
> exception SR_OPERATION_NOT_SUPPORTED: [
> OpaqueRef:7a1fcca3-d9b5-914d-60f2-8d3ea370433d ]
> [20110930T10:15:31.890Z|debug|xe1|85954 inet-RPC|VDI.resize_online
> R:9267b132d2e2|dispatcher] Raised at string.ml:150.25-34 ->
> stringext.ml:108.13-29
> [20110930T10:15:31.890Z|debug|xe1|85954 inet-RPC|VDI.resize_online
> R:9267b132d2e2|backtrace] Raised at string.ml:150.25-34 ->
> stringext.ml:108.13-29
> [20110930T10:15:31.897Z|debug|xe1|85954 inet-RPC|VDI.resize_online
> R:9267b132d2e2|xapi] Raised at server_helpers.ml:93.14-15 ->
> pervasiveext.ml:22.2-9
> [20110930T10:15:31.900Z|debug|xe1|85954 inet-RPC|VDI.resize_online
> R:9267b132d2e2|xapi] Raised at pervasiveext.ml:26.22-25 ->
> pervasiveext.ml:22.2-9
> [20110930T10:15:31.901Z|debug|xe1|85954
> inet-RPC|dispatch:VDI.resize_online D:25d29d5414fb|xapi] Raised at
> pervasiveext.ml:26.22-25 -> pervasiveext.ml:22.2-9
> [20110930T10:15:31.901Z|debug|xe1|85954
> inet-RPC|dispatch:VDI.resize_online D:25d29d5414fb|backtrace] Raised
> at pervasiveext.ml:26.22-25 -> server_helpers.ml:152.10-106 ->
> server.ml:25016.19-167 -> server_helpers.ml:118.4-7
>
> The resize job was supposed to create a new VM with a 21 GB disk.
> The original VM was created on XS 5.6 using a CentOS 64-bit template.
> Could this be the source of the problem?
>
> Thanks a lot
> Giuseppe
>
> 2011/9/29 Paul Voccio <paul.voccio@xxxxxxxxxxxxx>:
>> Hi Giuseppe,
>>
>> The docs should still work with XenServer 6. We haven't done any testing of the plugins with 6 yet, but we should do some soon. Let me know if you run into issues and we can help you work through them.
>>
>> Pvo
>>
>>
>>
>> On Sep 29, 2011, at 11:08 AM, "Giuseppe Civitella" <giuseppe.civitella@xxxxxxxxx> wrote:
>>
>>> Hi all,
>>>
>>> I'm about to migrate my nova-compute nodes from XenServer 5.6 to the
>>> new XenServer 6: do I need updated plugins?
>>> Are the instructions reported here
>>> http://wiki.openstack.org/XenServerDevelopment still a good reference?
>>> What about VLAN networking? Will the default use of Open vSwitch in
>>> XS6 change anything?
>>>
>>> Thanks a lot
>>> Giuseppe
>>>
>>> _______________________________________________
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to     : openstack@xxxxxxxxxxxxxxxxxxx
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help   : https://help.launchpad.net/ListHelp
>

