
openstack team mailing list archive

Re: [OpenStack] Xen Hypervisor

 

Sorry, here is my xe network-list:

[root@localhost ~]# xe network-list
uuid ( RO)                : c8f3cec4-66be-7c79-9276-b2537ee0df10
          name-label ( RW): Host internal management network
    name-description ( RW): Network on which guests will be assigned a private link-local IP address which can be used to talk XenAPI
              bridge ( RO): xenapi

uuid ( RO)                : c25c9878-7afd-f925-08f4-ffb6f9780b69
          name-label ( RW): Pool-wide network associated with eth0
    name-description ( RW):
              bridge ( RO): xenbr0
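Since the flat_network_bridge flag discussed later in the thread has to name one of these bridges, it can help to pull the label-to-bridge mapping out of this output programmatically. A minimal sketch, run here against a trimmed, hypothetical copy of the listing above (name-description omitted) rather than a live XenServer:

```python
# Hypothetical sample in the 'xe network-list' layout quoted above.
SAMPLE = """\
uuid ( RO)                : c8f3cec4-66be-7c79-9276-b2537ee0df10
          name-label ( RW): Host internal management network
              bridge ( RO): xenapi

uuid ( RO)                : c25c9878-7afd-f925-08f4-ffb6f9780b69
          name-label ( RW): Pool-wide network associated with eth0
              bridge ( RO): xenbr0
"""

def bridges_by_label(text):
    """Map each network's name-label to its bridge name."""
    label, mapping = None, {}
    for line in text.splitlines():
        key, _, value = line.partition(": ")
        key = key.strip()
        if key.startswith("name-label"):
            label = value.strip()
        elif key.startswith("bridge"):
            mapping[label] = value.strip()
    return mapping

print(bridges_by_label(SAMPLE))
```

On a real host the same text would come from running `xe network-list` instead of the embedded sample.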

From: alex_tkd@xxxxxxxx
To: john.garbutt@xxxxxxxxxx; ewan.mellor@xxxxxxxxxxxxx; openstack@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Openstack] [OpenStack] Xen Hypervisor
Date: Wed, 28 Mar 2012 16:35:41 +0000

Hi,
Here is my nova.conf from the Xen nova-compute node:
http://pastebin.com/YSANE0pN
I have just one NIC on dom0; is that a problem?
My infrastructure is (all IPs start with 192.168.100):
gateway: .1
dhcp: .4
controller: .139
dom0: .251 (Xen)
domU: .214 (Xen PV)
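(For readers without access to the pastebin: for a single-NIC, flat-DHCP layout like the one above, the relevant Essex-era nova.conf fragment might look roughly like the following. The values are illustrative guesses, not taken from the actual pastebin.)

```
# nova.conf fragment (illustrative; flag-file style)
--network_manager=nova.network.manager.FlatDHCPManager
# must name the XenServer bridge that carries instance traffic:
--flat_network_bridge=xenbr0
```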
Also, here is the ifconfig output from dom0:

[root@localhost ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 78:E7:D1:55:CD:8C
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12412221 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6848360 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:31319971 (29.8 MiB)  TX bytes:2973878456 (2.7 GiB)
          Interrupt:254 Base address:0xc000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:518304 errors:0 dropped:0 overruns:0 frame:0
          TX packets:518304 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2746623856 (2.5 GiB)  TX bytes:2746623856 (2.5 GiB)

vif27.0   Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
          RX packets:1002250 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2337744 errors:0 dropped:169 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:62195287 (59.3 MiB)  TX bytes:3463407822 (3.2 GiB)

xenbr0    Link encap:Ethernet  HWaddr 78:E7:D1:55:CD:8C
          inet addr:192.168.100.251  Bcast:192.168.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1156246 errors:0 dropped:0 overruns:0 frame:0
          TX packets:846339 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:131983322 (125.8 MiB)  TX bytes:3111706166 (2.8 GiB)
and from domU:

lis@nova-controller:~$ ifconfig
br100     Link encap:Ethernet  HWaddr 78:e7:d1:72:15:ad
          inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f8e5:50ff:fe5e:39d4/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:7384108 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14343347 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1066281639 (1.0 GB)  TX bytes:20959840761 (20.9 GB)

eth0      Link encap:Ethernet  HWaddr 78:e7:d1:72:15:ad
          inet6 addr: fe80::7ae7:d1ff:fe72:15ad/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7573873 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14368549 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1195539149 (1.1 GB)  TX bytes:20973125370 (20.9 GB)
          Interrupt:43 Base address:0x2000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1089412 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1089412 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:654259441 (654.2 MB)  TX bytes:654259441 (654.2 MB)
From: John.Garbutt@xxxxxxxxxx
To: alex_tkd@xxxxxxxx; Ewan.Mellor@xxxxxxxxxxxxx; openstack@xxxxxxxxxxxxxxxxxxx
Date: Wed, 28 Mar 2012 16:50:45 +0100
Subject: RE: [Openstack] [OpenStack] Xen Hypervisor



Hi,

That looks a lot like you have some of the networking flags not quite correct for your setup. I have attempted to describe the flags on the wiki, which might help you:
http://wiki.openstack.org/XenServer/NetworkingFlags

I am not 100% sure, but it looks like the following flag has the wrong value: flat_network_bridge. On XenServer it should reference the bridge (or bridge name) on your XenServer that you want instance traffic to go through. It could be eth1 (if you want your second NIC to carry guest traffic) or xapi1 (if you have set up a VLAN network bridge to carry the instance traffic).

If you can tell us:
- what "xe network-list" returns,
- what you want to use each of the above networks for,
- and what is in your nova.conf,
I might be able to be more specific.

Hope that helps,
John

From: openstack-bounces+john.garbutt=eu.citrix.com@xxxxxxxxxxxxxxxxxxx [mailto:openstack-bounces+john.garbutt=eu.citrix.com@xxxxxxxxxxxxxxxxxxx] On Behalf Of Alexandre Leites
Sent: 28 March 2012 14:35
To: Ewan Mellor; openstack@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Openstack] [OpenStack] Xen Hypervisor

Hi,

I have edited the xenhost plugin to use "host_uuid = _run_command("xe host-list --minimal").strip()" instead of the original command. Now I'm getting the logs below from nova-compute. Also, I don't know how to trace xenapi_conn.call_plugin to find out why my host_uuid isn't getting set; can you help me?

2012-03-28 13:21:21,074 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Running|
2012-03-28 13:21:21,074 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |1|
2012-03-28 13:21:21,130 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:21:21,130 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:21:21,264 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:21:21,265 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:21:26,360 INFO nova.compute.manager [-] Found 4 in the database and 3 on the hypervisor.
2012-03-28 13:22:14,144 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:ba109dd2-5306-08e0-f0f8-669d7da69d39 ... from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1058
2012-03-28 13:22:15,429 DEBUG nova.virt.xenapi.vm_utils [-] VBD.unplug successful first time. from (pid=5062) vbd_unplug_with_retry /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1073
2012-03-28 13:22:15,477 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:ba109dd2-5306-08e0-f0f8-669d7da69d39 done.
from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1061
2012-03-28 13:22:17,216 DEBUG nova.virt.xenapi.vm_utils [-] Detected DISK format for image 3, instance 83 from (pid=5062) log_disk_format /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:627
2012-03-28 13:22:17,217 DEBUG nova.virt.xenapi.vm_utils [-] Fetching image 1 from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:536
2012-03-28 13:22:17,217 DEBUG nova.virt.xenapi.vm_utils [-] Image Type: kernel from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:537
2012-03-28 13:22:17,344 DEBUG nova.virt.xenapi.vm_utils [-] Size for image 1:4743568 from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:551
2012-03-28 13:22:18,450 DEBUG nova.virt.xenapi.vm_utils [-] Created VDI OpaqueRef:d4d67de4-3e4f-cd8f-3e2e-132525d115e4 (Glance image 1, 4743568, False) on OpaqueRef:0637f688-bd33-1749-fa5e-f1dbe515d1b0. from (pid=5062) create_vdi /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:313
2012-03-28 13:22:18,476 DEBUG nova.virt.xenapi.vm_utils [-] Creating VBD for VDI OpaqueRef:d4d67de4-3e4f-cd8f-3e2e-132525d115e4 ... from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1038
2012-03-28 13:22:18,505 DEBUG nova.virt.xenapi.vm_utils [-] Creating VBD for VDI OpaqueRef:d4d67de4-3e4f-cd8f-3e2e-132525d115e4 done. from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1040
2012-03-28 13:22:18,505 DEBUG nova.virt.xenapi.vm_utils [-] Plugging VBD OpaqueRef:dda505c4-c994-9cfb-5cbd-76c40029b981 ... from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1042
2012-03-28 13:22:19,646 DEBUG nova.virt.xenapi.vm_utils [-] Plugging VBD OpaqueRef:dda505c4-c994-9cfb-5cbd-76c40029b981 done. from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1044
2012-03-28 13:22:19,656 DEBUG nova.virt.xenapi.vm_utils [-] VBD OpaqueRef:dda505c4-c994-9cfb-5cbd-76c40029b981 plugged as xvdb from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1046
2012-03-28 13:22:19,657 DEBUG nova.utils [-] Running cmd (subprocess): sudo chown 107 /dev/xvdb from (pid=5062) execute /usr/lib/python2.7/dist-packages/nova/utils.py:168
2012-03-28 13:22:21,352 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:d4d67de4-3e4f-cd8f-3e2e-132525d115e4 ... from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1058
2012-03-28 13:22:21,398 DEBUG nova.virt.xenapi.vm_utils [-] VBD.unplug rejected: retrying... from (pid=5062) vbd_unplug_with_retry /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1078
2012-03-28 13:22:22,404 DEBUG nova.virt.xenapi.vm_utils [-] Not sleeping anymore! from (pid=5062) vbd_unplug_with_retry /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1080
2012-03-28 13:22:22,581 DEBUG nova.virt.xenapi.vm_utils [-] VBD.unplug successful eventually. from (pid=5062) vbd_unplug_with_retry /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1083
2012-03-28 13:22:22,621 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:d4d67de4-3e4f-cd8f-3e2e-132525d115e4 done. from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1061
2012-03-28 13:22:22,622 DEBUG nova.virt.xenapi.vm_utils [-] Copying VDI OpaqueRef:d4d67de4-3e4f-cd8f-3e2e-132525d115e4 to /boot/guest on dom0 from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:576
2012-03-28 13:22:26,361 DEBUG nova.manager [-] Notifying Schedulers of capabilities ...
from (pid=5062) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:111
2012-03-28 13:22:26,361 DEBUG nova.rpc [-] Making asynchronous fanout cast... from (pid=5062) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:755
2012-03-28 13:22:26,670 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Running|
2012-03-28 13:22:26,670 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |1|
2012-03-28 13:22:26,738 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:22:26,739 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:22:26,966 ERROR nova [-] in looping call
(nova): TRACE: Traceback (most recent call last):
(nova): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 491, in _inner
(nova): TRACE:     self.f(*self.args, **self.kw)
(nova): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi_conn.py", line 408, in _poll_task
(nova): TRACE:     name = self._session.xenapi.task.get_name_label(task)
(nova): TRACE:   File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 229, in __call__
(nova): TRACE:     return self.__send(self.__name, args)
(nova): TRACE:   File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in xenapi_request
(nova): TRACE:     result = _parse_result(getattr(self, methodname)(*full_params))
(nova): TRACE:   File "/usr/lib/python2.7/xmlrpclib.py", line 1224, in __call__
(nova): TRACE:     return self.__send(self.__name, args)
(nova): TRACE:   File "/usr/lib/python2.7/xmlrpclib.py", line 1575, in __request
(nova): TRACE:     verbose=self.__verbose
(nova): TRACE:   File "/usr/lib/python2.7/xmlrpclib.py", line 1264, in request
(nova): TRACE:     return self.single_request(host, handler, request_body, verbose)
(nova): TRACE:   File "/usr/lib/python2.7/xmlrpclib.py", line 1289, in single_request
(nova): TRACE:     self.send_request(h, handler, request_body)
(nova): TRACE:   File "/usr/lib/python2.7/xmlrpclib.py", line 1391, in send_request
(nova): TRACE:     connection.putrequest("POST", handler, skip_accept_encoding=True)
(nova): TRACE:   File "/usr/lib/python2.7/httplib.py", line 853, in putrequest
(nova): TRACE:     raise CannotSendRequest()
(nova): TRACE: CannotSendRequest
(nova): TRACE:
2012-03-28 13:22:27,156 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:22:27,157 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:22:28,474 INFO nova.compute.manager [-] Found 4 in the database and 3 on the hypervisor.
2012-03-28 13:23:28,475 DEBUG nova.manager [-] Notifying Schedulers of capabilities ... from (pid=5062) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:111
2012-03-28 13:23:28,476 DEBUG nova.rpc [-] Making asynchronous fanout cast... from (pid=5062) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:755
2012-03-28 13:23:28,641 INFO nova.compute.manager [-] Updating host status
2012-03-28 13:23:28,641 DEBUG nova.virt.xenapi [-] Updating host stats from (pid=5062) update_status /usr/lib/python2.7/dist-packages/nova/virt/xenapi_conn.py:488
2012-03-28 13:23:28,951 INFO nova.virt.xenapi [-] Task [Async.host.call_plugin] OpaqueRef:c45bce0b-a83b-e2dd-7202-e0f7c9cf91fd status: success    <value>{"host_name-description": "XenServer 5.6", "host_hostname": "localhost", "host_memory": {"total": 4157792256, "overhead": 127926272, "free": 3445649408, "free-computed": 3425886208}, "enabled": "true", "host_other-config": {"iscsi_iqn": "iqn.2012-03.com.example:3ca16415", "MAINTENANCE_MODE_EVACUATED_VMS_MIGRATED": "", "MAINTENANCE_MODE_EVACUATED_VMS_HALTED": "", "agent_start_time": "1332863945.", "boot_time": "1332852695.", "MAINTENANCE_MODE_EVACUATED_VMS_SUSPENDED": ""}, "host_ip_address": "192.168.100.251", "host_cpu_info": {"physical_features": "0408e3bd-bfebfbff-00000001-20100800", "modelname": "Intel(R) Core(TM)2 Duo CPU     E7500  @ 2.93GHz", "vendor": "GenuineIntel", "features": "0408e3bd-bfebfbff-00000001-20100800", "family": 6, "maskable": "base", "cpu_count": 2, "flags": "fpu de tsc msr pae mce cx8 apic sep mtrr mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht nx constant_tsc aperfmperf pni vmx est ssse3 sse4_1 hypervisor tpr_shadow vnmi flexpriority", "stepping": 10, "model": 23, "features_after_reboot": "0408e3bd-bfebfbff-00000001-20100800", "speed": "2926.058"}, "host_uuid": "bb308c1f-a55c-4c32-8089-cf6f2c089a52", "host_name-label": "XenServer 5.6"}</value>
2012-03-28 13:23:29,257 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Running|
2012-03-28 13:23:29,258 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |1|
2012-03-28 13:23:29,353 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:23:29,353 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:23:29,478 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:23:29,478 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:23:32,765 INFO nova.compute.manager [-] Found 4 in the database and 3 on the hypervisor.
2012-03-28 13:24:32,767 DEBUG nova.manager [-] Notifying Schedulers of capabilities ... from (pid=5062) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:111
2012-03-28 13:24:32,784 DEBUG nova.rpc [-] Making asynchronous fanout cast... from (pid=5062) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:755
2012-03-28 13:24:33,985 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Running|
2012-03-28 13:24:33,986 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |1|
2012-03-28 13:24:34,761 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:24:34,761 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:24:34,961 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:24:34,961 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:24:36,275 INFO nova.compute.manager [-] Found 4 in the database and 3 on the hypervisor.
2012-03-28 13:25:36,356 DEBUG nova.manager [-] Notifying Schedulers of capabilities ... from (pid=5062) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:111
2012-03-28 13:25:36,357 DEBUG nova.rpc [-] Making asynchronous fanout cast...
from (pid=5062) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:755
2012-03-28 13:25:36,466 INFO nova.compute.manager [-] Updating host status
2012-03-28 13:25:36,467 DEBUG nova.virt.xenapi [-] Updating host stats from (pid=5062) update_status /usr/lib/python2.7/dist-packages/nova/virt/xenapi_conn.py:488
2012-03-28 13:25:37,019 INFO nova.virt.xenapi [-] Task [Async.host.call_plugin] OpaqueRef:aa843929-9afc-b5c1-9918-29d24965b7ab status: success    <value>{"host_name-description": "XenServer 5.6", "host_hostname": "localhost", "host_memory": {"total": 4157792256, "overhead": 127926272, "free": 3445649408, "free-computed": 3425886208}, "enabled": "true", "host_other-config": {"iscsi_iqn": "iqn.2012-03.com.example:3ca16415", "MAINTENANCE_MODE_EVACUATED_VMS_MIGRATED": "", "MAINTENANCE_MODE_EVACUATED_VMS_HALTED": "", "agent_start_time": "1332863945.", "boot_time": "1332852695.", "MAINTENANCE_MODE_EVACUATED_VMS_SUSPENDED": ""}, "host_ip_address": "192.168.100.251", "host_cpu_info": {"physical_features": "0408e3bd-bfebfbff-00000001-20100800", "modelname": "Intel(R) Core(TM)2 Duo CPU     E7500  @ 2.93GHz", "vendor": "GenuineIntel", "features": "0408e3bd-bfebfbff-00000001-20100800", "family": 6, "maskable": "base", "cpu_count": 2, "flags": "fpu de tsc msr pae mce cx8 apic sep mtrr mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht nx constant_tsc aperfmperf pni vmx est ssse3 sse4_1 hypervisor tpr_shadow vnmi flexpriority", "stepping": 10, "model": 23, "features_after_reboot": "0408e3bd-bfebfbff-00000001-20100800", "speed": "2926.058"}, "host_uuid": "bb308c1f-a55c-4c32-8089-cf6f2c089a52", "host_name-label": "XenServer 5.6"}</value>
2012-03-28 13:25:37,398 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Running|
2012-03-28 13:25:37,399 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |1|
2012-03-28 13:25:37,481 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:25:37,481 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:25:37,687 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:25:37,687 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:25:40,555 INFO nova.compute.manager [-] Found 4 in the database and 3 on the hypervisor.
2012-03-28 13:26:40,595 DEBUG nova.manager [-] Notifying Schedulers of capabilities ... from (pid=5062) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:111
2012-03-28 13:26:40,596 DEBUG nova.rpc [-] Making asynchronous fanout cast... from (pid=5062) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:755
2012-03-28 13:26:40,832 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Running|
2012-03-28 13:26:40,832 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |1|
2012-03-28 13:26:40,888 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:26:40,888 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:26:41,095 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:26:41,095 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:26:42,818 INFO nova.compute.manager [-] Found 4 in the database and 3 on the hypervisor.
2012-03-28 13:27:42,819 DEBUG nova.manager [-] Notifying Schedulers of capabilities ... from (pid=5062) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:111
2012-03-28 13:27:42,834 DEBUG nova.rpc [-] Making asynchronous fanout cast... from (pid=5062) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:755
2012-03-28 13:27:42,934 INFO nova.compute.manager [-] Updating host status
2012-03-28 13:27:42,935 DEBUG nova.virt.xenapi [-] Updating host stats from (pid=5062) update_status /usr/lib/python2.7/dist-packages/nova/virt/xenapi_conn.py:488
2012-03-28 13:27:43,224 INFO nova.virt.xenapi [-] Task [Async.host.call_plugin] OpaqueRef:399453cd-d1b0-41b5-d651-028f04e85d46 status: success    <value>{"host_name-description": "XenServer 5.6", "host_hostname": "localhost", "host_memory": {"total": 4157792256, "overhead": 127926272, "free": 3445649408, "free-computed": 3425886208}, "enabled": "true", "host_other-config": {"iscsi_iqn": "iqn.2012-03.com.example:3ca16415", "MAINTENANCE_MODE_EVACUATED_VMS_MIGRATED": "", "MAINTENANCE_MODE_EVACUATED_VMS_HALTED": "", "agent_start_time": "1332863945.", "boot_time": "1332852695.", "MAINTENANCE_MODE_EVACUATED_VMS_SUSPENDED": ""}, "host_ip_address": "192.168.100.251", "host_cpu_info": {"physical_features": "0408e3bd-bfebfbff-00000001-20100800", "modelname": "Intel(R) Core(TM)2 Duo CPU     E7500  @ 2.93GHz", "vendor": "GenuineIntel", "features": "0408e3bd-bfebfbff-00000001-20100800", "family": 6, "maskable": "base", "cpu_count": 2, "flags": "fpu de tsc msr pae mce cx8 apic sep mtrr mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht nx constant_tsc aperfmperf pni vmx est ssse3 sse4_1 hypervisor tpr_shadow vnmi flexpriority", "stepping": 10, "model": 23, "features_after_reboot": "0408e3bd-bfebfbff-00000001-20100800", "speed": "2926.058"}, "host_uuid": "bb308c1f-a55c-4c32-8089-cf6f2c089a52", "host_name-label": "XenServer 5.6"}</value>
2012-03-28 13:27:43,426 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Running|
2012-03-28 13:27:43,427 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |1|
2012-03-28 13:27:43,521 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:27:43,521 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:27:43,692 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenserver vm state -> |Halted|
2012-03-28 13:27:43,692 INFO nova.virt.xenapi.vm_utils [-] (VM_UTILS) xenapi power_state -> |4|
2012-03-28 13:27:44,909 INFO nova.compute.manager [-] Found 4 in the database and 3 on the hypervisor.
2012-03-28 13:28:31,267 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:47620540-ea11-38bf-a681-7b60ce109aab ... from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1058
2012-03-28 13:28:32,684 DEBUG nova.virt.xenapi.vm_utils [-] VBD.unplug successful first time. from (pid=5062) vbd_unplug_with_retry /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1073
2012-03-28 13:28:32,707 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:47620540-ea11-38bf-a681-7b60ce109aab done.
from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1061
2012-03-28 13:28:32,796 DEBUG nova.virt.xenapi.vm_utils [-] Detected DISK format for image 3, instance 85 from (pid=5062) log_disk_format /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:627
2012-03-28 13:28:32,797 DEBUG nova.virt.xenapi.vm_utils [-] Fetching image 1 from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:536
2012-03-28 13:28:32,797 DEBUG nova.virt.xenapi.vm_utils [-] Image Type: kernel from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:537
2012-03-28 13:28:32,875 DEBUG nova.virt.xenapi.vm_utils [-] Size for image 1:4743568 from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:551
2012-03-28 13:28:33,741 DEBUG nova.virt.xenapi.vm_utils [-] Created VDI OpaqueRef:f745ed28-ed42-4027-dbbd-98cb29b0525e (Glance image 1, 4743568, False) on OpaqueRef:0637f688-bd33-1749-fa5e-f1dbe515d1b0. from (pid=5062) create_vdi /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:313
2012-03-28 13:28:33,782 DEBUG nova.virt.xenapi.vm_utils [-] Creating VBD for VDI OpaqueRef:f745ed28-ed42-4027-dbbd-98cb29b0525e ... from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1038
2012-03-28 13:28:33,837 DEBUG nova.virt.xenapi.vm_utils [-] Creating VBD for VDI OpaqueRef:f745ed28-ed42-4027-dbbd-98cb29b0525e done. from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1040
2012-03-28 13:28:33,838 DEBUG nova.virt.xenapi.vm_utils [-] Plugging VBD OpaqueRef:4fdeeabd-e305-229b-009b-2c7b1a33d4b8 ... from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1042
2012-03-28 13:28:35,095 DEBUG nova.virt.xenapi.vm_utils [-] Plugging VBD OpaqueRef:4fdeeabd-e305-229b-009b-2c7b1a33d4b8 done. from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1044
2012-03-28 13:28:35,122 DEBUG nova.virt.xenapi.vm_utils [-] VBD OpaqueRef:4fdeeabd-e305-229b-009b-2c7b1a33d4b8 plugged as xvdb from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1046
2012-03-28 13:28:35,122 DEBUG nova.utils [-] Running cmd (subprocess): sudo chown 107 /dev/xvdb from (pid=5062) execute /usr/lib/python2.7/dist-packages/nova/utils.py:168
2012-03-28 13:28:36,635 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:f745ed28-ed42-4027-dbbd-98cb29b0525e ... from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1058
2012-03-28 13:28:36,682 DEBUG nova.virt.xenapi.vm_utils [-] VBD.unplug rejected: retrying... from (pid=5062) vbd_unplug_with_retry /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1078
2012-03-28 13:28:37,730 DEBUG nova.virt.xenapi.vm_utils [-] Not sleeping anymore! from (pid=5062) vbd_unplug_with_retry /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1080
2012-03-28 13:28:37,965 DEBUG nova.virt.xenapi.vm_utils [-] VBD.unplug successful eventually. from (pid=5062) vbd_unplug_with_retry /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1083
2012-03-28 13:28:38,013 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:f745ed28-ed42-4027-dbbd-98cb29b0525e done.
from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1061
2012-03-28 13:28:38,013 DEBUG nova.virt.xenapi.vm_utils [-] Copying VDI OpaqueRef:f745ed28-ed42-4027-dbbd-98cb29b0525e to /boot/guest on dom0 from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:576
2012-03-28 13:28:42,732 INFO nova.virt.xenapi [-] Task [Async.host.call_plugin] OpaqueRef:36d59d77-6f0e-900a-9dc8-4dfc4503b125 status: success    <value>/boot/guest/0225cdb5-7d98-4e01-99ee-72271a1d738e</value>
2012-03-28 13:28:43,492 DEBUG nova.virt.xenapi.vm_utils [-] Kernel/Ramdisk VDI OpaqueRef:f745ed28-ed42-4027-dbbd-98cb29b0525e destroyed from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:586
2012-03-28 13:28:43,493 DEBUG nova.virt.xenapi.vm_utils [-] Fetching image 2 from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:536
2012-03-28 13:28:43,493 DEBUG nova.virt.xenapi.vm_utils [-] Image Type: ramdisk from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:537
2012-03-28 13:28:43,603 DEBUG nova.virt.xenapi.vm_utils [-] Size for image 2:13639543 from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:551
2012-03-28 13:28:44,910 DEBUG nova.manager [-] Notifying Schedulers of capabilities ... from (pid=5062) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:111
2012-03-28 13:28:44,910 DEBUG nova.rpc [-] Making asynchronous fanout cast... from (pid=5062) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:755
2012-03-28 13:28:44,969 WARNING nova.compute.manager [-] Error during power_state sync:
2012-03-28 13:28:45,021 DEBUG nova.virt.xenapi.vm_utils [-] Created VDI OpaqueRef:5ed933e3-7d17-05e6-5326-e84249b0df73 (Glance image 2, 13639543, False) on OpaqueRef:0637f688-bd33-1749-fa5e-f1dbe515d1b0. from (pid=5062) create_vdi /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:313
2012-03-28 13:28:45,060 DEBUG nova.virt.xenapi.vm_utils [-] Creating VBD for VDI OpaqueRef:5ed933e3-7d17-05e6-5326-e84249b0df73 ... from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1038
2012-03-28 13:28:45,090 DEBUG nova.virt.xenapi.vm_utils [-] Creating VBD for VDI OpaqueRef:5ed933e3-7d17-05e6-5326-e84249b0df73 done. from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1040
2012-03-28 13:28:45,091 DEBUG nova.virt.xenapi.vm_utils [-] Plugging VBD OpaqueRef:52ab6c7d-b1c6-6ec7-c5a6-faa6d2562283 ... from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1042
2012-03-28 13:28:46,980 DEBUG nova.virt.xenapi.vm_utils [-] Plugging VBD OpaqueRef:52ab6c7d-b1c6-6ec7-c5a6-faa6d2562283 done. from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1044
2012-03-28 13:28:46,987 DEBUG nova.virt.xenapi.vm_utils [-] VBD OpaqueRef:52ab6c7d-b1c6-6ec7-c5a6-faa6d2562283 plugged as xvdb from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1046
2012-03-28 13:28:46,988 DEBUG nova.utils [-] Running cmd (subprocess): sudo chown 107 /dev/xvdb from (pid=5062) execute /usr/lib/python2.7/dist-packages/nova/utils.py:168
2012-03-28 13:28:51,255 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:5ed933e3-7d17-05e6-5326-e84249b0df73 ... from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1058
2012-03-28 13:28:51,299 DEBUG nova.virt.xenapi.vm_utils [-] VBD.unplug rejected: retrying... from (pid=5062) vbd_unplug_with_retry /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1078
2012-03-28 13:28:52,300 DEBUG nova.virt.xenapi.vm_utils [-] Not sleeping anymore! from (pid=5062) vbd_unplug_with_retry /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1080
2012-03-28 13:28:52,449 DEBUG nova.virt.xenapi.vm_utils [-] VBD.unplug successful eventually. from (pid=5062) vbd_unplug_with_retry /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1083
2012-03-28 13:28:52,499 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:5ed933e3-7d17-05e6-5326-e84249b0df73 done. from (pid=5062) with_vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:1061
2012-03-28 13:28:52,500 DEBUG nova.virt.xenapi.vm_utils [-] Copying VDI OpaqueRef:5ed933e3-7d17-05e6-5326-e84249b0df73 to /boot/guest on dom0 from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:576
2012-03-28 13:28:56,740 INFO nova.virt.xenapi [-] Task [Async.host.call_plugin] OpaqueRef:42cc2b18-2517-c969-e336-7bea6b77258a status: success    <value>/boot/guest/a74ec540-0ba4-4c2e-806c-195eb4ba38bc</value>
2012-03-28 13:28:57,628 DEBUG nova.virt.xenapi.vm_utils [-] Kernel/Ramdisk VDI OpaqueRef:5ed933e3-7d17-05e6-5326-e84249b0df73 destroyed from (pid=5062) _fetch_image_glance_disk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:586
2012-03-28 13:28:57,637 DEBUG nova.virt.xenapi.vm_utils [-] Looking up vdi OpaqueRef:47620540-ea11-38bf-a681-7b60ce109aab for PV kernel from (pid=5062) determine_is_pv /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:675
2012-03-28 13:28:58,874 DEBUG nova.virt.xenapi.vm_utils [-] Created VM instance-00000055... from (pid=5062) create_vm /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:188
2012-03-28 13:28:58,941 DEBUG nova.virt.xenapi.vm_utils [-] Created VM instance-00000055 as OpaqueRef:7b7dc881-b5f2-00e2-a278-13fa82db982b. from (pid=5062) create_vm /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:191
2012-03-28 13:28:58,941 DEBUG nova.virt.xenapi.vm_utils [-] Creating VBD for VM OpaqueRef:7b7dc881-b5f2-00e2-a278-13fa82db982b, VDI OpaqueRef:47620540-ea11-38bf-a681-7b60ce109aab ... from (pid=5062) create_vbd /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:223
2012-03-28 13:28:58,968 DEBUG nova.virt.xenapi.vm_utils [-] Created VBD OpaqueRef:40354759-6ebb-f0e5-1362-74363c3e34a9 for VM OpaqueRef:7b7dc881-b5f2-00e2-a278-13fa82db982b, VDI OpaqueRef:47620540-ea11-38bf-a681-7b60ce109aab. from (pid=5062) create_vbd /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:226
2012-03-28 13:28:58,969 DEBUG nova [-] creating vif(s) for vm: |OpaqueRef:7b7dc881-b5f2-00e2-a278-13fa82db982b| from (pid=5062) create_vifs /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py:1128
2012-03-28 13:28:59,037 ERROR nova.compute.manager [-] Instance '85' failed to spawn. Is virtualization enabled in the BIOS?
Details: Found no network for bridge br100(nova.compute.manager): TRACE: Traceback (most recent call last):(nova.compute.manager): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 424, in _run_instance(nova.compute.manager): TRACE:     network_info, block_device_info)(nova.compute.manager): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi_conn.py", line 190, in spawn(nova.compute.manager): TRACE:     self._vmops.spawn(context, instance, network_info)(nova.compute.manager): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 149, in spawn(nova.compute.manager): TRACE:     vm_ref = self._create_vm(context, instance, vdis, network_info)(nova.compute.manager): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 254, in _create_vm(nova.compute.manager): TRACE:     self.create_vifs(vm_ref, instance, network_info)(nova.compute.manager): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 1135, in create_vifs(nova.compute.manager): TRACE:     vm_ref, instance, device, network, info)(nova.compute.manager): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vif.py", line 43, in plug(nova.compute.manager): TRACE:     xenapi_session, network['bridge'])(nova.compute.manager): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/network_utils.py", line 58, in find_network_with_bridge(nova.compute.manager): TRACE:     raise Exception(_('Found no network for bridge %s') % bridge)(nova.compute.manager): TRACE: Exception: Found no network for bridge br100(nova.compute.manager): TRACE:  From: Ewan.Mellor@xxxxxxxxxxxxx
To: John.Garbutt@xxxxxxxxxx; alex_tkd@xxxxxxxx; openstack@xxxxxxxxxxxxxxxxxxx
Date: Wed, 28 Mar 2012 03:36:55 +0100
Subject: RE: [Openstack] [OpenStack] Xen Hypervisor

Yes, I'd missed the code change in xenapi_conn. Alexandre, are you sure you have a matched pair of nova-compute and the nova plugins? If so, then please trace through xenapi_conn.call_plugin to find out why your host_uuid isn't getting set. It looks like it should to me, even on XCP.

Thanks,
Ewan.

From: John Garbutt
Sent: 26 March 2012 01:36
To: Ewan Mellor; Alexandre Leites; openstack@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Openstack] [OpenStack] Xen Hypervisor

I certainly changed the plugin so it always requires the host_uuid, but I also changed the "call_plugin" code in xenapi_conn to ensure we always pass the host_uuid. Indeed, it looks like in the code path below you should get the host_uuid passed all the way through. I have not tested with XCP myself, only with XenServer 6. I'm afraid I will not get a chance to try this out until Wednesday (currently on holiday).

One other useful log will be the one where XCP records the parameters passed into the plugin (on XenServer it is /var/log/xensource.log; on XCP it could be /var/log/xcp.log or xapi.log, I can't remember, I'm afraid). You should be able to track the host_uuid to ensure it gets from nova -> xapi client -> xapi server -> plugin.

If you want to move on with your deployment, a workaround is to make this change in the xenhost plugin:

Change:
host_uuid = arg_dict['host_uuid']
Into:
host_uuid = _run_command("xe host-list | grep uuid").split(":")[-1].strip()

The above does not work in all cases, but it should work for your particular case (no pools). If you could raise a nova bug for this, mentioning the version of XCP you are using, that would be great.

I hope that helps,
John

From: Ewan Mellor
Sent: 25 March 2012 19:56
To: Alexandre Leites; openstack@xxxxxxxxxxxxxxxxxxx
Cc: John Garbutt
Subject: RE: [Openstack] [OpenStack] Xen Hypervisor

It looks like you're hitting a recently introduced bug (maybe). I haven't run the code, but from reading through, it looks like the xenhost.host_data plugin command is going to barf if it is not passed a host_uuid parameter. It used to gracefully handle that case, but since 37a392dc it's not doing so any more. The plugin is being called with no arguments from nova.virt.xenapi.host.

Cc'd John Garbutt, who wrote that bit of code.

Ewan.

From: openstack-bounces+ewan.mellor=citrix.com@xxxxxxxxxxxxxxxxxxx [mailto:openstack-bounces+ewan.mellor=citrix.com@xxxxxxxxxxxxxxxxxxx] On Behalf Of Alexandre Leites
Sent: 23 March 2012 10:46
To: openstack@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Openstack] [OpenStack] Xen Hypervisor

Hi folks,

Sorry for the late reply; I was trying to install this without following any ready-to-use scripts (but I ended up using one :( ) so that I could understand how things are done. So I installed XCP 1.4.90 from DVD and configured it from the installation screen.

Execute the following commands on dom0:
------- Dom 0 Extra Config --------
cd /etc/xapi.d/plugins/
wget -q https://raw.github.com/openstack/nova/master/plugins/xenserver/xenapi/etc/xapi.d/plugins/xenhost
wget -q https://raw.github.com/openstack/nova/master/plugins/xenserver/xenapi/etc/xapi.d/plugins/xenstore.py
wget -q https://raw.github.com/openstack/nova/master/plugins/xenserver/xenapi/etc/xapi.d/plugins/pluginlib_nova.py
wget -q https://raw.github.com/openstack/nova/master/plugins/xenserver/xenapi/etc/xapi.d/plugins/migration
wget -q https://raw.github.com/openstack/nova/master/plugins/xenserver/xenapi/etc/xapi.d/plugins/glance
wget -q https://raw.github.com/openstack/nova/master/plugins/xenserver/xenapi/etc/xapi.d/plugins/agent
chmod 777 *
service xapi restart
-----------------------------------

After that I downloaded XenCenter and created a VM from the Ubuntu Server 11.10 CD... (everything below runs on the guest)

After this, I updated the whole system with apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y and rebooted the machine. When you do this, you'll boot into the newer kernel (3.0.0-16-server; check with the uname -a command)... then uninstall the old kernel (apt-get purge linux-image-3.0.0-12-server).

After this, I installed the virtual kernel (3.0.0-16-virtual) and rebooted the machine. Check with uname -a that you booted into this kernel, then uninstall the old one.

After this, execute on dom0 again:
------- Dom 0 Extra Config --------
cd ~
wget http://94.212.78.134/res/xenserver/makepv.sh
chmod +x makepv.sh
./makepv.sh YOUR-XEN-GUEST-NAME
-----------------------------------

After this, you can do
apt-get install -y cracklib-runtime curl wget ssh openssh-server tcpdump ethtool python-pip git vim-nox sudo
and
pip install xenapi
wget http://images.ansolabs.com/xen/xe-guest-utilities_5.6.100-651_amd64.deb -O xe-guest-utilities_5.6.100-651_amd64.deb
dpkg -i xe-guest-utilities_5.6.100-651_amd64.deb
update-rc.d -f xe-linux-distribution remove
update-rc.d xe-linux-distribution defaults

mkdir -p /usr/share/cracklib
echo a | cracklib-packer
pwconv

echo root:password | chpasswd

rm -f /etc/localtime
groupadd libvirtd
useradd stack -s /bin/bash -d /opt/stack -G libvirtd
echo stack:password | chpasswd
echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
mkdir -p /opt/stack
chown -R stack /opt/stack

After all this, you can install nova-compute, copy nova.conf from your controller, and set the following flags:
--connection_type=xenapi
--xenapi_connection_username=root
--xenapi_connection_password=password
--xenapi_connection_url=http://<<XENDOM0IP>>

and restart your nova-compute service.
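As a quick sanity check on the flag syntax, here is a small stand-alone sketch (a hypothetical helper, not part of nova) that splits such --key=value lines apart:

```python
# Hypothetical helper, not part of nova: decompose old-style nova.conf
# "--key=value" flag lines so you can eyeball each setting.
flag_lines = [
    "--connection_type=xenapi",
    "--xenapi_connection_username=root",
    "--xenapi_connection_password=password",
    "--xenapi_connection_url=http://<<XENDOM0IP>>",
]

def parse_flags(lines):
    flags = {}
    for line in lines:
        # Strip the leading dashes, then split on the first "=" only,
        # so values that themselves contain "=" survive intact.
        key, _, value = line.lstrip("-").partition("=")
        flags[key] = value
    return flags

print(parse_flags(flag_lines)["connection_type"])  # xenapi
```

This is just a readability aid; nova itself parses the flagfile, so all you need to do is get these four lines into nova.conf.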

You can test your XenAPI configuration by running sudo nova-manage shell python and then pasting this:
import XenAPI
import nova.virt.xenapi_conn
nova.virt.xenapi_conn.XenAPI = XenAPI
x = nova.virt.xenapi_conn.XenAPIConnection("http://<<XENDOM0IP>>","root","password")
x.list_instances()

------

After all these things, I got Xen working, but now I have an error with the bridge, as the trace below shows:

2012-03-23 16:30:00,116 DEBUG nova.virt.xenapi [-] Updating host stats from (pid=23556) update_status /usr/lib/python2.7/dist-packages/nova/virt/xenapi_conn.py:488
2012-03-23 16:30:00,988 WARNING nova.virt.xenapi [-] Task [Async.host.call_plugin] OpaqueRef:d7a9f0df-0c7c-a760-6b76-3e985c747b1d status: failure    ['XENAPI_PLUGIN_FAILURE', 'host_data', 'KeyError', "'host_uuid'"]
2012-03-23 16:30:00,992 WARNING nova.compute.manager [-] Error during report_driver_status(): ['XENAPI_PLUGIN_FAILURE', 'host_data', 'KeyError', "'host_uuid'"]
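The KeyError above happens because the plugin does a hard dictionary lookup on an argument the caller never supplied. A simplified sketch below (stand-in code, not the actual xenhost plugin) contrasts that with a tolerant lookup, and shows how a host uuid could be recovered by parsing `xe host-list` output on a single-host (no pool) install; `sample_xe_output` is a made-up example of that output:

```python
# Simplified stand-ins for the xenhost plugin's argument handling; the
# real plugin lives in /etc/xapi.d/plugins/xenhost.

def host_data_strict(arg_dict):
    # Hard lookup: a missing host_uuid surfaces as
    # XENAPI_PLUGIN_FAILURE / KeyError, as in the log above.
    return arg_dict['host_uuid']

def host_data_tolerant(arg_dict):
    # Tolerant variant: returns None instead of raising when the
    # caller (nova.virt.xenapi.host) passes no host_uuid.
    return arg_dict.get('host_uuid')

def parse_host_uuid(xe_grep_output):
    # Recover the uuid from `xe host-list | grep uuid` output: take
    # everything after the last ":" and strip whitespace. Only safe
    # when there is exactly one host (no pool).
    return xe_grep_output.split(":")[-1].strip()

# Made-up sample of `xe host-list | grep uuid` output on one host:
sample_xe_output = "uuid ( RO)                : 0d2f4b78-1a2b-3c4d-5e6f-7a8b9c0d1e2f"

print(host_data_tolerant({}))             # None, instead of KeyError
print(parse_host_uuid(sample_xe_output))  # 0d2f4b78-1a2b-3c4d-5e6f-7a8b9c0d1e2f
```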

And yes, my permissions are 777 on all the plugins.

> From: todd.deshane@xxxxxxx
> Date: Wed, 21 Mar 2012 11:04:19 -0400
> To: openstack@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Openstack] [OpenStack] Xen Hypervisor
> 
> Just realized my reply was accidentally not sent to the list.
> 
> On Tue, Mar 20, 2012 at 2:03 PM, Todd Deshane <todd.deshane@xxxxxxx> wrote:
> > Please post specific error messages.
> >
> > Some general suggestions inline.
> >
> > On Tue, Mar 20, 2012 at 12:56 PM, Alexandre Leites <alex_tkd@xxxxxxxx> wrote:
> >> Hi folks,
> >>
> >> First let me say that I've been trying to install the Xen hypervisor and integrate it
> >> with OpenStack for more than a week. I'm studying OpenStack for a company,
> >> and this company doesn't allow us to use ready-made scripts (Why? They want to be
> >> different from the whole world).
> >>
> >> I have used some links for references:
> >> https://github.com/openstack-dev/devstack/blob/master/tools/xen/README.md
> >> http://wiki.openstack.org/XenAPI
> >> http://wiki.openstack.org/XenServer/DevStack
> >> http://wiki.openstack.org/XenServer/Install
> >> http://wiki.openstack.org/XenServerDevelopment
> >> http://wiki.openstack.org/XenXCPAndXenServer
> >> http://wiki.xen.org/wiki/XAPI_on_Ubuntu
> >> http://wiki.xen.org/xenwiki/XAPI_on_debian
> >> https://github.com/openstack/openstack-chef/tree/master/cookbooks/xenserver
> >> https://review.openstack.org/#change,5419
> >>
> >> My coworker and I are trying to install this and integrate it into a running and
> >> tested OpenStack infrastructure, so these machines will have just the
> >> nova-compute service. He is trying XCP and I am trying XenServer, so let me
> >> describe our attempts:
> >>
> >> 1. XCP On Ubuntu (Kronos)
> >> * Install fine
> >> * Doesn't work
> >>
> >
> > There are devstack scripts that create the VMs for you on this
> > xcp-toolstack branch.
> > https://github.com/mcclurmc/devstack/tree/xcp-toolstack
> >
> > Kronos hasn't officially been released to anything stable yet, but
> > Ubuntu 12.04 and Debian Wheezy should have decently stable support.
> >
> >> 2. XCP On CentOS
> >> * Install fine
> >> * We can run an instance of Ubuntu using XenCenter
> >> * Installed nova-compute and configured it.
> >> * No errors, but when we try to run an instance on it, an error about XAPI
> >> appears.
> >> * We read something about privileged guest, how to set it?
> >>
> >
> > It sounds like you need to convert your VM to a PV guest and not an HVM guest.
> >
> > see:
> > https://lists.launchpad.net/openstack/msg06522.html
> >
> >> 3. DevStack (We can't use this, but also tried to)
> >> * Install XenServer (or XCP, we tested on both)
> >> * Following
> >> https://github.com/openstack-dev/devstack/blob/master/tools/xen/README.md
> >> guide
> >> * On Step 4, it won't create ALLINONE.xva and gives some errors about
> >> directories on the console (running the script as the root user on XenServer)
> >>
> > post the specific errors and we can help you work through it.
> >
> >> I hope that someone can help me solve these problems, and maybe help someone
> >> else to install Xen and integrate it with OpenStack.
> >>
> >> @OffTopic
> >> Why this is so difficult?
> >
> > We are working on making everything work together smoothly. Sorry that
> > you have run into so much trouble so far.
> >
> > Cheers,
> > Todd
> >
> > --
> > Todd Deshane
> > http://www.linkedin.com/in/deshantm
> > http://blog.xen.org/
> > http://wiki.xen.org/
> 
> 
> 
> -- 
> Todd Deshane
> http://www.linkedin.com/in/deshantm
> http://blog.xen.org/
> http://wiki.xen.org/
> 
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@xxxxxxxxxxxxxxxxxxx
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp
