[Bug 1582693] Re: Image and flavor metadata for libvirt watchdog is handled erroneously
You can see the glance metadef prefixes defined here:
https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt.json#L7
"resource_type_associations": [
{
"name": "OS::Glance::Image",
"prefix": "hw_"
},
{
"name": "OS::Nova::Flavor",
"prefix": "hw:"
}
],
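To make the mapping concrete, here is a minimal sketch (illustrative only, not actual Glance code) of how a prefix from resource_type_associations is combined with a metadef property name:

# Illustrative sketch only (not Glance code): how a metadef prefix from
# resource_type_associations is prepended to a property name.
ASSOCIATIONS = {
    "OS::Glance::Image": "hw_",   # image properties use the underscore form
    "OS::Nova::Flavor": "hw:",    # flavor extra specs use the colon form
}

def prefixed_property(resource_type, prop_name):
    return ASSOCIATIONS[resource_type] + prop_name

print(prefixed_property("OS::Glance::Image", "boot_menu"))  # hw_boot_menu
print(prefixed_property("OS::Nova::Flavor", "boot_menu"))   # hw:boot_menu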
So that's why for an image the prefix is hw_ and we get hw_boot_menu,
but for a flavor extra spec we get hw:boot_menu. But if you look at the
watchdog action, it's not the same:
https://github.com/openstack/glance/blob/master/etc/metadefs/compute-watchdog.json#L7
There is no prefix defined in resource_type_associations so the prefix
is built into the property name:
"properties": {
"hw_watchdog_action": {
"title": "Watchdog Action",
"description": "For the libvirt driver, you can enable and set the behavior of a virtual hardware watchdog device for each flavor. Watchdog devices keep an eye on the guest server, and carry out the configured action, if the server hangs. The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If hw_watchdog_action is not specified, the watchdog is disabled. Watchdog behavior set using a specific image's properties will override behavior set using flavors.",
"type": "string",
"enum": [
"disabled",
"reset",
"poweroff",
"pause",
"none"
]
}
}
Which is why we don't get hw:watchdog_action.
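For reference, this is roughly how the lookup plays out on the nova side (a simplified sketch under my reading of the libvirt driver, not the exact nova source): the driver only looks for hw:watchdog_action in the flavor extra specs and hw_watchdog_action in the image properties, with the image value taking precedence.

# Simplified sketch (not the exact nova code) of how the watchdog action
# is resolved: flavor extra spec under the hw: namespace, overridden by
# the hw_ image property when present.
def resolve_watchdog_action(flavor_extra_specs, image_properties):
    action = flavor_extra_specs.get('hw:watchdog_action', 'disabled')
    return image_properties.get('hw_watchdog_action', action)

# A flavor extra spec named hw_watchdog_action (use case #1 below) is never
# consulted by this lookup, and an image property named hw:watchdog_action
# (use case #4 below) is ignored in the same way.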
** Changed in: horizon
Status: New => Invalid
** Changed in: glance
Status: New => Triaged
https://bugs.launchpad.net/bugs/1582693
Title:
Image and flavor metadata for libvirt watchdog is handled erroneously
Status in Glance:
Triaged
Status in OpenStack Dashboard (Horizon):
Invalid
Status in OpenStack Compute (nova):
Invalid
Bug description:
When I use Horizon to add the Watchdog Action (hw_watchdog_action)
metadata to any flavor and then try to use that flavor to create an
instance, the boot process fails. However, if I add the same metadata
to an image, everything works flawlessly.
I used devstack to dig into the details of this issue (I was able to
reproduce it on both stable/mitaka and master) and found the following:
USE CASE #1 :: flavor + underscore
$ nova flavor-show m1.nano
+----------------------------+---------------------------------+
| Property | Value |
+----------------------------+---------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 0 |
| extra_specs | {"hw_watchdog_action": "reset"} |
| id | 42 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------------------------------+
$ nova boot --flavor m1.nano --image cirros-0.3.4-x86_64-uec-watchdog --poll vm0
Result: Fault
Message: No valid host was found. There are not enough hosts available.
Code: 500
Details:
File "/opt/stack/nova/nova/conductor/manager.py", line 392, in build_instances
    context, request_spec, filter_properties)
File "/opt/stack/nova/nova/conductor/manager.py", line 436, in _schedule_instances
    hosts = self.scheduler_client.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/utils.py", line 372, in wrapped
    return func(*args, **kwargs)
File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 51, in select_destinations
    return self.queryclient.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
File "/opt/stack/nova/nova/scheduler/client/query.py", line 32, in select_destinations
    return self.scheduler_rpcapi.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 121, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
    retry=self.retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
    timeout=timeout, retry=retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
    retry=retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 461, in _send
    raise result
n-sch.log shows that
nova.scheduler.filters.compute_capabilities_filter removes the only
host available during filtering.
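That filtering is consistent with how the ComputeCapabilitiesFilter treats extra specs: a key without a namespace (or with the capabilities: scope) is matched against the host's reported capabilities, so hw_watchdog_action becomes a required capability that no compute node advertises, and every host is rejected. A rough sketch of that logic (illustrative only; the real filter is more involved):

# Rough, illustrative sketch of the ComputeCapabilitiesFilter behavior
# (not the actual nova filter code).
def host_passes(host_capabilities, extra_specs):
    for key, required in extra_specs.items():
        scope = key.split(':', 1)
        if len(scope) > 1 and scope[0] != 'capabilities':
            # Namespaced specs such as hw:watchdog_action are left for the
            # virt driver and skipped here.
            continue
        value = host_capabilities.get(scope[-1])
        if value is None or str(value) != str(required):
            # hw_watchdog_action is unscoped, matches no host capability,
            # so every host fails and NoValidHost is raised.
            return False
    return True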
USE CASE #2 :: flavor + colon
$ nova flavor-show m1.nano
+----------------------------+---------------------------------+
| Property | Value |
+----------------------------+---------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 0 |
| extra_specs | {"hw:watchdog_action": "reset"} |
| id | 42 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------------------------------+
$ nova boot --flavor m1.nano --image cirros-0.3.4-x86_64-uec-watchdog --poll vm1
$ virsh dumpxml instance-00000131 | grep "<watchdog" -A 3
<watchdog model='i6300esb' action='reset'>
<alias name='watchdog0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</watchdog>
Result: The instance boots perfectly and the /dev/watchdog device is
present.
USE CASE #3 :: image + underscore
$ nova flavor-show m1.nano
+----------------------------+---------+
| Property | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 0 |
| extra_specs | {} |
| id | 42 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
$ nova image-show cirros-0.3.4-x86_64-uec-watchdog
+-----------------------------+--------------------------------------+
| Property | Value |
+-----------------------------+--------------------------------------+
| OS-EXT-IMG-SIZE:size | 13375488 |
| created | 2016-05-17T08:49:21Z |
| id | 863c2d04-cdd3-42c2-be78-c831c48929b3 |
| metadata hw_watchdog_action | reset |
| minDisk | 0 |
| minRam | 0 |
| name | cirros-0.3.4-x86_64-uec-watchdog |
| progress | 100 |
| status | ACTIVE |
| updated | 2016-05-17T09:10:59Z |
+-----------------------------+--------------------------------------+
$ nova boot --flavor m1.nano --image cirros-0.3.4-x86_64-uec-watchdog --poll vm2
$ virsh dumpxml instance-00000132 | grep "<watchdog" -A 3
<watchdog model='i6300esb' action='reset'>
<alias name='watchdog0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</watchdog>
Result: The instance boots perfectly and the /dev/watchdog device is
present.
USE CASE #4 :: image + colon
$ nova image-show cirros-0.3.4-x86_64-uec-watchdog
+----------------------+--------------------------------------+
| Property | Value |
+----------------------+--------------------------------------+
| OS-EXT-IMG-SIZE:size | 13375488 |
| created | 2016-05-17T08:49:21Z |
| id | 863c2d04-cdd3-42c2-be78-c831c48929b3 |
| metadata hw | watchdog_action: reset |
| minDisk | 0 |
| minRam | 0 |
| name | cirros-0.3.4-x86_64-uec-watchdog |
| progress | 100 |
| status | ACTIVE |
| updated | 2016-05-17T09:16:42Z |
+----------------------+--------------------------------------+
$ nova boot --flavor m1.nano --image cirros-0.3.4-x86_64-uec-watchdog --poll vm2
$ virsh dumpxml instance-00000133 | grep "<watchdog" -A 3
Result: The boot process seemingly completes without errors, but the
watchdog device is not present.