yahoo-eng-team team mailing list archive
Message #57513
[Bug 1582693] Re: Image and flavor metadata for libvirt watchdog is handled erroneously
This may actually be a problem with the glance metadata catalog, which
appears to be what Horizon uses to build the flavor extra specs dialog.
For example, hw:boot_menu and hw:cpu_policy have the expected
namespace: prefix, but hw_watchdog_action does not. There are other,
more exotic entries too, such as capabilities:cpu_info:features and
CIM_PASD_InstructionSet; the former is probably OK, the latter
probably is not.
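To make the distinction concrete, here is an illustrative sketch (a hypothetical helper, not code from nova or glance) of the two naming conventions involved: flavor extra specs scope keys with a colon ("hw:watchdog_action"), while image properties use the flat underscore form ("hw_watchdog_action"):

```python
# Hypothetical helper contrasting the two key conventions; the mapping
# shown is the general "scope:name" -> "scope_name" pattern, assumed
# here for illustration.

def flavor_key_to_image_property(key: str) -> str:
    """Map a colon-scoped flavor extra-spec key to its image-property form."""
    scope, _, name = key.partition(":")
    # Keys without a colon scope are returned unchanged.
    return f"{scope}_{name}" if name else key

print(flavor_key_to_image_property("hw:watchdog_action"))  # hw_watchdog_action
```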
** Changed in: nova
Status: In Progress => Invalid
** Also affects: glance
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1582693
Title:
Image and flavor metadata for libvirt watchdog is handled erroneously
Status in Glance:
Triaged
Status in OpenStack Dashboard (Horizon):
Invalid
Status in OpenStack Compute (nova):
Invalid
Bug description:
When I use Horizon to add the Watchdog Action (hw_watchdog_action)
metadata to a flavor and then try to use that flavor to create an
instance, the boot process fails. However, if I add the same metadata
to an image, everything works flawlessly.
I used devstack to dig into the details of this issue (I was able to
reproduce it on both stable/mitaka and master) and found the
following:
USE CASE #1 :: flavor + underscore
$ nova flavor-show m1.nano
+----------------------------+---------------------------------+
| Property | Value |
+----------------------------+---------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 0 |
| extra_specs | {"hw_watchdog_action": "reset"} |
| id | 42 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------------------------------+
$ nova boot --flavor m1.nano --image cirros-0.3.4-x86_64-uec-watchdog --poll vm0
Result: Fault
Message: No valid host was found. There are not enough hosts available.
Code: 500
Details:
File "/opt/stack/nova/nova/conductor/manager.py", line 392, in build_instances
    context, request_spec, filter_properties)
File "/opt/stack/nova/nova/conductor/manager.py", line 436, in _schedule_instances
    hosts = self.scheduler_client.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/utils.py", line 372, in wrapped
    return func(*args, **kwargs)
File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 51, in select_destinations
    return self.queryclient.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
File "/opt/stack/nova/nova/scheduler/client/query.py", line 32, in select_destinations
    return self.scheduler_rpcapi.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 121, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
    retry=self.retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
    timeout=timeout, retry=retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
    retry=retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 461, in _send
    raise result
n-sch.log shows that
nova.scheduler.filters.compute_capabilities_filter removes the only
available host during filtering.
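The filtering behavior can be illustrated with a simplified sketch (this is not nova's actual filter code, just the assumed logic): an extra-spec key without a recognized colon scope is treated as a required host capability, and since no compute host reports a "hw_watchdog_action" capability, every host is rejected:

```python
# Simplified, illustrative model of ComputeCapabilitiesFilter-style
# matching, assumed for this bug report; real nova logic differs in
# detail.

def host_passes(host_capabilities: dict, extra_specs: dict) -> bool:
    for key, required in extra_specs.items():
        parts = key.split(":", 1)
        if len(parts) > 1 and parts[0] == "hw":
            # Scoped keys like "hw:watchdog_action" belong to the virt
            # driver, not to capability matching, so they are skipped here.
            continue
        if host_capabilities.get(key) != required:
            # Unscoped keys are matched against host capabilities; a
            # missing capability filters the host out.
            return False
    return True

caps = {"vcpus": 4}
print(host_passes(caps, {"hw_watchdog_action": "reset"}))  # False
print(host_passes(caps, {"hw:watchdog_action": "reset"}))  # True
```

This matches the observed symptom: the underscore form starves the scheduler of hosts, while the colon form passes through to the libvirt driver.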
USE CASE #2 :: flavor + colon
$ nova flavor-show m1.nano
+----------------------------+---------------------------------+
| Property | Value |
+----------------------------+---------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 0 |
| extra_specs | {"hw:watchdog_action": "reset"} |
| id | 42 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------------------------------+
$ nova boot --flavor m1.nano --image cirros-0.3.4-x86_64-uec-watchdog --poll vm1
$ virsh dumpxml instance-00000131 | grep "<watchdog" -A 3
<watchdog model='i6300esb' action='reset'>
<alias name='watchdog0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</watchdog>
Result: The instance boots perfectly and the /dev/watchdog device is
present.
USE CASE #3 :: image + underscore
$ nova flavor-show m1.nano
+----------------------------+---------+
| Property | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 0 |
| extra_specs | {} |
| id | 42 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
$ nova image-show cirros-0.3.4-x86_64-uec-watchdog
+-----------------------------+--------------------------------------+
| Property | Value |
+-----------------------------+--------------------------------------+
| OS-EXT-IMG-SIZE:size | 13375488 |
| created | 2016-05-17T08:49:21Z |
| id | 863c2d04-cdd3-42c2-be78-c831c48929b3 |
| metadata hw_watchdog_action | reset |
| minDisk | 0 |
| minRam | 0 |
| name | cirros-0.3.4-x86_64-uec-watchdog |
| progress | 100 |
| status | ACTIVE |
| updated | 2016-05-17T09:10:59Z |
+-----------------------------+--------------------------------------+
$ nova boot --flavor m1.nano --image cirros-0.3.4-x86_64-uec-watchdog --poll vm2
$ virsh dumpxml instance-00000132 | grep "<watchdog" -A 3
<watchdog model='i6300esb' action='reset'>
<alias name='watchdog0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</watchdog>
Result: The instance boots perfectly and the /dev/watchdog device is
present.
USE CASE #4 :: image + colon
$ nova image-show cirros-0.3.4-x86_64-uec-watchdog
+----------------------+--------------------------------------+
| Property | Value |
+----------------------+--------------------------------------+
| OS-EXT-IMG-SIZE:size | 13375488 |
| created | 2016-05-17T08:49:21Z |
| id | 863c2d04-cdd3-42c2-be78-c831c48929b3 |
| metadata hw | watchdog_action: reset |
| minDisk | 0 |
| minRam | 0 |
| name | cirros-0.3.4-x86_64-uec-watchdog |
| progress | 100 |
| status | ACTIVE |
| updated | 2016-05-17T09:16:42Z |
+----------------------+--------------------------------------+
$ nova boot --flavor m1.nano --image cirros-0.3.4-x86_64-uec-watchdog --poll vm2
$ virsh dumpxml instance-00000133 | grep "<watchdog" -A 3
Result: Seemingly there are no errors during the boot process, but the
watchdog device is not present.
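The silent failure in use case #4 is consistent with a driver that looks up exactly the underscore-form property among image metadata and falls back to a default when it is absent. A hedged sketch of that assumed lookup (not the actual libvirt driver code; the "disabled" default is an assumption for illustration):

```python
# Illustrative model of the image-property lookup assumed in this bug:
# only the exact key "hw_watchdog_action" is consulted, so a colon-form
# key is never found and the default applies -- no error, no watchdog.

def watchdog_action(image_properties: dict) -> str:
    return image_properties.get("hw_watchdog_action", "disabled")

print(watchdog_action({"hw_watchdog_action": "reset"}))  # reset (use case #3)
print(watchdog_action({"hw:watchdog_action": "reset"}))  # disabled (use case #4)
```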
To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1582693/+subscriptions