yahoo-eng-team mailing list archive: Message #83525
[Bug 1851587] Re: HypervisorUnavailable error leaks compute host fqdn to non-admin users
Reviewed: https://review.opendev.org/743950
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=a89ffab83261060bbb9dedb2b8de6297b2d07efd
Submitter: Zuul
Branch: master
commit a89ffab83261060bbb9dedb2b8de6297b2d07efd
Author: Praharshitha Metla <harshavardhan.metla@xxxxxxx>
Date: Thu Jul 30 16:30:06 2020 +0530
Removed the host FQDN from the exception message
Deletion of an instance after disabling the hypervisor by a non-admin
user leaks the host FQDN in the fault message of the instance. Remove
the 'host' field from the error message of HypervisorUnavailable,
because it leaks the host FQDN to non-admin users. The admin user will
still see the hypervisor unavailable exception message and will be able
to figure out which compute host the guest is on and that the
connection is broken.
Change-Id: I0eae19399670f59c17c9a1a24e1bfcbf1b514e7b
Closes-Bug: #1851587
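The actual diff is in the review linked above; the snippet below is only a
minimal sketch of the kind of change described, following nova's
NovaException/msg_fmt pattern (the simplified NovaException base class here
is illustrative, not nova's real implementation):

class NovaException(Exception):
    # Simplified stand-in for nova.exception.NovaException.
    msg_fmt = "An unknown exception occurred."

    def __init__(self, **kwargs):
        super().__init__(self.msg_fmt % kwargs if kwargs else self.msg_fmt)

class HypervisorUnavailable(NovaException):
    # Before the fix the template interpolated the compute host FQDN, which
    # surfaced in the instance fault shown to non-admin users:
    #     msg_fmt = "Connection to the hypervisor is broken on host: %(host)s"
    # After the fix the host substitution is dropped, so the fault message
    # no longer reveals the compute host:
    msg_fmt = "Connection to the hypervisor is broken on host"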
** Changed in: nova
Status: In Progress => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1851587
Title:
HypervisorUnavailable error leaks compute host fqdn to non-admin users
Status in OpenStack Compute (nova):
Fix Released
Status in OpenStack Security Advisory:
Won't Fix
Bug description:
Description
===========
When an instance encounters a HypervisorUnavailable error, the error message exposes the compute host FQDN to the non-admin user.
Steps to reproduce
==================
1. Spin up an instance with non-admin user credentials
2. To reproduce the error, stop the libvirtd service on the compute host containing the instance
3. Delete the instance
4. Deletion fails with a HypervisorUnavailable error (see the sketch after this list for one way to inspect the resulting fault)
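As an illustration (not part of the original report), the resulting fault can
be inspected from the non-admin side with openstacksdk; the cloud name
"overcloud" and server name "test-11869" are taken from the transcript below,
and the availability of a fault field on the server resource is an assumption
about the SDK:

import openstack

# Assumed clouds.yaml entry named "overcloud" holding non-admin credentials.
conn = openstack.connect(cloud="overcloud")

# Server name taken from the reproduction transcript below.
server = conn.compute.find_server("test-11869")
details = conn.compute.get_server(server.id)

# Before the fix, the fault message reads something like:
#   "Connection to the hypervisor is broken on host: compute-0.redhat.local"
# i.e. it exposes the compute host FQDN to the non-admin user.
print(details.fault)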
Expected result
===============
The error does not show the compute host FQDN to a non-admin user
Actual result
=============
#spin up an instance
+--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+
| ID | Name | Status | Task State | Power State | Networks | Image Name | Image ID | Flavor Name | Flavor ID | Availability Zone | Host | Properties |
+--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+
| 4f42886d-e1f8-4607-a09d-0dc12a681880 | test-11869 | ACTIVE | None | Running | private=192.168.100.158, 10.0.0.243 | cirros-0.4.0-x86_64-disk.img | 5d0bd6a5-7331-4ebe-9328-d126189897e2 | | | nova | | |
+--------------------------------------+------------+--------+------------+-------------+-------------------------------------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+
#instance is running on compute-0 node (only admin knows this)
[heat-admin@compute-0 ~]$ sudo virsh list --all
Id Name State
----------------------------------------------------
108 instance-00000092 running
#stop libvirtd service
[root@compute-0 heat-admin]# systemctl stop tripleo_nova_libvirt.service
[root@compute-0 heat-admin]# systemctl status tripleo_nova_libvirt.service
● tripleo_nova_libvirt.service - nova_libvirt container
Loaded: loaded (/etc/systemd/system/tripleo_nova_libvirt.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Wed 2019-11-06 22:48:25 UTC; 5s ago
Process: 8514 ExecStop=/usr/bin/podman stop -t 10 nova_libvirt (code=exited, status=0/SUCCESS)
Main PID: 3783
Nov 06 22:29:48 compute-0 podman[3396]: 2019-11-06 22:29:48.443603571 +0000 UTC m=+1.325620613 container init a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla>
Nov 06 22:29:48 compute-0 podman[3396]: 2019-11-06 22:29:48.475946808 +0000 UTC m=+1.357963869 container start a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpl>
Nov 06 22:29:48 compute-0 paunch-start-podman-container[3385]: nova_libvirt
Nov 06 22:29:48 compute-0 paunch-start-podman-container[3385]: Creating additional drop-in dependency for "nova_libvirt" (a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb)
Nov 06 22:29:49 compute-0 systemd[1]: Started nova_libvirt container.
Nov 06 22:48:24 compute-0 systemd[1]: Stopping nova_libvirt container...
Nov 06 22:48:25 compute-0 podman[8514]: 2019-11-06 22:48:25.595405651 +0000 UTC m=+1.063832024 container died a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla>
Nov 06 22:48:25 compute-0 podman[8514]: 2019-11-06 22:48:25.597210594 +0000 UTC m=+1.065636903 container stop a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb (image=undercloud-0.ctlpla>
Nov 06 22:48:25 compute-0 podman[8514]: a3e32121d12929e663b899b57cb7bc87581ddf5bdfb19cf8fee4bace41cb19bb
Nov 06 22:48:25 compute-0 systemd[1]: Stopped nova_libvirt container.
#delete the instance, it leaks compute host fqdn to the non-admin user
(overcloud) [stack@undercloud-0 ~]$ nova delete test-11869
Request to delete server test-11869 has been accepted.
(overcloud) [stack@undercloud-0 ~]$ openstack server list --long
+--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+
| ID | Name | Status | Task State | Power State | Networks | Image Name | Image ID | Flavor Name | Flavor ID | Availability Zone | Host | Properties |
+--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+
| 4f42886d-e1f8-4607-a09d-0dc12a681880 | test-11869 | ERROR | None | Running | | cirros-0.4.0-x86_64-disk.img | 5d0bd6a5-7331-4ebe-9328-d126189897e2 | | | nova | | |
+--------------------------------------+------------+--------+------------+-------------+----------+------------------------------+--------------------------------------+-------------+-----------+-------------------+------+------------+
(overcloud) [stack@undercloud-0 ~]$ openstack server show test-11869 <---debug output attached in logs
+-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | error |
| OS-SRV-USG:launched_at | 2019-11-06T22:13:08.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| config_drive | |
| created | 2019-11-06T22:12:57Z |
| description | None |
| fault | {'code': 500, 'created': '2019-11-06T23:01:45Z', 'message': 'Connection to the hypervisor is broken on host: compute-0.redhat.local'} |
| flavor | disk='1', ephemeral='0', , original_name='m1.tiny', ram='512', swap='0', vcpus='1' |
| hostId | c7e6bf58b57f435659bb0aa9637c7f830f776ec202a0d6e430ee3168 |
| id | 4f42886d-e1f8-4607-a09d-0dc12a681880 |
| image | cirros-0.4.0-x86_64-disk.img (5d0bd6a5-7331-4ebe-9328-d126189897e2) |
| key_name | None |
| locked | False |
| locked_reason | None |
| name | test-11869 |
| project_id | 6e39619e17a9478580c93120e1cb16bc |
| properties | |
| server_groups | [] |
| status | ERROR |
| tags | [] |
| trusted_image_certificates | None |
| updated | 2019-11-06T23:01:45Z |
| user_id | 3cd6a8cb88eb49d3a84f9e67d89df598 |
| volumes_attached | |
+-----------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1851587/+subscriptions