[Bug 2127279] [NEW] "Shut off/Start" or "Hard reboot" deletes and recreates swtpm storage
Public bug reported:
Description
===========
The swtpm virtual TPM state is deleted and recreated during operations such as "hard reboot" or "shut off/start".
Steps to reproduce
==================
1. Create a virtual machine with an emulated swtpm TPM 2.0 device. The properties 'hw_tpm_model': 'tpm-crb' and 'hw_tpm_version': '2.0' are set on the image so that the virtual machine is created with a swtpm TPM 2.0 (a sketch of this step follows the list).
2. After boot, store a secret in the TPM.
3. Perform a "Hard reboot" or "shut off/start" from the OpenStack UI.
During the hard reboot, the TPM storage is deleted and recreated, and the secret stored in step 2 is lost.
For use cases where the TPM storage needs to be persistent, such as holding a root partition encryption key, losing the TPM secret can be disruptive.
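For step 1, a minimal sketch of setting the image properties with openstacksdk (the cloud and image names are placeholders; the openstack CLI works equally well):

    import openstack

    # Connect using a clouds.yaml entry; "mycloud" is a placeholder.
    conn = openstack.connect(cloud="mycloud")

    # Find the guest image and set the vTPM properties from step 1.
    image = conn.image.find_image("my-guest-image")  # hypothetical name
    conn.image.update_image(
        image,
        hw_tpm_model="tpm-crb",  # emulated TPM device model
        hw_tpm_version="2.0",    # request a TPM 2.0 swtpm device
    )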
Expected result
===============
The TPM should retain the stored secret across a hard reboot or shut off/start.
Actual result
=============
Since the TPM state is deleted and recreated, the stored secret is lost.
Environment
===========
libvirt + kvm
dpkg -l | grep nova
ii nova-common 3:30.0.0-0ubuntu1~cloud0 all OpenStack Compute - common files
ii nova-compute 3:30.0.0-0ubuntu1~cloud0 all OpenStack Compute - compute node base
ii nova-compute-kvm 3:30.0.0-0ubuntu1~cloud0 all OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt 3:30.0.0-0ubuntu1~cloud0 all OpenStack Compute - compute node libvirt support
ii python3-nova 3:30.0.0-0ubuntu1~cloud0 all OpenStack Compute Python 3 libraries
ii python3-novaclient 2:18.7.0-0ubuntu1~cloud0 all client library for OpenStack Compute API - 3.x
Possible solutions
==================
The following two approaches both appear to solve the problem.
1. Add the VIR_DOMAIN_UNDEFINE_KEEP_TPM flag to the domain.undefineFlags() call.
File: nova/virt/libvirt/guest.py

    def delete_configuration(self):
        """Undefines a domain from hypervisor."""
        try:
            flags = libvirt.VIR_DOMAIN_UNDEFINE_MANAGED_SAVE
            flags |= libvirt.VIR_DOMAIN_UNDEFINE_NVRAM
            flags |= libvirt.VIR_DOMAIN_UNDEFINE_KEEP_TPM  # <<<<<< new flag is added here
            self._domain.undefineFlags(flags)
        except libvirt.libvirtError:
            # Existing error/fallback handling in guest.py is unchanged
            # and elided here.
            raise
2. Create the swtpm device in persistent mode, i.e. set the persistent_state='yes' attribute on the TPM <backend> element when generating the domain XML.
File: nova/virt/libvirt/config.py
    def format_dom(self):
        # <tpm model='$model'>
        dev = super(LibvirtConfigGuestVTPM, self).format_dom()
        dev.set("model", self.model)
        # <backend type='emulator' version='$version'>
        back = etree.Element("backend")
        back.set("type", "emulator")
        back.set("version", self.version)
        back.set("persistent_state", "yes")  # <<<<<< new attribute is added here
        # <encryption secret='$secret_uuid'/>
        enc = etree.Element("encryption")
        enc.set("secret", self.secret_uuid)
        back.append(enc)
        dev.append(back)
        return dev
Both VIR_DOMAIN_UNDEFINE_KEEP_TPM and persistent_state='yes' appear to do more or less the same thing in the libvirt code.
Should the swtpm state be deleted when the VM is deleted?
If yes, then it may make sense to pass KEEP_TPM when undefining the domain in the hard reboot scenario, but not in the delete scenario, so that the swtpm state is still cleaned up on VM delete. A sketch of that distinction follows.
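One way to express it, as a minimal sketch against guest.py (the keep_tpm parameter is hypothetical, not an existing nova interface):

    def delete_configuration(self, keep_tpm=False):
        """Undefines a domain from the hypervisor.

        :param keep_tpm: hypothetical knob; True preserves swtpm state
            (hard reboot), False removes it (instance delete).
        """
        flags = libvirt.VIR_DOMAIN_UNDEFINE_MANAGED_SAVE
        flags |= libvirt.VIR_DOMAIN_UNDEFINE_NVRAM
        if keep_tpm:
            # VIR_DOMAIN_UNDEFINE_KEEP_TPM leaves the swtpm state
            # directory in place when the domain is undefined (requires
            # a libvirt version that provides this flag).
            flags |= libvirt.VIR_DOMAIN_UNDEFINE_KEEP_TPM
        self._domain.undefineFlags(flags)

The hard reboot path would then call delete_configuration(keep_tpm=True), while the instance delete path keeps the default and cleans up the swtpm state.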
** Affects: nova
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/2127279