debcrafters-packages team mailing list archive - Message #04455
[Bug 2078347] Re: [UBUNTU 24.04] Udev/rules: Missing rules causes newly added CPUs to stay offline
I changed the test plan a bit, since 'lscpu -e' does not work as
expected for me (that needs a separate investigation), but 'lscpu | grep
CPU\(s\)' does the job as well (or 'lscpu -p --online' and 'lscpu -p
--offline').
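For reference, a minimal sketch of these alternative checks (the exact lscpu output columns may vary with the util-linux version):
  $ lscpu | grep CPU\(s\)                # summary: total CPU(s), on-line list, NUMA node CPU(s)
  $ lscpu -p --online                    # parsable (CSV) list of online CPUs only
  $ lscpu -p --offline                   # parsable (CSV) list of offline CPUs only
  $ lscpu -p --offline | grep -vc '^#'   # count of offline CPUs (0 is the desired end state)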
** Description changed:
SRU Justification:
[ Impact ]
- * Newly (online) CPUs that are hotpluggable and that are added to a
- live system, are not immediately used and stay offline.
-
- * They only become online after a reboot.
- But the reason to add CPUs is usually an immediate need for more
- capacity, so that a reboot with downtime is not wanted.
-
- * This can be addressed by a udev rule in /etc/udev/rules.d/ like:
- SUBSYSTEM=="cpu", ACTION=="add", CONST{arch}=="s390*", \
- ATTR{configure}=="1", TEST=="online", ATTR{online}!="1", ATTR{online}="1"
- that automatically hotplugs any (new) CPUs added to the system to
- configured state.
+ * Newly added hotpluggable CPUs on a live system are not
+ immediately used and stay offline.
+
+ * They only come online after a reboot.
+ But the reason to add CPUs is usually an immediate need for more
+ capacity, so a reboot with its associated downtime is not wanted.
+
+ * This can be addressed by a udev rule in /etc/udev/rules.d/ like:
+ SUBSYSTEM=="cpu", ACTION=="add", CONST{arch}=="s390*", \
+ ATTR{configure}=="1", TEST=="online", ATTR{online}!="1", ATTR{online}="1"
+ that automatically sets online any (new) CPUs that are added to
+ the system in configured state.
[ Fix ]
- * 8dc06d14d769 ("udev: Introduce a rule to set newly hotplugged CPUs online")
- https://github.com/ibm-s390-linux/s390-tools/commit/\
- 8dc06d14d76940b3059cd6063fc7d8e7f0150271
+ * 8dc06d14d769 ("udev: Introduce a rule to set newly hotplugged CPUs online")
+ https://github.com/ibm-s390-linux/s390-tools/commit/8dc06d14d76940b3059cd6063fc7d8e7f0150271
[ Test Plan ]
- * Adding CPU to a system in online and configured state is possible
- in LPARs, z/VM guests or KVM virtual machines.
- It's easiest to test this with the help of KVM.
-
- * Setup an Ubuntu Server 24.04 LPAR with KVM vrt-stack.
-
- * Define a KVM guest with hotpluggable and non-hotpluggable CPUs
- whereas the amount of used (current) CPUs is lower than the
- overall amount (let's say 6 out of 8):
- virsh dumpxml vm
- <domain type='kvm' id='106'>
- ...
- <vcpu placement='static' current='6'>8</vcpu>
- <vcpus>
- <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
- <vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
- <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
- <vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/>
- <vcpu id='4' enabled='yes' hotpluggable='yes' order='5'/>
- <vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/>
- <vcpu id='6' enabled='no' hotpluggable='yes'/>
- <vcpu id='7' enabled='no' hotpluggable='yes'/>
- </vcpus>
-
- * Now attempt to add CPUs to the guest in a live "running" state.
- $ virsh setvcpus vm 8 --live
-
- * The KVM guest XML is updated :
- ...
- <vcpu placement='static'>8</vcpu>
- <vcpus>
- <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
- <vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
- <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
- <vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/>
- <vcpu id='4' enabled='yes' hotpluggable='yes' order='5'/>
- <vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/>
- <vcpu id='6' enabled='yes' hotpluggable='yes' order='7'/>
- <vcpu id='7' enabled='yes' hotpluggable='yes' order='8'/>
- </vcpus>
-
- * But the CPUs inside of the guest still stay offline:
- $ lscpu -e
- CPU NODE DRAWER BOOK SOCKET CORE L1d:L1i:L2 ONLINE CONFIGURED POLARIZATION ADDRESS
- 0 0 0 0 0 0 0:0:0 yes yes horizontal 0
- 1 0 0 0 1 1 1:1:1 yes yes horizontal 1
- 2 0 0 0 2 2 2:2:2 yes yes horizontal 2
- 3 0 0 0 3 3 3:3:3 yes yes horizontal 3
- 4 0 0 0 4 4 4:4:4 yes yes horizontal 4
- 5 0 0 0 5 5 5:5:5 yes yes horizontal 5
- 6 - - - - - - no yes horizontal 6
- 7 - - - - - - no yes horizontal 7
-
- * The desired result, achieved with the help of the new udev rule
- is like this:
- $ lscpu -e
- CPU NODE DRAWER BOOK SOCKET CORE L1d:L1i:L2 ONLINE CONFIGURED POLARIZATION ADDRESS
- 0 0 0 0 0 0 0:0:0 yes yes horizontal 0
- 1 0 0 0 1 1 1:1:1 yes yes horizontal 1
- 2 0 0 0 2 2 2:2:2 yes yes horizontal 2
- 3 0 0 0 3 3 3:3:3 yes yes horizontal 3
- 4 0 0 0 4 4 4:4:4 yes yes horizontal 4
- 5 0 0 0 5 5 5:5:5 yes yes horizontal 5
- 6 0 0 0 6 6 6:6:6 yes yes horizontal 6
- 7 0 0 0 7 7 7:7:7 yes yes horizontal 7
-
- * Without the udev rule, this can only be achieved with a reboot.
- But it's not desired, since in cases where immediately new CPU
- capacity is needed, a downtime (caused by a reboot) is counterproductive.
+ * Adding CPUs to a system in online and configured state is possible
+ in LPARs, z/VM guests or KVM virtual machines.
+ It's easiest to test this with the help of KVM.
+
+ * Set up an Ubuntu Server 24.04 LPAR with the KVM virt-stack.
+
+ * Define a KVM guest with hotpluggable and non-hotpluggable CPUs,
+ where the number of used (current) CPUs is lower than the
+ total number (let's say 6 out of 8):
+ virsh dumpxml vm | grep vcpu
+ <vcpu placement='static' current='6'>8</vcpu>
+ <vcpus>
+ <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
+ <vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
+ <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
+ <vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/>
+ <vcpu id='4' enabled='yes' hotpluggable='yes' order='5'/>
+ <vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/>
+ <vcpu id='6' enabled='no' hotpluggable='yes'/>
+ <vcpu id='7' enabled='no' hotpluggable='yes'/>
+ </vcpus>
+
+ * Now attempt to add CPUs to the guest in a live "running" state.
+ $ virsh setvcpus vm 8 --live
+
+ * The KVM guest XML is updated:
+ virsh dumpxml vm | grep vcpu
+ <vcpu placement='static'>8</vcpu>
+ <vcpus>
+ <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
+ <vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
+ <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
+ <vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/>
+ <vcpu id='4' enabled='yes' hotpluggable='yes' order='5'/>
+ <vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/>
+ <vcpu id='6' enabled='yes' hotpluggable='yes' order='7'/>
+ <vcpu id='7' enabled='yes' hotpluggable='yes' order='8'/>
+ </vcpus>
+
+ * But the CPUs inside the guest still stay offline:
+ $ lscpu | grep CPU\(s\)
+ CPU(s): 6
+ On-line CPU(s) list: 0-5
+ NUMA node0 CPU(s): 0-5
+
+ * The desired result, achieved with the help of the new udev rule, looks like this:
+ $ lscpu | grep CPU\(s\)
+ CPU(s): 8
+ On-line CPU(s) list: 0-7
+ NUMA node0 CPU(s): 0-7
+
+ * Without the udev rule, this can only be achieved with a reboot.
+ But that is not desirable, since in cases where new CPU capacity is
+ needed immediately, the downtime caused by a reboot is counterproductive.
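As an end-to-end check, the test plan above can be scripted roughly as follows from the KVM host. This is a sketch only; 'vm' is the guest name used above, and ssh access to the guest as 'ubuntu@vm-guest' is an assumption made for illustration:
  $ virsh setvcpus vm 8 --live
  $ sleep 5                                        # give the guest kernel and udev a moment
  $ ssh ubuntu@vm-guest "lscpu | grep 'CPU(s)'"
  # with the udev rule in place: expect "CPU(s): 8" and "On-line CPU(s) list: 0-7"
  # without the rule: "CPU(s): 6" until the guest is rebooted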
[ Where problems could occur ]
- * This is only a configuration change and not a code change.
- However, things could still go wrong:
-
- - No or not desired functionality in case the udev rules
- is installed in a wrong folder.
-
- - The udev rule itself could be wrong.
- - wrong subsystem would lead to listen to a wrong event
- - wrong action would lead to wrong behavior
- - wrong architecture would run the rule not on s390x
- (as architecture constant "s390*" is used here,
- which covers s390x and s390, but Ubuntu supports
- s390x (64-bit) only, but since it's upstream like this
- it should be kept)
- - wrong attribute would lead to wrong behavior, action
- or status.
-
- * Since the architecture constant is s390(x), this affects
- IBM Z and LinuxONE only, and has no impact on other architectures.
+ * This is only a configuration change and not a code change.
+ However, things could still go wrong:
+
+ - No functionality, or not the desired functionality, in case the
+ udev rule is installed in the wrong directory.
+
+ - The udev rule itself could be wrong:
+ - a wrong subsystem would make the rule listen for the wrong event
+ - a wrong action would lead to wrong behavior
+ - a wrong architecture constant would keep the rule from running
+ on s390x (the constant "s390*" is used here, which covers both
+ s390x and s390; Ubuntu supports only 64-bit s390x, but since the
+ rule is like this upstream it should be kept)
+ - a wrong attribute would lead to wrong behavior, action
+ or status.
+
+ * Since the architecture constant is s390(x), this affects
+ IBM Z and LinuxONE only, and has no impact on other architectures.
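One way to check that the rule matches as intended is to simulate an "add" event for a CPU device with udevadm. A sketch, run inside the guest (the device path and output format may differ between systemd/udev versions):
  $ sudo udevadm test --action=add /sys/devices/system/cpu/cpu1 2>&1 | grep -iE 'rules|online'
  # the output should list the rules file being read and show the online attribute being set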
[ Other Info ]
- * Since this is the same for all Linux distros,
- this udev rule was added to the s390-tools in general.
- (The s390-tools package already ships udev rules for other purposes.)
-
- * It's already included in questing, with the updated s390-tools
- version 2.38.0 (as part of LP: #2115416).
+ * Since this is the same for all Linux distros,
+ this udev rule was added to s390-tools upstream.
+ (The s390-tools package already ships udev rules for other purposes.)
+
+ * It's already included in Questing, with the updated s390-tools
+ version 2.38.0 (as part of LP: #2115416).
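Once the updated s390-tools package is installed, the udev rules it ships can be listed to confirm that the new CPU hotplug rule is included. A sketch, assuming the rule ends up in the s390-tools binary package (the exact rule file name is not guessed here):
  $ dpkg -L s390-tools | grep '\.rules$'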
__________
---Problem Description----------------------------------------------------------------------------------
Adding a configured CPU to a running system (LPAR, z/VM or KVM) leaves that CPU configured but offline.
# lscpu -e
CPU NODE DRAWER BOOK SOCKET CORE L1d:L1i:L2 ONLINE CONFIGURED POLARIZATION ADDRESS
0 0 0 0 0 0 0:0:0 yes yes horizontal 0
1 0 0 0 1 1 1:1:1 yes yes horizontal 1
2 0 0 0 2 2 2:2:2 yes yes horizontal 2
3 0 0 0 3 3 3:3:3 yes yes horizontal 3
4 0 0 0 4 4 4:4:4 yes yes horizontal 4
5 0 0 0 5 5 5:5:5 yes yes horizontal 5
6 - - - - - - no yes horizontal 6
7 - - - - - - no yes horizontal 7
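The same state can also be read directly from sysfs, which is what the udev rule later acts on (a minimal sketch):
  $ cat /sys/devices/system/cpu/online      # range of online CPUs, e.g. 0-5
  $ cat /sys/devices/system/cpu/offline     # range of offline CPUs, e.g. 6-7
  # per-CPU state on s390x: "configure" (1 = configured) and "online" (1 = online)
  $ cat /sys/devices/system/cpu/cpu6/configure /sys/devices/system/cpu/cpu6/online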
---Debugger---
A debugger is not configured
Machine Type = z/VM, LPAR
---uname output---
6.8.0-41-generic #41-Ubuntu SMP Fri Aug 2 19:51:49 UTC 2024 s390x s390x s390x GNU/Linux
---Steps to Reproduce---
The easiest way to reproduce this is to add new CPUs to a KVM guest.
1. Before adding CPUs:
$ virsh dumpxml vm
<domain type='kvm' id='106'>
...
<vcpu placement='static' current='6'>8</vcpu>
<vcpus>
<vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
<vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
<vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
<vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/>
<vcpu id='4' enabled='yes' hotpluggable='yes' order='5'/>
<vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/>
<vcpu id='6' enabled='no' hotpluggable='yes'/>
<vcpu id='7' enabled='no' hotpluggable='yes'/>
2. Attempt to add CPUs to the guest in a "running" state.
$ virsh setvcpus vm 8 --live
3. The guest XML is updated:
...
<vcpu placement='static'>8</vcpu>
<vcpus>
<vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
<vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
<vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
<vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/>
<vcpu id='4' enabled='yes' hotpluggable='yes' order='5'/>
<vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/>
<vcpu id='6' enabled='yes' hotpluggable='yes' order='7'/>
<vcpu id='7' enabled='yes' hotpluggable='yes' order='8'/>
</vcpus>
4. But inside the guest, the CPUs are in offline state:
$ lscpu -e
CPU NODE DRAWER BOOK SOCKET CORE L1d:L1i:L2 ONLINE CONFIGURED POLARIZATION ADDRESS
0 0 0 0 0 0 0:0:0 yes yes horizontal 0
1 0 0 0 1 1 1:1:1 yes yes horizontal 1
2 0 0 0 2 2 2:2:2 yes yes horizontal 2
3 0 0 0 3 3 3:3:3 yes yes horizontal 3
4 0 0 0 4 4 4:4:4 yes yes horizontal 4
5 0 0 0 5 5 5:5:5 yes yes horizontal 5
6 - - - - - - no yes horizontal 6
7 - - - - - - no yes horizontal 7
5. After rebooting the guest, the CPUs are online:
$ virsh reboot vm
Inside the guest:
$ lscpu -e
CPU NODE DRAWER BOOK SOCKET CORE L1d:L1i:L2 ONLINE CONFIGURED POLARIZATION ADDRESS
0 0 0 0 0 0 0:0:0 yes yes horizontal 0
1 0 0 0 1 1 1:1:1 yes yes horizontal 1
2 0 0 0 2 2 2:2:2 yes yes horizontal 2
3 0 0 0 3 3 3:3:3 yes yes horizontal 3
4 0 0 0 4 4 4:4:4 yes yes horizontal 4
5 0 0 0 5 5 5:5:5 yes yes horizontal 5
6 0 0 0 6 6 6:6:6 yes yes horizontal 6
7 0 0 0 7 7 7:7:7 yes yes horizontal 7
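Instead of rebooting (step 5), the newly added CPUs can also be brought online by hand; this is exactly the write that the proposed udev rule automates. A sketch, run inside the guest:
  # the new CPUs are configured but offline
  $ cat /sys/devices/system/cpu/cpu6/configure /sys/devices/system/cpu/cpu6/online
  # bring them online manually (the udev rule performs the equivalent ATTR{online}="1" write)
  $ echo 1 | sudo tee /sys/devices/system/cpu/cpu6/online /sys/devices/system/cpu/cpu7/online
  $ lscpu -e                                # CPUs 6 and 7 should now show ONLINE = yes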
The CPUs should be online right after they are added to the system.
Other distros already ship a udev rule under /etc/udev/rules.d/ to
work around this. The rule checks whether a newly added CPU is
configured but not online, and if so hotplugs it to bring it online,
as sketched below. CPUs that are NOT configured should stay offline.
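A minimal sketch of applying such a rule without a reboot (the file name 40-cpu-hotplug.rules is only an illustrative choice, not necessarily what s390-tools ships):
  # inside the guest: save the rule quoted in the Impact section above as
  # /etc/udev/rules.d/40-cpu-hotplug.rules, then reload the udev rules
  $ sudo udevadm control --reload
  # optionally watch CPU uevents while vCPUs are added on the host
  $ sudo udevadm monitor --kernel --subsystem-match=cpu &
  # replay "add" events for already-present CPU devices so the rule takes effect immediately
  $ sudo udevadm trigger --subsystem-match=cpu --action=add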
Contact Information = mete.durlu@xxxxxxx
--
You received this bug notification because you are a member of
Debcrafters packages, which is subscribed to s390-tools in Ubuntu.
https://bugs.launchpad.net/bugs/2078347
Title:
[UBUNTU 24.04] Udev/rules: Missing rules causes newly added CPUs to
stay offline
Status in Ubuntu on IBM z Systems:
In Progress
Status in s390-tools package in Ubuntu:
Fix Released
Status in s390-tools-signed package in Ubuntu:
Fix Released
Status in s390-tools source package in Noble:
Fix Committed
Status in s390-tools-signed source package in Noble:
Fix Committed
Status in s390-tools source package in Plucky:
Fix Committed
Status in s390-tools-signed source package in Plucky:
Fix Committed
Status in s390-tools source package in Questing:
Fix Released
Status in s390-tools-signed source package in Questing:
Fix Released
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-z-systems/+bug/2078347/+subscriptions