[Bug 1578155] [NEW] 'hw:cpu_thread_policy=prefer' misbehaviour
Public bug reported:
Description
===========
'hw:cpu_thread_policy=prefer' correctly allocates vCPUs in pairs of
sibling threads. An odd number of vCPUs is allocated as pairs plus one
single vCPU, and that single vCPU should not have its sibling thread
isolated. So 20 available threads should be able to host 4 VMs of 5
vCPUs each. However, booting the third VM fails with an error.
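As a sanity check on that arithmetic, here is a minimal Python sketch
of a greedy 'prefer'-style placement (not nova's implementation; the
thread numbers are an assumption modelled on the topology listed under
'Steps to reproduce'):

# Greedy 'prefer' placement sketch: consume whole sibling pairs first,
# then top up with single threads. 5 vCPUs -> 2 pairs + 1 single.
def place(vcpus, free, pairs):
    chosen = []
    for a, b in pairs:                        # whole pairs while they fit
        if len(chosen) + 2 <= vcpus and a in free and b in free:
            chosen += [a, b]
    for t in sorted(free - set(chosen)):      # then leftover singles
        if len(chosen) < vcpus:
            chosen.append(t)
    return chosen if len(chosen) == vcpus else None

node = {2, 14, 4, 16, 6, 18, 8, 20, 10, 22}   # 10 free threads, 5 pairs
pairs = [(2, 14), (4, 16), (6, 18), (8, 20), (10, 22)]
vm_a = place(5, node, pairs)                  # [2, 14, 4, 16, 6]
vm_b = place(5, node - set(vm_a), pairs)      # [8, 20, 10, 22, 18]

The leftover thread of the first guest (P#6) leaves its sibling (P#18)
usable, so a second 5-vCPU guest fits in the remaining 5 threads: two
guests per NUMA node, four per host.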
Steps to reproduce
==================
1. Create a flavor:
nova flavor-create pinning auto 1024 10 5
nova flavor-key pinning set hw:cpu_policy=dedicated
nova flavor-key pinning set hw:cpu_thread_policy=prefer
nova flavor-key pinning set hw:numa_nodes=1
2. Boot simple VMs:
nova boot testPin1 --flavor pinning --image cirros --nic net-id=$NET_ID
In my setup, I have 20 available threads:
NUMANode L#0 (P#0 32GB)
Socket L#0 + L3 L#0 (15MB)
L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
PU L#0 (P#0)
PU L#1 (P#12)
L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
PU L#2 (P#2)
PU L#3 (P#14)
L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
PU L#4 (P#4)
PU L#5 (P#16)
L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
PU L#6 (P#6)
PU L#7 (P#18)
L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4
PU L#8 (P#8)
PU L#9 (P#20)
L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5
PU L#10 (P#10)
PU L#11 (P#22)
NUMANode L#1 (P#1 32GB) + Socket L#1 + L3 L#1 (15MB)
L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6
PU L#12 (P#1)
PU L#13 (P#13)
L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7
PU L#14 (P#3)
PU L#15 (P#15)
L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8
PU L#16 (P#5)
PU L#17 (P#17)
L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9
PU L#18 (P#7)
PU L#19 (P#19)
L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10
PU L#20 (P#9)
PU L#21 (P#21)
L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11
PU L#22 (P#11)
PU L#23 (P#23)
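For readability, the sibling pairs in the listing above, as Python data
(logical P# numbers):

# Sibling thread pairs per NUMA node, read off the topology listing above.
siblings = {
    0: [(0, 12), (2, 14), (4, 16), (6, 18), (8, 20), (10, 22)],
    1: [(1, 13), (3, 15), (5, 17), (7, 19), (9, 21), (11, 23)],
}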
With cpu_thread_policy=prefer, the behaviour is correct for the first
two VMs: each VM's 5 vCPUs are pinned as two sibling pairs plus one
single thread.
[root@nfvsdn-04 ~(keystone_admin)]# virsh vcpupin 2
VCPU: CPU Affinity
----------------------------------
0: 10
1: 22
2: 16
3: 4
4: 8
[root@nfvsdn-04 ~(keystone_admin)]# virsh vcpupin 3
VCPU: CPU Affinity
----------------------------------
0: 17
1: 5
2: 3
3: 15
4: 11
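Cross-checking the two pinnings against those sibling pairs (a sketch;
the assumption that pCPUs 0, 1, 12 and 13 are host-reserved is mine,
since that would account for 20 of the 24 threads being available):

# Threads still free on each NUMA node after the two placements above.
node0 = {2, 14, 4, 16, 6, 18, 8, 20, 10, 22}
node1 = {3, 15, 5, 17, 7, 19, 9, 21, 11, 23}
vm2 = {10, 22, 16, 4, 8}      # virsh vcpupin 2
vm3 = {17, 5, 3, 15, 11}      # virsh vcpupin 3
print(sorted(node0 - vm2))    # [2, 6, 14, 18, 20]: 2 pairs + 1 single
print(sorted(node1 - vm3))    # [7, 9, 19, 21, 23]: 2 pairs + 1 single

Each node still has two full sibling pairs plus one spare thread,
exactly the shape a third (and fourth) 5-vCPU guest needs.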
However, even though there are enough threads left to allocate another
2 VMs with the same flavor, booting the third VM produces the
following error:
INFO nova.filters Filtering removed all hosts for the request with
instance ID 'cbb53e29-a7da-4c14-a3ad-4fb3aa04f101'. Filter results:
['RetryFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1,
end: 1)', 'RamFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1,
end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)',
'ImagePropertiesFilter: (start: 1, end: 1)', 'CoreFilter: (start: 1,
end: 1)', 'NUMATopologyFilter: (start: 1, end: 0)']
There should be enough room for 4 VMs with the cpu_thread_policy=prefer flavor.
Expected result
===============
4 VMs up and running with flavor 'pinning'.
Actual result
=============
The third VM fails at scheduling.
Environment
===========
All-in-one environment.
** Affects: nova
Importance: Undecided
Status: New
** Tags: cpu prefer thread
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1578155
Title:
'hw:cpu_thread_policy=prefer' misbehaviour
Status in OpenStack Compute (nova):
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1578155/+subscriptions