yahoo-eng-team team mailing list archive - Message #93368
[Bug 2051479] [NEW] instance always taking numa 0 first from host, even if flavor is configured to take memory from numa 1
Public bug reported:
OpenStack version: Wallaby
Controller Details:
OS:
Distributor ID: Ubuntu
Description: Ubuntu 20.04.5 LTS
Release: 20.04
Codename: focal
flavor metadata details:
| properties | hw:cpu_policy='dedicated',
hw:mem_page_size='1GB', hw:numa_cpus.1='0,1,2,3,4,5,6,7,8,9',
hw:numa_mem.1='102400', hw:numa_nodes='2' |
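For context, these properties would typically be applied with the `openstack` CLI roughly as below. This is an illustrative sketch: the flavor name and disk size are assumptions, not taken from the report; the RAM and vCPU counts are sized to match the NUMA properties shown above. Note that, per the Nova flavor extra-spec documentation, the N in hw:numa_cpus.N and hw:numa_mem.N indexes *guest* NUMA nodes; Nova decides which host node backs each guest node.

```shell
# Illustrative only: the flavor name "numa1-flavor" and --disk value are assumptions.
# Size the flavor to match the NUMA properties (100 GB RAM, 10 vCPUs).
openstack flavor create numa1-flavor --ram 102400 --vcpus 10 --disk 20

# Apply the extra specs from the failing flavor: all guest CPUs and memory
# are placed on guest NUMA node 1, with hw:numa_nodes=2.
openstack flavor set numa1-flavor \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=1GB \
  --property hw:numa_nodes=2 \
  --property hw:numa_cpus.1=0,1,2,3,4,5,6,7,8,9 \
  --property hw:numa_mem.1=102400
```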
Compute Details:
Distributor ID: Ubuntu
Description: Ubuntu 20.04.5 LTS
Release: 20.04
Codename: focal
test@computedp:~$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46
node 0 size: 515544 MB
node 0 free: 307020 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47
node 1 size: 516060 MB
node 1 free: 306605 MB
node distances:
node 0 1
0: 10 20
1: 20 10
test@computedp:~$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-5.4.0-125-generic root=/dev/mapper/ubuntu--vg-ubuntu--lv ro intel_iommu=on iommu=pt isolcpus=2-47 nohz_full=2-47 default_hugepagesz=1G hugepagesz=1G hugepages=400
ISSUE:
I am trying to launch an instance that takes all of its memory (100 GB in
my example) and all of its CPUs (10 in my example) from NUMA node 1 only
(the odd-numbered node). However, with the above flavor properties, the
instance automatically takes half of its memory from NUMA node 0 and half
from NUMA node 1, and the same happens for the CPUs.
I have also tried with hw:numa_nodes='1', but the result is the same.
If I assign 1 GB and 1 CPU to NUMA node 0 and the rest to NUMA node 1, as
below, it works fine.
| properties | hw:cpu_policy='dedicated',
hw:mem_page_size='1GB', hw:numa_cpus.0='0',
hw:numa_cpus.1='1,2,3,4,5,6,7,8,9', hw:numa_mem.0='1024',
hw:numa_mem.1='101376', hw:numa_nodes='2' |
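As a quick sanity check on the working flavor above (a minimal sketch; the per-node values are taken directly from the properties), the hw:numa_mem.N values must add up to the flavor's total RAM, and the hw:numa_cpus.N lists must together cover all vCPUs:

```shell
# Working flavor's memory split: 1024 MB on guest node 0, 101376 MB on node 1.
node0_mem=1024
node1_mem=101376
total_mem=$((node0_mem + node1_mem))
echo "total memory: ${total_mem} MB"   # prints 102400 MB, i.e. 100 GB

# CPU split: 1 vCPU (id 0) on node 0, 9 vCPUs (ids 1-9) on node 1.
node0_cpus=1
node1_cpus=9
echo "total vcpus: $((node0_cpus + node1_cpus))"   # prints 10
```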
Note:
The system has sufficient hugepages and CPUs on both NUMA nodes, and no other instance is running on the system.
There is no special configuration in nova-compute on the compute node or in nova-scheduler on the controller.
Can you please suggest a resolution to this problem?
Thanks
Subhajit
** Affects: nova
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2051479
Title:
instance always taking numa 0 first from host, even if flavor is
configured to take memory from numa 1
Status in OpenStack Compute (nova):
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2051479/+subscriptions