[Bug 1464286] [NEW] NumaTopologyFilter not behaving as expected (returns 0 hosts)
Public bug reported:
I have a system with 32 logical CPUs (2 sockets, 8 cores per socket, hyperthreading enabled).
The NUMA topology is as follows:
numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 65501 MB
node 0 free: 38562 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 65535 MB
node 1 free: 63846 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
I have defined a flavor in OpenStack with 12 vCPUs as follows:
nova flavor-show c4.3xlarge
+----------------------------+------------------------------------------------------+
| Property                   | Value                                                |
+----------------------------+------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                    |
| disk                       | 40                                                   |
| extra_specs                | {"hw:cpu_policy": "dedicated", "hw:numa_nodes": "1"} |
| id                         | 1d76a225-90c1-4f6f-a59b-000795c33e63                 |
| name                       | c4.3xlarge                                           |
| os-flavor-access:is_public | True                                                 |
| ram                        | 24576                                                |
| rxtx_factor                | 1.0                                                  |
| swap                       | 8192                                                 |
| vcpus                      | 12                                                   |
+----------------------------+------------------------------------------------------+
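For reference, a flavor with these properties could be (re)created along the following lines; this is reconstructed from the flavor-show output above and is not necessarily the exact sequence of commands originally used:

nova flavor-create c4.3xlarge auto 24576 40 12 --swap 8192
nova flavor-key c4.3xlarge set hw:cpu_policy=dedicated hw:numa_nodes=1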
I expect to be able to launch two instances of this flavor on this host, one
contained entirely within each NUMA node.
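The arithmetic behind that expectation, using the figures from the numactl and flavor-show output above:

per host NUMA node: 16 logical CPUs and roughly 64 GB of RAM
per instance (hw:numa_nodes=1): 12 dedicated vCPUs and 24576 MB of RAM, all taken from a single host node

Since 12 <= 16 and 24576 MB is well under 64 GB, one instance should fit entirely within node 0 and a second entirely within node 1.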
When I launch two instances, the first succeeds but the second fails (the NumaTopologyFilter returns zero hosts for it).
The instance XML is attached, along with the host's virsh capabilities output.
If I change hw:numa_nodes to 2, so that each guest is split across two NUMA
nodes, then I can launch two copies of the instance.
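For reference, that workaround is a single extra-spec change, i.e. something along the lines of:

nova flavor-key c4.3xlarge set hw:numa_nodes=2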
N.B. For the purposes of testing I have disabled all vcpu_pin_set (nova) and
isolcpus (kernel) settings.
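To spell that out, the relevant nova.conf options during the test looked roughly like the following (reconstructed, exact config not attached; the placeholder stands for whatever other filters are enabled):

scheduler_default_filters = <other default filters>,NumaTopologyFilter
# vcpu_pin_set deliberately left unset for this test

and the kernel command line contains no isolcpus= entry.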
** Affects: nova
Importance: Undecided
Status: New
** Attachment added: "Virsh capabilities and instance xml"
https://bugs.launchpad.net/bugs/1464286/+attachment/4413249/+files/instance_and_virsh_data.xml