yahoo-eng-team team mailing list archive
Message #89731
[Bug 1990238] [NEW] numa_nodes=1 pinned to wrong numa node
Public bug reported:
There is a compute node which has two NUMA nodes; each NUMA node has 2
CPUs and 8G of memory. numactl gives:
# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1
node 0 size: 7574 MB
node 0 free: 5049 MB
node 1 cpus: 2 3
node 1 size: 7874 MB
node 1 free: 7150 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
nova.conf
[DEFAULT]
ram_allocation_ratio = 1.0
[compute]
packing_host_numa_cells_allocation_strategy = True
With the above configuration, nova-compute should use one NUMA node
until it is exhausted and only then pick the next NUMA node. But that is not what happens!
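For reference, here is a minimal sketch of the packing behaviour I expect (illustration only: the cell dicts and the pick_host_cell() helper are made up for this sketch and are not nova's actual code):

# Illustration of the expected "packing" strategy: prefer the NUMA cell
# that already has the most memory in use, and spill to the next cell
# only when the guest no longer fits.
def pick_host_cell(cells, required_mb, ram_allocation_ratio=1.0):
    for cell in sorted(cells, key=lambda c: c["used_mb"], reverse=True):
        if cell["used_mb"] + required_mb <= cell["total_mb"] * ram_allocation_ratio:
            return cell
    return None  # the guest does not fit on any single NUMA cell

cells = [{"id": 0, "total_mb": 7574, "used_mb": 0},
         {"id": 1, "total_mb": 7874, "used_mb": 0}]
for vm in ("vm1", "vm2"):
    cell = pick_host_cell(cells, 6144)
    cell["used_mb"] += 6144
    print(vm, "-> node", cell["id"])
# expected: vm1 -> node 0, vm2 -> node 1, because 2 x 6144 MB cannot fit
# into node 0's 7574 MB with ram_allocation_ratio = 1.0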
Here are the steps to reproduce it:
openstack flavor create --vcpus 2 --ram 6144 --disk 0 --property hw:numa_nodes=1 2c.6g
nova boot --image centos --flavor 2c.6g --host <this compute node> vm1
nova boot --image centos --flavor 2c.6g --host <this compute node> vm2
The flavor has 6G of memory, so on this compute node each NUMA node can
hold only one such VM. But the actual result is:
# virsh list
 Id   Name                State
-----------------------------------
 21   instance-00000056   running
 22   instance-00000057   running
# virsh numatune 21
numa_mode : strict
numa_nodeset : 0
# virsh numatune 22
numa_mode : strict
numa_nodeset : 0
Both VMs are pinned to NUMA node 0, even though NUMA node 0 does not
have sufficient memory!
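A quick sanity check with the numbers from the numactl output above (plain arithmetic, using the node totals and ram_allocation_ratio = 1.0):

guest_mb = 6144
node0_total_mb, node1_total_mb = 7574, 7874
print(2 * guest_mb <= node0_total_mb)  # False: node 0 cannot hold both guests
print(guest_mb <= node1_total_mb)      # True: the second guest fits on node 1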
** Affects: nova
Importance: Undecided
Assignee: Junbo Jiang (junbo)
Status: New
** Changed in: nova
Assignee: (unassigned) => junbo (junbo)
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1990238
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1990238/+subscriptions