openstack team mailing list archive
I've been reading the Essex docs on scheduling. It seems the default
is to overcommit CPU resources and to schedule on the system with the
most available RAM. I would like to *not* overcommit CPU and to fill a
compute node before scheduling on a new one (I'm thinking about
powering down idle nodes and waking them on demand, but that's
probably a way off). Essentially I want to reverse both defaults. It
seems to me this config fragment in nova.conf should do it (making
some defaults explicit, adding CoreFilter, and changing the cost
weight from -1.0 to 1.0):
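Something along these lines (a sketch from memory of the Essex-era flag
names; I'm not certain I have every name exactly right):

```ini
# nova.conf -- scheduler settings (Essex flag names; treat as a sketch)
scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.standard_filters
# defaults made explicit, plus CoreFilter added at the end
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,CoreFilter
# do not overcommit CPU (the default ratio is 16.0)
cpu_allocation_ratio=1.0
# same default cost function, but with the weight flipped from -1.0
# (spread across hosts) to 1.0 (fill one host first)
least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
compute_fill_first_cost_fn_weight=1.0
```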
I made those changes, restarted the nova-scheduler service, and then
launched 100 t1.tiny instances. I have several hundred available
vCPUs, with 24 vCPUs and 48G of RAM per compute node, so I expected to
get 24 instances, using a total of 12G of RAM, per node across 5 nodes.
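To make sure I'm asking for the right thing, here is a toy sketch of
the two-phase filter-then-weigh scheduling I expected: a core filter
that refuses hosts over the CPU ratio, and a fill-first cost function
(positive weight) that prefers the host with the least free RAM. The
function and field names are hypothetical simplifications, not nova's
actual code:

```python
from collections import Counter

def core_filter_passes(host, instance_vcpus, cpu_allocation_ratio=1.0):
    """Reject hosts whose vCPU commitment would exceed the ratio."""
    limit = host["vcpus_total"] * cpu_allocation_ratio
    return host["vcpus_used"] + instance_vcpus <= limit

def fill_first_cost(host, weight=1.0):
    """Positive weight * free RAM: lowest cost = least free = fill first."""
    return weight * host["ram_free_mb"]

def schedule(hosts, instance_vcpus=1, instance_ram_mb=512):
    """Filter out over-committed hosts, then pick the cheapest survivor."""
    candidates = [h for h in hosts if core_filter_passes(h, instance_vcpus)]
    if not candidates:
        return None
    best = min(candidates, key=fill_first_cost)
    best["vcpus_used"] += instance_vcpus
    best["ram_free_mb"] -= instance_ram_mb
    return best["name"]

# Five 24-core / 48G nodes: with the ratio at 1.0, no node should
# accept more than 24 single-vCPU instances.
hosts = [{"name": "node%d" % i, "vcpus_total": 24, "vcpus_used": 0,
          "ram_free_mb": 48 * 1024} for i in range(5)]
placements = [schedule(hosts) for _ in range(100)]
print(Counter(placements))  # four nodes take 24 each; the fifth takes 4
```

With both pieces in effect, RAM never becomes the binding constraint
(48G holds 96 tiny instances), so the 24-vCPU cap is what should stop
each node at 24.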
That is not what happened: the first 93 instances landed on one node
(which then had committed all of its RAM), with the next 7 on another
node. Clearly the cost weighting is working (I have a couple of nodes
with 96G of RAM and the instances did not go there), but the
CoreFilter doesn't seem to be.
Is there anything obviously wrong with my config? If not, is there a
way to check the runtime configuration to see whether the scheduler
thinks it's applying the CoreFilter?
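One thing I may try (an assumption on my part, not something I've
confirmed against the Essex code) is turning on chatty logging in
nova.conf and watching the scheduler log for filter activity:

```ini
# nova.conf -- assumption: verbose/debug make the scheduler log
# which hosts survive each filtering pass
verbose=True
debug=True
```

After restarting nova-scheduler, grepping its log (typically
/var/log/nova/nova-scheduler.log) around a launch should show whether
any filtering on cores is actually happening.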