
yahoo-eng-team team mailing list archive

[Bug 1467927] Re: Odd number of vCPUs breaks 'prefer' threads policy

I was unsure how important the CPU topology exposed to the guest was,
but you're correct in saying that using a best-effort 'prefer' policy
would result in bad scheduler decisions. We still have an implicit
'separate' policy for odd numbers of cores and an implicit 'prefer'
policy for even numbers, but since we don't really support thread
policies yet, this isn't really a bug.

I will close this bug and keep the above in mind when adding support for
the thread policies.

** Changed in: nova
       Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467927

Title:
  Odd number of vCPUs breaks 'prefer' threads policy

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Using a CPU policy of dedicated ('hw:cpu_policy=dedicated') results in
  vCPUs being pinned to pCPUs, per the original blueprint:

      http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html

  When scheduling an instance with this extra spec, there appears to be an
  implicit use of the 'prefer' threads policy, i.e. where possible vCPUs
  are pinned to thread siblings first. This is "implicit" because the
  threads policy aspect of this spec has not yet been implemented.

  However, this implicit 'prefer' policy breaks when a VM with an odd
  number of vCPUs is booted. This has been seen on a Hyper-Threading-
  enabled host where "sibling sets" are two threads long, but it would
  presumably happen on any host where neither the number of siblings
  nor any number between that value and one is a factor of the number
  of vCPUs (i.e. vCPUs % n != 0 for all n such that 1 < n <= siblings).
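
  A minimal sketch of that failing condition (hypothetical helper
  function, not nova code):

      # Hypothetical illustration of the condition above; not nova code.
      def prefer_policy_breaks(vcpus, siblings):
          """True when no n with 1 < n <= siblings divides vcpus evenly."""
          return all(vcpus % n != 0 for n in range(2, siblings + 1))

      assert prefer_policy_breaks(3, 2)      # odd vCPUs on an HT host
      assert not prefer_policy_breaks(4, 2)  # even vCPUs pack cleanly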

  It is reasonable to expect that a three-vCPU VM, for example, should
  make a best effort and use siblings for at least the first two vCPUs
  of the VM (assuming a host with Hyper-Threading and sibling sets of
  length two). This would give us a true best-effort implementation, as
  sketched below.
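
  One best-effort approach would consume whole sibling sets first and
  partially fill a final set for any remainder; a minimal sketch
  (hypothetical, not the nova implementation):

      # Hypothetical best-effort sibling-first pinning; not nova code.
      def best_effort_pin(vcpus, sibling_sets):
          """Pin to whole sibling sets first, then partially fill one."""
          pinning = []
          for siblings in sibling_sets:
              want = vcpus - len(pinning)
              pinning.extend(siblings[:want])  # whole set, or just the tail
              if len(pinning) == vcpus:
                  break
          return pinning

      # Three vCPUs on an HT host: one full sibling pair, then a lone thread.
      print(best_effort_pin(3, [[0, 20], [1, 21]]))  # [0, 20, 1]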

  ---

  # Testing Configuration

  Testing was conducted on a single-node, Fedora 21-based
  (3.17.8-300.fc21.x86_64) OpenStack instance (built with devstack).
  The system is a dual-socket, 10-core, HT-enabled system (2 sockets *
  10 cores * 2 threads = 40 "pCPUs"; 0-9,20-29 = node0, 10-19,30-39 =
  node1). Two flavors were used:

      openstack flavor create --ram 4096 --disk 20 --vcpus 3 demo.odd
      nova flavor-key demo.odd set hw:cpu_policy=dedicated

      openstack flavor create --ram 4096 --disk 20 --vcpus 4 demo.even
      nova flavor-key demo.even set hw:cpu_policy=dedicated
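
  The host's sibling sets can be confirmed from sysfs beforehand; a
  minimal sketch (assuming a Linux host with the standard sysfs
  topology layout):

      # List the distinct thread-sibling sets reported by the kernel.
      from pathlib import Path

      seen = set()
      for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
          sibs = (cpu / "topology" / "thread_siblings_list").read_text().strip()
          if sibs not in seen:
              seen.add(sibs)
              print(sibs)  # e.g. "0,20" on the host described above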

  # Results

  Correct case ("even" number of vCPUs)
  =====================================

  The output from 'virsh dumpxml [ID]' for the four vCPU VM is given
  below. Similar results can be seen for varying "even" numbers of vCPUs
  (2, 4, 10 tested):

      <cputune>
          <shares>4096</shares>
          <vcpupin vcpu='0' cpuset='3'/>
          <vcpupin vcpu='1' cpuset='23'/>
          <vcpupin vcpu='2' cpuset='26'/>
          <vcpupin vcpu='3' cpuset='6'/>
          <emulatorpin cpuset='3,6,23,26'/>
      </cputune>
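
  Given the topology above, thread siblings on this host appear to be
  pairs of the form (n, n+20), so 3/23 and 6/26 form two full sibling
  pairs, consistent with the implicit 'prefer' behaviour.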

  Incorrect case ("odd" number of vCPUs)
  ======================================

  The output from 'virsh dumpxml [ID]' for the three vCPU VM is given
  below. Similar results can be seen for varying "odd" numbers of vCPUs
  (3, 5 tested):

      <cputune>
          <shares>3072</shares>
          <vcpupin vcpu='0' cpuset='1'/>
          <vcpupin vcpu='1' cpuset='0'/>
          <vcpupin vcpu='2' cpuset='25'/>
          <emulatorpin cpuset='0-1,25'/>
      </cputune>

  This isn't correct: given that sibling pairing, none of pCPUs 0, 1
  and 25 are thread siblings of one another. We would expect something
  closer to this, with two vCPUs on a single sibling pair:

      <cputune>
          <shares>3072</shares>
          <vcpupin vcpu='0' cpuset='0'/>
          <vcpupin vcpu='1' cpuset='20'/>
          <vcpupin vcpu='2' cpuset='1'/>
          <emulatorpin cpuset='0-1,20'/>
      </cputune>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467927/+subscriptions

