yahoo-eng-team team mailing list archive

[Bug 1466780] [NEW] nova libvirt pinning not reflected in VirtCPUTopology

Public bug reported:

Using a CPU policy of dedicated ('hw:cpu_policy=dedicated') results in
vCPUs being pinned to pCPUs, per the original blueprint:

    http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html

When scheduling an instance with this extra spec, it would be expected
that the 'VirtCPUTopology' object used by 'InstanceNUMACell' objects
(which are in turn used by an 'InstanceNUMATopology' object) should
reflect the actual configuration. For example, a VM booted with four
vCPUs and the 'dedicated' CPU policy should have a CPU topology similar
to one of the below:

    VirtCPUTopology(cores=4,sockets=1,threads=1)
    VirtCPUTopology(cores=2,sockets=1,threads=2)
    VirtCPUTopology(cores=1,sockets=2,threads=2)
    ...

In summary, cores * sockets * threads = vCPUs. However, this does not
appear to happen.
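
That is, the following invariant would be expected to hold. A minimal
sketch in Python against nova's object model, where 'instance' and
'flavor' are assumed to be already-loaded nova objects:

    # The reported guest CPU topology should multiply out to the
    # flavor's vCPU count.
    topo = instance.numa_topology.cells[0].cpu_topology
    assert topo.cores * topo.sockets * topo.threads == flavor.vcpus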

---

# Testing Configuration

Testing was conducted on a single-node, Fedora 21-based
(3.17.8-300.fc21.x86_64) OpenStack instance (built with devstack). The
system is dual-socket, HT-enabled, with 10 cores per socket (2 sockets
* 10 cores * 2 threads = 40 "pCPUs"; CPUs 0-9,20-29 are on node0 and
CPUs 10-19,30-39 on node1).
Two flavors were used:

    openstack flavor create --ram 4096 --disk 20 --vcpus 10 demo.no-pinning

    openstack flavor create --ram 4096 --disk 20 --vcpus 10 demo.pinning
    nova flavor-key demo.pinning set hw:cpu_policy=dedicated hw:cpu_threads_policy=separate
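
For reference, the host layout described above can be confirmed with
'numactl --hardware', and a test instance can then be booted against
either flavor along these lines (the image name is illustrative):

    numactl --hardware
    nova boot --flavor demo.pinning --image fedora-21 test.pinning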

# Results

Results vary; however, we have observed seemingly arbitrary assignments
such as the following:

For a three vCPU instance:

    (Pdb) p instance.numa_topology.cells[0].cpu_topology
    VirtCPUTopology(cores=10,sockets=1,threads=1)

(10 * 1 * 1 = 10, which bears no relation to the three vCPUs
requested.)

For a four vCPU instance:

    VirtCPUTopology(cores=2,sockets=1,threads=2)

(The product happens to match here: 2 * 1 * 2 = 4.)

For a ten vCPU instance:

    VirtCPUTopology(cores=7,sockets=1,threads=2)

(7 * 1 * 2 = 14, not the ten vCPUs requested.)
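
For anyone trying to reproduce this, the same field can also be read
outside of pdb. A minimal sketch against nova's object model (the
imports and method names reflect the kilo-era code and should be
treated as assumptions):

    from nova import context
    from nova import objects

    objects.register_all()
    ctxt = context.get_admin_context()
    uuid = '<instance uuid>'  # placeholder for the instance under test
    topo = objects.InstanceNUMATopology.get_by_instance_uuid(ctxt, uuid)
    for cell in topo.cells:
        # each InstanceNUMACell carries the VirtCPUTopology in question
        print(cell.cpu_topology)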

The underlying libvirt XML is, however, correct. For example, for the
three vCPU instance:

    <cputune>
        <shares>3072</shares>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='0'/>
        <vcpupin vcpu='2' cpuset='25'/>
    </cputune>
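
This XML was read on the compute host; something like the following
retrieves it (the domain name is illustrative):

    sudo virsh dumpxml instance-00000001 | grep -A 5 '<cputune>'

The 'shares' value of 3072 is expected, assuming nova's usual 1024 CPU
shares per vCPU (3 * 1024 = 3072), and all three pCPUs used (0, 1 and
25) sit on node0, so the pinning itself looks sane.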

** Affects: nova
     Importance: Undecided
     Assignee: Stephen Finucane (sfinucan)
         Status: New


** Tags: libvirt numa

** Changed in: nova
     Assignee: (unassigned) => Stephen Finucane (sfinucan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1466780

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1466780/+subscriptions

