
yahoo-eng-team team mailing list archive

[Bug 1417723] [NEW] when using dedicated cpus, the guest topology doesn't match the host

 

Public bug reported:

According to http://specs.openstack.org/openstack/nova-specs/specs/juno/approved/virt-driver-cpu-pinning.html, the topology of the guest is set up as follows:

"In the absence of an explicit vCPU topology request, the virt drivers
typically expose all vCPUs as sockets with 1 core and 1 thread. When
strict CPU pinning is in effect the guest CPU topology will be setup to
match the topology of the CPUs to which it is pinned."
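(Per that default, an unpinned guest with 2 vCPUs would be exposed as <topology sockets='2' cores='1' threads='1'/>; the mismatch described below only appears with pinning in effect.)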

What I'm seeing is that when strict CPU pinning is in use, the guest
topology seems to be configured with multiple threads even if the host
doesn't have threading enabled.

As an example, I set up a flavor with 2 vCPUs and enabled dedicated
CPUs.  I then booted an instance of this flavor on two separate
compute nodes, one with hyperthreading enabled and one with
hyperthreading disabled.  In both cases, "virsh dumpxml" gave the
following topology:

<topology sockets='1' cores='1' threads='2'/>

When running on the system with hyperthreading disabled, this should
presumably have been set to "cores=2 threads=1".
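
For reference, the reproduction amounts to something like the following
(the flavor name and image are placeholders; hw:cpu_policy=dedicated is
the extra spec defined in the spec linked above):

  # Create a 2-vCPU flavor and request dedicated (pinned) host CPUs
  nova flavor-create pinned.small auto 2048 20 2
  nova flavor-key pinned.small set hw:cpu_policy=dedicated

  # Boot an instance, then inspect the generated libvirt XML on the
  # compute node it lands on
  nova boot --flavor pinned.small --image <image> pinned-test
  virsh dumpxml <instance domain> | grep topology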

Taking this a bit further, even if hyperthreading is enabled on the host
it would be more accurate to specify multiple threads in the guest
topology only if the vCPUs are actually affined to multiple threads of
the same host core.  Otherwise it would be more accurate to expose the
guest topology as multiple cores of one thread each.
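
One rough way to check this on the host (standard Linux sysfs paths;
the domain name and pCPU number are placeholders) is to compare the
pinning reported by libvirt against the host's thread sibling lists:

  # Show which host pCPU each vCPU is pinned to
  virsh vcpupin <instance domain>

  # List the hyperthread siblings of a given host pCPU; if the pinned
  # pCPUs do not share a thread_siblings_list they are on separate
  # cores, and the guest should see threads='1'
  cat /sys/devices/system/cpu/cpu4/topology/thread_siblings_list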

** Affects: nova
     Importance: Undecided
         Status: New


** Tags: compute

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417723

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417723/+subscriptions

