yahoo-eng-team team mailing list archive

[Bug 1429084] [NEW] local_gb in hypervisor statistics is wrong when using rbd

 

Public bug reported:

env:
  two compute nodes (node-1, node-2)
  node-4 as the storage node
  Ceph as the storage backend (instance disks on RBD; see the sketch below)
  Icehouse
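
This setup corresponds to Nova's libvirt driver keeping instance disks in RBD. A minimal nova.conf sketch, assuming the Icehouse-era [libvirt] option names and a hypothetical pool name "vms":

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms                        # hypothetical pool name
    images_rbd_ceph_conf = /etc/ceph/ceph.conf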

description:
# ceph -s

    cluster 97cbee3f-26dc-4f03-aa1c-e4af9xxxxx
     health HEALTH_OK
     monmap e5: 3 mons at {node-1=10.11.0.2:6789/0,node-2=10.11.0.6:6789/0,node-4=10.11.0.7:6789/0}, election epoch 30, quorum 0,1,2 node-1,node-2,node-4
     osdmap e153: 6 osds: 6 up, 6 in
      pgmap v2543: 1216 pgs, 6 pools, 2792 MB data, 404 objects
            5838 MB used, 11163 GB / 11168 GB avail
                1215 active+clean
                   1 active+clean+scrubbing
  client io 0 B/s rd, 2463 B/s wr, 1 op/s

The total storage reported above is "11163 GB / 11168 GB", i.e. 11163 GB available out of an 11168 GB total.

But when the same figures are retrieved through the Nova API (GET /os-hypervisors/statistics, the same data shown by "nova hypervisor-stats"), the result is as follows:

{"hypervisor_statistics": {"count": 2, "error_vms": 0, "vcpus_used": 6,
"total_vms": 5, "run_vms": 5, "local_gb_used": 0, "memory_mb": 241915,
"current_workload": 0, "vcpus": 48, "running_vms": 5, "free_disk_gb":
22336, "stop_vms": 0, "disk_available_least": 22326, "local_gb": 22336,
"free_ram_mb": 234747, "memory_mb_used": 7168}}

"disk_available_least" is 22326, "local_gb" is 22336

reason:

There are two compute nodes, and each one reports its own resources to the controller node. Because every compute node obtains the disk statistics of the shared Ceph cluster through the RADOS client, the cluster capacity is counted once per node, which doubles "disk_available_least" and "local_gb".
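
A minimal sketch of the per-node query, assuming the python-rados bindings and a readable /etc/ceph/ceph.conf (Nova's libvirt RBD backend issues an equivalent cluster-wide query on every node):

    import rados

    # Connect to the Ceph cluster the same way each nova-compute node does.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        stats = cluster.get_cluster_stats()   # cluster-wide totals, in KB
        total_gb = stats['kb'] // (1024 * 1024)
        avail_gb = stats['kb_avail'] // (1024 * 1024)
        # Every node sees the same ~11168 GB total for the shared cluster.
        print('total: %d GB, avail: %d GB' % (total_gb, avail_gb))
    finally:
        cluster.shutdown()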

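The statistics API then sums local_gb over all hypervisors, so the shared capacity is counted once per compute node (illustration only, not Nova's actual aggregation code):

    # Each node reports the full cluster capacity as its "local" disk.
    reports = {'node-1': 11168, 'node-2': 11168}   # local_gb per node, in GB
    print(sum(reports.values()))                   # 22336, double the real 11168
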
** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429084


