[Bug 1697219] [NEW] VMWareVCDriver Incorrectly reports total datastore capacity
Public bug reported:
Description
===========
I'm surprised not to have found reference to this already, so forgive me if this is just user error.
It seems that when using the VMwareVCDriver with a VMware cluster that has
multiple datastores, the total storage capacity reported to Nova is that
of the selected datastore rather than the combined capacity of all valid
datastores.
Further, in the logs I see that nova.compute.resource_tracker lists phys_disk
as the total capacity of the selected datastore, while used_disk appears to
be the total used storage across all datastores (or possibly the total
storage consumed by deployed instances; I'm not sure how that figure is
calculated).
The end result is that with, say, two 6 TB datastores, once you have more
than 3 TB of data on each of them, your total used_disk figure exceeds the
total capacity returned by phys_disk. That makes the cluster an invalid
destination, and you can no longer start instances, even though in reality
you still have almost 6 TB of available storage.
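In numbers, the mismatch above could look like the following (a minimal sketch; the field names mirror the resource tracker log output, and the way each figure is derived here is my assumption, not the driver's actual code):

```python
# Illustrative figures for two 6 TB datastores, each holding 3.5 TB of data.
# Working in GB for readability.
TB = 1024

datastores = [
    {"capacity": 6 * TB, "used": 3.5 * TB},
    {"capacity": 6 * TB, "used": 3.5 * TB},
]

# phys_disk appears to be the capacity of only the selected datastore...
phys_disk = max(ds["capacity"] for ds in datastores)

# ...while used_disk appears to be usage summed across all datastores.
used_disk = sum(ds["used"] for ds in datastores)

# 7168 GB used vs. 6144 GB of "total" capacity: the tracker concludes the
# cluster is over capacity, even though ~5 TB is actually still free.
print(used_disk > phys_disk)  # True
```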
Assuming this is valid, and I haven't done something stupid somewhere, I
think there are two possible solutions:
1. Correct the total phys_disk figure to account for all valid datastores
2. Adjust the used_disk figure to only return the used amount for the selected datastore
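A minimal sketch of option 1 might look like this. The `total_capacity_gb` helper and the datastore records are hypothetical stand-ins, not Nova's actual API; the only assumption taken from the report is that a datastore regex may filter which datastores count as valid:

```python
import re

# Hypothetical datastore records; in Nova these would come from vCenter.
datastores = [
    {"name": "datastore1", "capacity_gb": 6144, "free_gb": 2600},
    {"name": "datastore2", "capacity_gb": 6144, "free_gb": 2500},
    {"name": "scratch-ds", "capacity_gb": 500, "free_gb": 500},
]

def total_capacity_gb(datastores, datastore_regex=None):
    """Sum capacity across all datastores matching the optional regex,
    instead of reporting only the one with the most free space."""
    pattern = re.compile(datastore_regex) if datastore_regex else None
    return sum(
        ds["capacity_gb"]
        for ds in datastores
        if pattern is None or pattern.match(ds["name"])
    )

print(total_capacity_gb(datastores, r"^datastore\d+$"))  # 12288
```

With this approach, phys_disk for the two 6 TB datastores would be 12 TB, so the summed used_disk figure could never exceed it.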
Steps to reproduce
==================
To reproduce, set up an environment using vCenter. Provision more than one datastore, then begin filling them. With two 100 GB datastores, filling each beyond 50 GB should reproduce the problem.
Expected result
===============
I would expect that for a given cluster the total storage capacity would be the sum of the capacities of the valid datastores.
Actual result
=============
The actual result is that the total capacity reported is that of the single datastore with the largest amount of free space (and that matches the datastore regex, if one is provided).
Environment
===========
1. Exact version of OpenStack you are running.
dpkg -l | grep nova
ii  nova-common           2:15.0.2-0ubuntu1~cloud0  all  OpenStack Compute - common files
ii  nova-compute          2:15.0.2-0ubuntu1~cloud0  all  OpenStack Compute - compute node base
ii  nova-compute-kvm      2:15.0.2-0ubuntu1~cloud0  all  OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt  2:15.0.2-0ubuntu1~cloud0  all  OpenStack Compute - compute node libvirt support
ii  python-nova           2:15.0.2-0ubuntu1~cloud0  all  OpenStack Compute Python libraries
2. Which hypervisor did you use?
vCenter 6.5 + ESXi 6.5
3. Which storage type did you use?
VMware Datastore (VMFS 6)
** Affects: nova
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1697219