[Bug 1742826] Re: Nova reports wrong quota usage
There is a well-known issue with quotas "going out of sync" in Nova
versions Ocata and earlier, which is why the 'nova-manage project
quota_usage_refresh' command exists in those releases. Quotas being
out of sync means that the quota_usages records do not match the
actual resources being consumed. This can occur due to races while
restarting nova-compute, etc.
Quota usage is synced up with actual consumption per request for a
project/user, but this will not catch the case where user A has quota
usage out of sync in project 1 and user B tries to boot an instance in
project 1. User B can be blocked from creating an instance because of
user A's unsynced quota usage.
In that case, the way to sync up everything is to run the 'nova-manage
project quota_usage_refresh' command for every project/user that has
usage out-of-sync. Some operators do this in a cron job.
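For reference, a minimal sketch of what that looks like (the project and user IDs below are placeholders, and the exact flags can vary slightly by release, so check 'nova-manage project quota_usage_refresh --help' on your deployment):

  # Refresh all quota usage records for one project
  nova-manage project quota_usage_refresh --project <project-id>

  # Refresh a single resource (e.g. instances) for one user in that project
  nova-manage project quota_usage_refresh --project <project-id> --user <user-id> --key instances

A cron job would simply loop over the projects you care about, assuming admin credentials are available to the script, e.g.:

  for p in $(openstack project list -f value -c ID); do
      nova-manage project quota_usage_refresh --project "$p"
  done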
Starting in Pike, the quota_usages and reservations tables are no longer
used and actual resource usage is counted on-the-fly instead of being
tracked separately as 'quota_usages'. So, in Pike and onward, it's no
longer possible for quota usage to get out-of-sync.
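To illustrate the difference, here is a rough conceptual sketch against the Ocata/Pike-era 'nova' database schema (this is not how the Pike code actually does the counting, which goes through the API/objects layer): with counting quotas, "instances in use" is effectively derived from the live instance records rather than read back from quota_usages.

  # Conceptual: derive per-project usage directly from live data
  # (column names are from the Nova 'instances' table; adjust the
  # mysql connection options to your deployment)
  mysql nova -e "SELECT project_id,
                        COUNT(*)       AS instances,
                        SUM(vcpus)     AS cores,
                        SUM(memory_mb) AS ram
                 FROM instances
                 WHERE deleted = 0
                 GROUP BY project_id;"

Because there is no separately maintained counter, there is nothing left to drift out of sync.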
Unfortunately, for Ocata and earlier we can't fix the fundamental
design of quota_usages, so the out-of-sync issues have to be resolved
by running the 'nova-manage project quota_usage_refresh' command.
** Changed in: nova
Status: New => Won't Fix
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1742826
Title:
Nova reports wrong quota usage
Status in OpenStack Compute (nova):
Won't Fix
Bug description:
(originally reported by David Manchado in
https://bugzilla.redhat.com/show_bug.cgi?id=1528643 )
Description of problem:
Nova reports inaccurate quota usage. This can even prevent spawning new instances when the project should still have room for more resources.
Version-Release number of selected component (if applicable):
Ocata.
Nova related RPMs:
openstack-nova-scheduler-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
openstack-nova-console-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
python-nova-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
openstack-nova-conductor-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
puppet-nova-10.4.2-0.20171127233709.eb1fafa.el7.centos.noarch
openstack-nova-api-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
openstack-nova-compute-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
openstack-nova-placement-api-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
openstack-nova-cert-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
openstack-nova-common-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
python2-novaclient-7.1.2-1.el7.noarch
openstack-nova-novncproxy-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
How reproducible:
Not sure. We got into this situation during an upgrade from Newton to Ocata.
I guess it might be due to instance deletion while some services like Galera and/or RabbitMQ were not behaving properly.
Actual results:
Nova reports a given project to be using 39 instances while openstack server list reports 17.
openstack limits show --absolute --project XXXX | grep Instances
| maxTotalInstances | 48 |
| totalInstancesUsed | 39 |
openstack server list --project XXXX --format csv | wc -l
18 (note there is an extra line for the csv header)
Expected results:
openstack limits show (currently inaccurate) should match openstack server list (accurate).
Additional info:
While doing some troubleshooting on nova.quota_usages I found several project/resource combinations defined more than once:
SELECT * FROM (SELECT project_id,resource,COUNT(*) times FROM nova.quota_usages GROUP BY project_id, resource) as T WHERE times > 1;
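If that query turns up duplicates, a follow-up that can help (column names are from the Ocata-era schema; 'XXXX' is a placeholder project ID) is to list the individual quota_usages rows for the affected project so the stale in_use values can be compared against the real count from openstack server list above:

  # All quota_usages rows for the project, including soft-deleted ones
  mysql nova -e "SELECT id, user_id, resource, in_use, reserved, deleted
                 FROM quota_usages
                 WHERE project_id = 'XXXX'
                 ORDER BY resource, user_id;"

Whatever the rows show, the supported way to fix the counters is the quota_usage_refresh command mentioned above, rather than editing quota_usages by hand.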
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1742826/+subscriptions