openstack team mailing list archive
Message #12733
Re: Quotas... 1 of 1 instances? What's the deal?
And I opened one 3 weeks ago! :) Just marked the one you did as a duplicate – happy to have it go either way though, no worries.
https://bugs.launchpad.net/nova/+bug/998199
The error message is confusing, largely because the variable names are confusing, I suspect. Anyway, there should be some fixes here for sure, and we probably want a way to do some more robust checking. Basically, the issue is that if you fail on any sort of hard limit right now (CPUs, RAM, instance count), you get the same error message. Even if we fix up that error message to give the actual numbers, it's still not quite right, because the failure could be for something other than the instance count.
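To make that concrete, here's a minimal sketch of the kind of checking I mean (illustrative only, not Nova's actual quota code; the class and function names are made up). If the exception carried the resource name and the real numbers, the API layer could say which limit was actually hit:

class OverQuota(Exception):
    """Raised when a request would push a resource past its hard limit."""

    def __init__(self, resource, requested, used, limit):
        self.resource = resource      # e.g. 'instances', 'cores', 'ram'
        self.requested = requested
        self.used = used
        self.limit = limit
        super(OverQuota, self).__init__(
            "Quota exceeded for %s: requested %d, already used %d of %d"
            % (resource, requested, used, limit))

def check_quotas(deltas, usages, limits):
    """Fail on the first resource whose usage plus the request exceeds its limit."""
    for resource, requested in deltas.items():
        used = usages.get(resource, 0)
        limit = limits.get(resource)
        if limit is not None and used + requested > limit:
            raise OverQuota(resource, requested, used, limit)

# Example: asking for one more instance when all 10 are already used would report
# the instance limit specifically, not a generic "Quota exceeded":
# check_quotas({'instances': 1, 'cores': 1, 'ram': 512},
#              {'instances': 10, 'cores': 10, 'ram': 5120},
#              {'instances': 10, 'cores': 20, 'ram': 51200})
# -> OverQuota: Quota exceeded for instances: requested 1, already used 10 of 10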
From: Daryl Walleck <daryl.walleck@xxxxxxxxxxxxx>
Date: Mon, 4 Jun 2012 22:43:17 +0000
To: Jay Pipes <jaypipes@xxxxxxxxx>, "openstack@xxxxxxxxxxxxxxxxxxx" <openstack@xxxxxxxxxxxxxxxxxxx>, Kevin Mitchell <kevin.mitchell@xxxxxxxxxxxxx>
Subject: Re: [Openstack] Quotas... 1 of 1 instances? What's the deal?
Hey Jay,
I'm seeing the same incorrect messaging. From what I've observed, this happens when you exceed your quota. The failure is right but the message is wrong. I opened a bug for this last week.
https://bugs.launchpad.net/nova/+bug/1006218
-------- Original message --------
Subject: [Openstack] Quotas... 1 of 1 instances? What's the deal?
From: Jay Pipes <jaypipes@xxxxxxxxx>
To: "openstack@xxxxxxxxxxxxxxxxxxx" <openstack@xxxxxxxxxxxxxxxxxxx>, Kevin Mitchell <kevin.mitchell@xxxxxxxxxxxxx>
Hi Kevin, Stackers,
In Horizon, my tenant/user clearly says that 10 instances is my quota,
and yet trying to create a single server I'm getting this:
jpipes@uberbox:~/repos/tempest$ nosetests -v --nologcapture
======================================================================
ERROR: test suite for <class 'tempest.tests.compute.test_servers_negative.ServersNegativeTest'>
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/nose-1.1.2-py2.7.egg/nose/suite.py", line 208, in run
    self.setUp()
  File "/usr/local/lib/python2.7/dist-packages/nose-1.1.2-py2.7.egg/nose/suite.py", line 291, in setUp
    self.setupContext(ancestor)
  File "/usr/local/lib/python2.7/dist-packages/nose-1.1.2-py2.7.egg/nose/suite.py", line 314, in setupContext
    try_run(context, names)
  File "/usr/local/lib/python2.7/dist-packages/nose-1.1.2-py2.7.egg/nose/util.py", line 478, in try_run
    return func()
  File "/home/jpipes/repos/tempest/tempest/tests/compute/test_servers_negative.py", line 35, in setUpClass
    cls.server = cls.create_server()
  File "/home/jpipes/repos/tempest/tempest/tests/base_compute_test.py", line 117, in create_server
    server_name, image_id, flavor)
  File "/home/jpipes/repos/tempest/tempest/services/nova/json/servers_client.py", line 59, in create_server
    resp, body = self.post('servers', post_body, self.headers)
  File "/home/jpipes/repos/tempest/tempest/common/rest_client.py", line 152, in post
    return self.request('POST', url, headers, body)
  File "/home/jpipes/repos/tempest/tempest/common/rest_client.py", line 205, in request
    raise exceptions.OverLimit(resp_body['overLimit']['message'])
OverLimit: Quota exceeded
Details: Quota exceeded: already used 1 of 1 instances
But there are no instances at all on the box:
jpipes@uberbox:~/repos/tempest$ virsh list --all
Id Name State
----------------------------------
When I check the DB, though, I'm seeing the following:
mysql> select project_id, in_use, reserved, until_refresh from quota_usages where resource = 'instances';
+----------------------------------+--------+----------+---------------+
| project_id                       | in_use | reserved | until_refresh |
+----------------------------------+--------+----------+---------------+
| 287a92da0cf14a27a43c8737417b029d |      0 |       10 | NULL          |
| f0c72dea9fda459aac64de460300e1ec |      0 |        2 | NULL          |
+----------------------------------+--------+----------+---------------+
2 rows in set (0.00 sec)
What's the deal here? Tempest needs to create and delete servers in
rapid succession, and it seems the reservation system might not be able
to keep up?
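For what it's worth, here's a toy model of how I understand reservation-style accounting to work (purely illustrative; this is not Nova's quota code, and all of the names are invented). If outstanding reservations count against the limit until they're committed or rolled back, then a burst of creates whose reservations never get cleaned up would make the quota look exhausted even with zero instances running, which is exactly what the in_use = 0, reserved = 10 row above looks like:

class ToyQuota(object):
    """Toy in-memory model of reservation-based quota accounting."""

    def __init__(self, limit):
        self.limit = limit
        self.in_use = 0       # committed usage
        self.reserved = 0     # outstanding, uncommitted reservations

    def reserve(self, count=1):
        # Committed usage and pending reservations both count toward the limit.
        if self.in_use + self.reserved + count > self.limit:
            raise Exception("Quota exceeded: already used %d of %d instances"
                            % (self.in_use + self.reserved, self.limit))
        self.reserved += count

    def commit(self, count=1):
        # A reservation becomes real usage once the instance actually exists.
        self.reserved -= count
        self.in_use += count

    def rollback(self, count=1):
        # Failed or deleted requests must release their reservations, or they leak.
        self.reserved -= count

# q = ToyQuota(limit=10)
# for _ in range(10):
#     q.reserve()            # ten creates in rapid succession, never committed or rolled back
# (q.in_use, q.reserved)     # -> (0, 10): the same shape as the quota_usages row above
# q.reserve()                # -> Exception: Quota exceeded: already used 10 of 10 instances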
At a minimum, I think the OverLimit message "Quota exceeded: already used 1 of 1 instances" should be updated so that it isn't so obviously wrong about the actual value of the resource quota.
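Something along these lines (just a sketch of the message format; the helper is hypothetical, not Nova code) would at least name the right resource and the right numbers:

def format_over_limit(resource, requested, used, limit):
    # Hypothetical formatting helper; shows the shape of a clearer message only.
    return ("Quota exceeded for %(resource)s: requested %(requested)d, "
            "but already used %(used)d of %(limit)d %(resource)s"
            % {'resource': resource, 'requested': requested,
               'used': used, 'limit': limit})

# format_over_limit('instances', 1, 0, 10)
# -> 'Quota exceeded for instances: requested 1, but already used 0 of 10 instances'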
Thanks,
-jay
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to : openstack@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp