[Bug 1182481] Re: Keystone causing high mySQL usage
[Expired for Keystone because there has been no activity for 60 days.]
** Changed in: keystone
Status: Incomplete => Expired
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1182481
Title:
Keystone causing high mySQL usage
Status in OpenStack Identity (Keystone):
Expired
Bug description:
I used this install guide (Grizzly):
https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
1 controller/network node and 4 compute nodes running on Ubuntu 13.04 Server 64-bit. I have Keystone running on the controller with a local MySQL database (15K SAS drives).
As you can see below, mysqld is constantly running at 100% CPU utilization; any API call that needs Keystone authentication triggers this.
10655 mysql 20 0 3970m 554m 7740 S 101.7 3.5 75:53.22 /usr/sbin/mysqld
API calls take 5-8 seconds to return a value; a simple nova list takes
5 seconds. I have checked the keystone logs and nothing is being
logged. I enabled slow query logging in my.cnf (the relevant settings
are sketched after the excerpt below) and noticed the following:
================================================================================================
SET timestamp=1368712775;
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id
FROM token
WHERE token.expires > '2013-05-16 13:59:32' AND token.valid = 0;
# Time: 130516 9:59:39
# User@Host: keystone[keystone] @ [10.70.0.1]
# Query_time: 3.804186 Lock_time: 0.000048 Rows_sent: 3 Rows_examined: 117374
SET timestamp=1368712779;
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id
FROM token
WHERE token.expires > '2013-05-16 13:59:36' AND token.valid = 0;
# Time: 130516 9:59:43
# User@Host: keystone[keystone] @ [10.70.0.1]
# Query_time: 3.800959 Lock_time: 0.000047 Rows_sent: 3 Rows_examined: 117375
SET timestamp=1368712783;
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id
FROM token
WHERE token.expires > '2013-05-16 13:59:39' AND token.valid = 0;
# Time: 130516 9:59:47
# User@Host: keystone[keystone] @ [10.70.0.1]
# Query_time: 3.810399 Lock_time: 0.000048 Rows_sent: 3 Rows_examined: 117377
SET timestamp=1368712787;
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id
FROM token
WHERE token.expires > '2013-05-16 13:59:43' AND token.valid = 0;
# Time: 130516 9:59:51
# User@Host: keystone[keystone] @ [10.70.0.1]
# Query_time: 3.871686 Lock_time: 0.000042 Rows_sent: 3 Rows_examined: 117377
SET timestamp=1368712791;
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id
FROM token
WHERE token.expires > '2013-05-16 13:59:47' AND token.valid = 0;
# Time: 130516 9:59:55
# User@Host: keystone[keystone] @ [10.70.0.1]
# Query_time: 3.928294 Lock_time: 0.000050 Rows_sent: 3 Rows_examined: 117379
SET timestamp=1368712795;
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id
FROM token
WHERE token.expires > '2013-05-16 13:59:51' AND token.valid = 0;
================================================================================================
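(For reference, slow query logging of the kind shown above can be enabled in my.cnf roughly as follows under MySQL 5.5; the log file path and long_query_time threshold here are illustrative, not the exact values used on this controller:)
[mysqld]
# log statements that take longer than long_query_time seconds
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 1
# optionally also flag statements that scan rows without using an index
log_queries_not_using_indexes = 1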
As you can see, the same query is running over and over, each run
taking about 4 seconds to complete. This is not a large OpenStack
installation at all: I have 3 projects with fewer than 20 instances in
total.
Please help me determine whether this is a bug or a configuration
issue.
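(For reference, a minimal diagnostic sketch against the keystone database is below. It assumes the stock Grizzly token schema, where the expires column may not be indexed; the index name is illustrative, and any DELETE should only be run against a backed-up database during a quiet period.)
-- Rows_examined (~117,000) vs Rows_sent (3) in the slow query log suggests the
-- token table is mostly expired rows and the filter on expires has no usable index.

-- How many token rows exist, and how many have already expired?
SELECT COUNT(*) FROM token;
SELECT COUNT(*) FROM token WHERE expires < UTC_TIMESTAMP();

-- Does any index cover the expires column?
SHOW INDEX FROM token;

-- Illustrative mitigations: purge expired tokens and index expires so the
-- validation query stops scanning the whole table. (Later Keystone releases
-- ship a keystone-manage token_flush command for the purge; assumption: it is
-- not available on this Grizzly install, hence plain SQL.)
DELETE FROM token WHERE expires < UTC_TIMESTAMP();
CREATE INDEX ix_token_expires ON token (expires);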
================================================================================================
keystone.conf
[DEFAULT]
log_file = keystone.log
# The directory to keep log files in (will be prepended to --logfile)
log_dir = /var/log/keystone
[sql]
connection = mysql://keystone:pass@10.70.0.1/keystone
# the timeout before idle sql connections are reaped
idle_timeout = 200
[identity]
driver = keystone.identity.backends.sql.Identity
[trust]
driver = keystone.trust.backends.sql.Trust
[catalog]
driver = keystone.catalog.backends.sql.Catalog
[token]
driver = keystone.token.backends.sql.Token
[policy]
driver = keystone.policy.backends.sql.Policy
[ec2]
driver = keystone.contrib.ec2.backends.sql.Ec2
[ssl]
[signing]
[ldap]
[auth]
methods = password,token
password = keystone.auth.plugins.password.Password
token = keystone.auth.plugins.token.Token
[filter:debug]
paste.filter_factory = keystone.common.wsgi:Debug.factory
[filter:token_auth]
paste.filter_factory = keystone.middleware:TokenAuthMiddleware.factory
[filter:admin_token_auth]
paste.filter_factory = keystone.middleware:AdminTokenAuthMiddleware.factory
[filter:xml_body]
paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory
[filter:json_body]
paste.filter_factory = keystone.middleware:JsonBodyMiddleware.factory
[filter:user_crud_extension]
paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory
[filter:crud_extension]
paste.filter_factory = keystone.contrib.admin_crud:CrudExtension.factory
[filter:ec2_extension]
paste.filter_factory = keystone.contrib.ec2:Ec2Extension.factory
[filter:s3_extension]
paste.filter_factory = keystone.contrib.s3:S3Extension.factory
[filter:url_normalize]
paste.filter_factory = keystone.middleware:NormalizingFilter.factory
[filter:sizelimit]
paste.filter_factory = keystone.middleware:RequestBodySizeLimiter.factory
[filter:stats_monitoring]
paste.filter_factory = keystone.contrib.stats:StatsMiddleware.factory
[filter:stats_reporting]
paste.filter_factory = keystone.contrib.stats:StatsExtension.factory
[filter:access_log]
paste.filter_factory = keystone.contrib.access:AccessLogMiddleware.factory
[app:public_service]
paste.app_factory = keystone.service:public_app_factory
[app:service_v3]
paste.app_factory = keystone.service:v3_app_factory
[app:admin_service]
paste.app_factory = keystone.service:admin_app_factory
[pipeline:public_api]
pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service
[pipeline:admin_api]
pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension crud_extension admin_service
[pipeline:api_v3]
pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension service_v3
[app:public_version_service]
paste.app_factory = keystone.service:public_version_app_factory
[app:admin_version_service]
paste.app_factory = keystone.service:admin_version_app_factory
[pipeline:public_version_api]
pipeline = access_log sizelimit stats_monitoring url_normalize xml_body public_version_service
[pipeline:admin_version_api]
pipeline = access_log sizelimit stats_monitoring url_normalize xml_body admin_version_service
[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/v3 = api_v3
/ = public_version_api
[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api
================================================================================================
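(For reference, one configuration-level mitigation often suggested for this symptom is moving token storage off SQL entirely. A sketch of the memcache token driver that ships with Grizzly is below; it assumes memcached and its Python bindings are installed on the controller, and the server address is illustrative:)
[token]
driver = keystone.token.backends.memcache.Token
[memcache]
# memcached instance used for token storage (illustrative address)
servers = 127.0.0.1:11211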
Also note that I have an Essex environment up and running on similar
equipment with no MySQL performance issues.
Thanks in advance,
Adam
To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1182481/+subscriptions