[Bug 1549516] Re: Too many reconnections to the SQLalchemy engine for keystone :)
** Changed in: keystone
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1549516
Title:
Too many reconnections to the SQLalchemy engine
Status in OpenStack Identity (keystone):
Invalid
Status in oslo.db:
New
Bug description:
=== Issue Description ===
It looks like oslo.db reconnects to the SQLAlchemy engine for every DB
request, which results in an extra "SELECT 1" query against the database
for every meaningful request.
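For reference, here is a minimal, simplified sketch (SQLAlchemy 1.x style,
approximating the linked engines.py rather than copying it, with a
placeholder connection URL) of how such a connect-time ping listener is
wired up:

    import sqlalchemy
    from sqlalchemy import event, select

    def _connect_ping_listener(connection, branch):
        # "Branched" connections reuse an already checked-out DBAPI
        # connection, so there is nothing new to ping.
        if branch:
            return
        # This is the query that shows up as "SELECT 1" in the trace: it
        # verifies the connection is alive before it is handed out.
        connection.scalar(select([1]))

    # Placeholder URL; oslo.db builds the engine from [database]/connection.
    engine = sqlalchemy.create_engine(
        "mysql+pymysql://user:secret@127.0.0.1/keystone")

    # The listener is attached to the "engine_connect" event, which fires
    # every time the engine produces a Connection, even when the underlying
    # DBAPI connection comes from the pool. That is why the ping runs per
    # request rather than only on a real (re)connect.
    event.listen(engine, "engine_connect", _connect_ping_listener)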
=== Prelude(<DinaBelova>) ===
I was testing the osprofiler (OpenStack profiler) library changes that
are currently under review for Nova, Neutron and Keystone + OSprofiler
integration, and performed nova boot requests. After generating the
trace for such a request, I got the following HTML report:
https://dinabelova.github.io/nova-boot-keystone-cache-turn-on.html .
The total number of DB operations performed for this request is 417, which
seems far too many for a single instance creation. Half of these are "SELECT
1" requests, issued by oslo.db once per engine connection via the
_connect_ping_listener function -
https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/engines.py#L53
I confirmed that all of these requests come from this method by adding
tracing around _connect_ping_listener - see
https://dinabelova.github.io/nova-boot-oslodb-ping-listener-profiled.html -
so we can see that all "SELECT 1" requests are placed under the
db_ping_listener section in the trace.
These "SELECT 1"s are in fact spending 1/3 of all time SQLalchemy
engine in oslo.db is spending on all requests. This seems to be a bug.
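A rough sketch of how such tracing can be added with osprofiler's public
trace decorator is below; the wrapper and the monkey-patching are
hypothetical illustration only (the actual patch may differ), and the
"db_ping_listener" name is the section title visible in the report:

    from oslo_db.sqlalchemy import engines
    from osprofiler import profiler

    # Wrap oslo.db's private ping listener so every call is recorded as
    # its own "db_ping_listener" section in the osprofiler trace.  Spans
    # are only recorded for requests where the profiler was initialized
    # (e.g. via the --profile flag).
    _original_ping_listener = engines._connect_ping_listener

    @profiler.trace("db_ping_listener")
    def _traced_connect_ping_listener(connection, branch):
        _original_ping_listener(connection, branch)

    # This must happen before any engine is created, because event.listen()
    # captures the function reference at engine-creation time.
    engines._connect_ping_listener = _traced_connect_ping_listener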
=== Env description & steps to reproduce ===
I have a devstack environment with the latest osprofiler (1.1.0) installed.
To install the profiler on the devstack env I used the following additions
to local.conf:
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer master
enable_plugin osprofiler https://git.openstack.org/openstack/osprofiler master
Additionally I've used the following changes:
- Nova: https://review.openstack.org/254703
- Nova client: https://review.openstack.org/#/c/254699/
- Neutron: https://review.openstack.org/273951
- Neutron client: https://review.openstack.org/281508
- Keystone: https://review.openstack.org/103368
I've also modified the standard keystone.conf to turn on memcache caching:
[cache]
memcache_servers = 127.0.0.1:11211
backend = oslo_cache.memcache_pool
enabled = True
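For reference, a minimal sketch of how a [cache] section like the one above
is consumed through oslo.cache (keystone has its own wrapper around this, so
details differ; the config file path is just the usual default):

    from oslo_cache import core as cache
    from oslo_config import cfg

    CONF = cfg.CONF
    # Register the [cache] options (enabled, backend, memcache_servers, ...).
    cache.configure(CONF)
    CONF(["--config-file", "/etc/keystone/keystone.conf"])

    # Build a dogpile.cache region from the [cache] section; with
    # backend = oslo_cache.memcache_pool it talks to 127.0.0.1:11211.
    region = cache.create_region()
    cache.configure_cache_region(CONF, region)

    # Decorator that memoizes function results in that region when
    # enabled = True.
    MEMOIZE = cache.get_memoization_decorator(CONF, region, group="cache")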
Then you can simply run "nova --profile SECRET_KEY boot --image <image_id>
--flavor 42 vm1" to generate all notifications, and then "osprofiler trace
show --html <trace_id> --out nova-boot.html" using the trace id printed at
the bottom of the nova boot output.
To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1549516/+subscriptions