yahoo-eng-team team mailing list archive

[Bug 1566835] [NEW] Keystone oslo_cache.memcache_pool cache seems not to work properly

 

Public bug reported:

== Abstract ==

During the Keystone OSprofiler integration we wanted to check how the
Keystone DB/caching layers behave in Liberty vs. Mitaka: a lot of
changes landed for federation support, and the caching code moved to
oslo.cache.

Ideas of the experiment can be found here:
http://docs.openstack.org/developer/performance-docs/test_plans/keystone/plan.html

== What was discovered ==

Preliminary results can be found here:
http://docs.openstack.org/developer/performance-docs/test_results/keystone/all-in-one/index.html

In short: the same Keystone API call (for instance, *user list*) was
issued twice against both the Liberty and the Mitaka environment. The
second call was profiled with OSprofiler and the resulting traces were
compared between Liberty and Mitaka.

Both environments had the same Apache configuration and the same
Keystone cache configuration:

[cache]
memcache_servers = 10.0.2.15:11211
backend = oslo_cache.memcache_pool
enabled = True
expiration_time = 600
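
For reference, a minimal sketch (not Keystone's actual code) of how
oslo.cache is expected to turn the [cache] section above into a
working dogpile.cache region; the keystone.conf path and the probe
key below are illustrative only:

    from oslo_config import cfg
    from oslo_cache import core as cache

    CONF = cfg.ConfigOpts()
    cache.configure(CONF)                       # registers the [cache] options
    CONF(['--config-file', 'keystone.conf'])    # the config shown above

    region = cache.create_region()
    cache.configure_cache_region(CONF, region)  # oslo_cache.memcache_pool

    region.set('probe-key', 'probe-value')
    print(region.get('probe-key'))   # 'probe-value' if memcached is reachable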

On Liberty all cache calls were "leaves" in the request tree - that is
the expected behaviour, since the function results should be served
directly from the Memcached backend. On Mitaka the caching decorator
was applied, but instead of fetching the needed value by key from the
cache backend, the full code path down to the DB was followed.
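
The expected behaviour can also be reproduced outside of Keystone with
a standalone sketch of the same memoization pattern (list_users, the
calls counter and the [demo] option group are illustrative names, not
Keystone's): with a healthy memcache_pool backend the second call must
be a cache hit and never reach the "DB" function.

    from oslo_config import cfg
    from oslo_cache import core as cache

    CONF = cfg.ConfigOpts()
    cache.configure(CONF)
    # per-subsystem switches, mirroring how Keystone registers e.g.
    # [identity] caching / cache_time
    CONF.register_opts([cfg.BoolOpt('caching', default=True),
                        cfg.IntOpt('cache_time', default=600)],
                       group='demo')
    CONF(['--config-file', 'keystone.conf'])   # [cache] section from above

    region = cache.create_region()
    cache.configure_cache_region(CONF, region)
    MEMOIZE = cache.get_memoization_decorator(CONF, region, group='demo')

    calls = {'db': 0}

    @MEMOIZE
    def list_users():
        calls['db'] += 1                # stands in for the real SQL query
        return ['user-a', 'user-b']

    list_users()        # first call: goes to the "DB", result is cached
    list_users()        # second call: should be served from memcached
    print(calls['db'])  # expected: 1; the Mitaka trace suggests the DB
                        # path is taken again instead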

Liberty call example: http://dinabelova.github.io/liberty_user_list.html
Mitaka call example: http://dinabelova.github.io/mitaka_user_list.html

** Affects: keystone
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1566835

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1566835/+subscriptions

