
[Bug 1929390] [NEW] Keystone still caches data in process memory after configuring caching backend to dogpile.cache.null

 

Public bug reported:

I have a keystone cluster running on 3 controller nodes. On each node,
keystone runs under mod_wsgi in daemon mode. The keystone version is
16.0.1.

I am experiencing a problem caused by an inconsistency between the in-process memory cache and the database.
Here are the steps to reproduce (a scripted sketch follows the list):
1. create a project with the "enabled" attribute set to False
2. assign a user and a role to the project
3. call get_project 50 times (to fill each worker's in-process cache with this project's "enabled" attribute as False)
4. set the project's "enabled" attribute to True
5. get a token scoped to this project
6. use this token to call any API
Step 6 returns a 404 error with a message saying the project is disabled and the token is invalid, while in fact the project is already enabled.
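
Roughly, the first four steps map to something like the following python-keystoneclient sketch. This is only an illustration of what I ran: the auth URL, credentials and the user/role/project names are placeholders for my environment, and steps 5-6 were plain API calls against the public endpoint.

from keystoneauth1 import identity, session
from keystoneclient.v3 import client

auth = identity.V3Password(auth_url='http://controller:5000/v3',
                           username='admin', password='secret',
                           project_name='admin',
                           user_domain_id='default',
                           project_domain_id='default')
ks = client.Client(session=session.Session(auth=auth))

# 1-2. disabled project plus a role assignment
project = ks.projects.create(name='cache-test', domain='default', enabled=False)
user = ks.users.find(name='demo')
role = ks.roles.find(name='member')
ks.roles.grant(role, user=user, project=project)

# 3. warm the per-process caches with the disabled project
for _ in range(50):
    ks.projects.get(project.id)

# 4. enable the project; some workers keep the stale "enabled = False"
#    record in memory, so a token scoped to it can still be rejected (steps 5-6)
ks.projects.update(project, enabled=True)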

My deployment does not have a caching backend service, so I left the caching configuration at its default, which is as follows:
[caching]
enabled = true
backend = dogpile.cache.null

[resource]
caching = true

...
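
Note that the null backend by itself really does store nothing. A quick standalone check with dogpile.cache (independent of keystone's wiring; the key and value below are made up) always comes back empty:

from dogpile.cache import make_region
from dogpile.cache.api import NO_VALUE

region = make_region().configure('dogpile.cache.null')
region.set('project-x', {'enabled': False})
print(region.get('project-x') is NO_VALUE)  # True: nothing was actually cached

So the stale reads described below point at the wrapper layer rather than the configured backend.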

According to the documentation, setting caching.backend to
dogpile.cache.null "effectively disables all cache operations". The
actual behavior, however, is that there is still an in-memory cache
layer (common.cache._context_cache._ResponseCacheProxy) wrapped around
the configured backend, and that layer does not sync between processes
or nodes. This cache can only be disabled by setting the caching switch
to false.
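
To be explicit, by "the caching switch" I mean turning caching off entirely, for example:

[caching]
enabled = false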

Here are my concerns:
1. The documentation is misleading; either the documentation or the code should be changed so that they are consistent.
2. Will keystone still suffer from this kind of data inconsistency even after a proper caching backend (memcached, for example) is configured? _ResponseCacheProxy.get() reads from its "local cache" before querying the configured backend (a toy illustration follows the snippet):

def get(self, key):
    value = self._get_local_cache(key)
    if value is api.NO_VALUE:
        value = self.proxied.get(key)
        if value is not api.NO_VALUE:
            self._set_local_cache(key, value)
    return value
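
To illustrate the concern, here is a minimal self-contained sketch. It is not keystone's actual code: plain dicts stand in for the local cache and for the configured backend, but it shows how a local-first read keeps returning a stale value after the backend has been updated:

NO_VALUE = object()  # stand-in for dogpile.cache.api.NO_VALUE


class LocalFirstProxy(object):
    """Toy model: consult a local cache before the shared backend."""

    def __init__(self, backend):
        self.proxied = backend  # e.g. a memcached-backed region
        self._local = {}        # per-process / per-request local cache

    def get(self, key):
        value = self._local.get(key, NO_VALUE)
        if value is NO_VALUE:
            value = self.proxied.get(key, NO_VALUE)
            if value is not NO_VALUE:
                self._local[key] = value
        return value


backend = {'project-x': {'enabled': False}}
proxy = LocalFirstProxy(backend)

print(proxy.get('project-x'))             # {'enabled': False}, now also held locally
backend['project-x'] = {'enabled': True}  # backend updated, e.g. by another node
print(proxy.get('project-x'))             # still {'enabled': False}: stale local copy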

** Affects: keystone
     Importance: Undecided
         Status: New


** Tags: caching

** Tags added: caching

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1929390

Title:
  Keystone still caches data in process memory after configuring caching
  backend to dogpile.cache.null

Status in OpenStack Identity (keystone):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1929390/+subscriptions