yahoo-eng-team team mailing list archive: Message #24212
[Bug 1387401] [NEW] token_flush can hang if lots of tokens
Public bug reported:
If you've got a system that can generate lots of tokens, token_flush can hang. For DB2 this happens if you create more than 100 tokens in a single second (for MySQL it's 1000 tokens in a second). The query that picks the cutoff time returns the 100th timestamp, which is the same as the minimum timestamp; the delete then runs with < min timestamp, matches nothing, and removes no rows, so token_flush gets stuck in a loop because the query keeps returning the same minimum timestamp.
This could be fixed easily by using <= rather than < for the deletion
comparison.
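
For context, here is a minimal runnable sketch of the batched-flush pattern the report describes, written against sqlite3 so it is self-contained. The token table, column names, and batch size are hypothetical stand-ins, not Keystone's actual schema or code. It uses the suggested <= comparison and notes where the strict < would spin forever once 100 tokens share one expiry timestamp.

import sqlite3
import time


def flush_expired_tokens(conn, now, batch_size=100):
    """Delete expired tokens in batches (hypothetical sketch, not Keystone's code)."""
    while True:
        # Expiry time of the batch_size-th oldest expired token, if there is one.
        row = conn.execute(
            "SELECT expires FROM token WHERE expires < ? "
            "ORDER BY expires LIMIT 1 OFFSET ?",
            (now, batch_size - 1),
        ).fetchone()
        if row is None:
            # Fewer than batch_size expired tokens remain: delete them all and stop.
            conn.execute("DELETE FROM token WHERE expires < ?", (now,))
            conn.commit()
            return
        upper_bound = row[0]
        # The bug in the report: with a strict '<' here, if batch_size tokens share
        # one timestamp the upper bound equals the minimum remaining timestamp,
        # no rows match, and the loop never terminates.  '<=' guarantees progress.
        conn.execute("DELETE FROM token WHERE expires <= ?", (upper_bound,))
        conn.commit()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE token (id INTEGER PRIMARY KEY, expires REAL)")
    # 500 tokens that all expired at the same instant reproduce the pathological case.
    expired_at = time.time() - 3600
    conn.executemany("INSERT INTO token (expires) VALUES (?)",
                     [(expired_at,) for _ in range(500)])
    flush_expired_tokens(conn, now=time.time())
    print(conn.execute("SELECT COUNT(*) FROM token").fetchone()[0])  # -> 0

The batching itself (presumably there so a large backlog isn't deleted in one huge transaction) only makes progress if each pass is guaranteed to remove at least the rows sharing the boundary timestamp, which is exactly what the <= comparison provides.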
** Affects: keystone
Importance: Undecided
Assignee: Brant Knudson (blk-u)
Status: New
** Changed in: keystone
Assignee: (unassigned) => Brant Knudson (blk-u)
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1387401
Title:
token_flush can hang if lots of tokens
Status in OpenStack Identity (Keystone):
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1387401/+subscriptions