yahoo-eng-team team mailing list archive
Message #49394
[Bug 1569997] [NEW] heat stack delete fails when token expires
Public bug reported:
If I create a heat stack and happen to delete it while the keystone
token is expiring, the stack delete fails. I originally found this when
deleting a very large heat stack, but you can reproduce it by changing
your keystone token expiration to 300 seconds.
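As a side note, you can watch the token lifetime from Python while
reproducing. A minimal sketch using keystoneauth1; the endpoint and
credentials are assumed example values, not from this deployment:

from keystoneauth1 import session
from keystoneauth1.identity import v3

# Assumed endpoint and credentials, for illustration only.
auth = v3.Password(auth_url='http://keystone:5000/v3',
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_id='default',
                   project_domain_id='default')
sess = session.Session(auth=auth)

access = auth.get_access(sess)       # issues a fresh token
print(access.expires)                # ~5 minutes out with expiration = 300
print(access.will_expire_soon(240))  # True once under 240 s remain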
1. Create a stack with 5 servers and some volumes.
2. Edit keystone.conf to shorten the token lifetime to 5 minutes, then restart the keystone service:
[token]
expiration = 300
3. Log on to Horizon, wait until the 4th minute, then trigger stack-delete from Horizon.
4. stack-delete fails; detailed logs are as attached.
This time the error is on deleting a nova server, so the problem is not cinder-specific; it should affect other resource types as well.
2016-04-13 08:51:47.725 1401 INFO heat.engine.resource [-] DELETE: TemplateResource "instance" [092fc06e-0e84-43c2-b247-e4f6b4a41a93] Stack "test" [a74ccdd9-dc00-441a-a5e5-7bb964b1ed81]
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource Traceback (most recent call last):
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File "/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 544, in _action_recorder
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource     yield
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File "/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 988, in delete
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource     yield self.action_handler_task(action, *action_args)
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File "/usr/lib/python2.7/dist-packages/heat/engine/scheduler.py", line 313, in wrapper
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource     step = next(subtask)
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File "/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 588, in action_handler_task
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource     while not check(handler_data):
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File "/usr/lib/python2.7/dist-packages/heat/engine/resources/stack_resource.py", line 429, in check_delete_complete
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource     show_deleted=True)
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File "/usr/lib/python2.7/dist-packages/heat/engine/resources/stack_resource.py", line 340, in _check_status_complete
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource     action=action)
2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource ResourceFailure: BadRequest: resources.instance.resources.server5: Expecting to find username or userId in passwordCredentials - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400)
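The HTTP 400 is keystone's v2.0 token API complaining that heat
re-authenticated with an empty passwordCredentials block. A hedged
illustration of the rejected vs. accepted request body (the tenant/user
values are made-up examples, not taken from the log):

# Illustration only: keystone v2.0 auth bodies. All values are examples.
bad_body = {
    'auth': {
        'tenantName': 'demo',
        'passwordCredentials': {},  # no username/userId -> the HTTP 400 above
    }
}
good_body = {
    'auth': {
        'tenantName': 'demo',
        'passwordCredentials': {
            'username': 'demo',     # hypothetical user
            'password': 'secret',   # hypothetical password
        },
    }
}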
** Affects: horizon
Importance: Undecided
Status: New
** Attachment added: "error.log"
https://bugs.launchpad.net/bugs/1569997/+attachment/4635829/+files/error.log
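My reading (not a confirmed diagnosis): the delete outlives the
300-second token, and when heat re-authenticates it has no
username/password left to send. A keystoneauth1 Session built from full
password credentials re-fetches tokens transparently, which is the
behaviour a long-running delete needs; a minimal sketch with the same
assumed credentials as above:

from keystoneauth1 import session
from keystoneauth1.identity import v3

# Assumed endpoint and credentials, for illustration only.
auth = v3.Password(auth_url='http://keystone:5000/v3',
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_id='default',
                   project_domain_id='default')

# get_token() re-authenticates automatically once the cached token has
# expired, instead of replaying the stale token and failing.
sess = session.Session(auth=auth)
token = sess.get_token()

Heat also has a deferred_auth_method option in heat.conf; if this
deployment is on deferred_auth_method = password without stored
credentials, switching to trusts might sidestep the problem entirely
(untested here):

[DEFAULT]
deferred_auth_method = trusts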
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1569997
Title:
heat stack delete fails when token expires
Status in OpenStack Dashboard (Horizon):
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1569997/+subscriptions