yahoo-eng-team team mailing list archive
Message #43184
[Bug 1525002] [NEW] The ironic driver needs to scale back how many errors it traces out
Public bug reported:
The amount of tracing in this n-cpu log is a bit much:
http://logs.openstack.org/93/255793/3/gate/gate-tempest-dsvm-ironic-agent_ssh/25175ed/logs/screen-n-cpu.txt.gz?level=TRACE
Like these warnings:
http://logs.openstack.org/93/255793/3/gate/gate-tempest-dsvm-ironic-agent_ssh/25175ed/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-12-10_16_11_48_799
2015-12-10 16:11:48.798 WARNING ironicclient.common.http [req-94077ab9-5adf-4720-9fb8-2e027dde9b72 tempest-BaremetalBasicOps-1285004585 tempest-BaremetalBasicOps-1966762451] Request returned failure status.
2015-12-10 16:11:48.799 WARNING ironicclient.common.http [req-94077ab9-5adf-4720-9fb8-2e027dde9b72 tempest-BaremetalBasicOps-1285004585 tempest-BaremetalBasicOps-1966762451] Error contacting Ironic server: Node 1 is locked by host localhost, please retry after the current operation is completed.
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 150, in inner
    return func(*args, **kwargs)
  File "/opt/stack/new/ironic/ironic/conductor/manager.py", line 1519, in update_port
    purpose='port update') as task:
  File "/opt/stack/new/ironic/ironic/conductor/task_manager.py", line 152, in acquire
    driver_name=driver_name, purpose=purpose)
  File "/opt/stack/new/ironic/ironic/conductor/task_manager.py", line 222, in __init__
    self.release_resources()
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/new/ironic/ironic/conductor/task_manager.py", line 203, in __init__
    self._lock()
  File "/opt/stack/new/ironic/ironic/conductor/task_manager.py", line 243, in _lock
    reserve_node()
  File "/usr/local/lib/python2.7/dist-packages/retrying.py", line 49, in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/retrying.py", line 212, in call
    raise attempt.get()
  File "/usr/local/lib/python2.7/dist-packages/retrying.py", line 247, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/usr/local/lib/python2.7/dist-packages/retrying.py", line 200, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/opt/stack/new/ironic/ironic/conductor/task_manager.py", line 236, in reserve_node
    self.node_id)
  File "/opt/stack/new/ironic/ironic/objects/node.py", line 260, in reserve
    db_node = cls.dbapi.reserve_node(tag, node_id)
  File "/opt/stack/new/ironic/ironic/db/sqlalchemy/api.py", line 226, in reserve_node
    host=node['reservation'])
NodeLocked: Node 1 is locked by host localhost, please retry after the current operation is completed. (HTTP 409). Attempt 1 of 61
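The noise comes from a retry loop (note "Attempt 1 of 61") that logs a full traceback for an expected, transient NodeLocked conflict on every attempt. A minimal sketch of the quieter behavior being asked for, using hypothetical names rather than the actual ironicclient internals, would log one short warning per retry and reserve the traceback for the final failure:

```python
import logging

LOG = logging.getLogger(__name__)


class Conflict(Exception):
    """Stand-in for the HTTP 409 NodeLocked error (hypothetical)."""


def call_with_retries(func, max_attempts=61):
    """Retry func() on Conflict, logging a single line per attempt.

    Only the last attempt gets a full traceback; intermediate
    conflicts are expected and do not need one.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except Conflict as e:
            if attempt == max_attempts:
                # Out of retries: now the full traceback is useful.
                LOG.exception("Giving up after %d attempts", attempt)
                raise
            # Transient lock: one short line, no traceback.
            LOG.warning("Conflict (attempt %d of %d): %s",
                        attempt, max_attempts, e)
```

With this shape, the n-cpu log would show at most one short warning per retry and a single traced error when the node stays locked past the final attempt.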
** Affects: nova
Importance: Low
Status: Confirmed
** Tags: ironic
** Changed in: nova
Importance: Undecided => Low
** Changed in: nova
Status: New => Confirmed
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1525002
Title:
The ironic driver needs to scale back how many errors it traces out
Status in OpenStack Compute (nova):
Confirmed
Bug description:
  (identical to the report above)
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1525002/+subscriptions