yahoo-eng-team mailing list archive: Message #81028
[Bug 1856845] [NEW] Ephemeral storage removal fails with message rbd remove failed
Public bug reported:
Description
===========
After destroying instances, ephemeral storage removal intermittently fails with the following message:
2019-10-17 11:21:08.122 398018 INFO nova.virt.libvirt.driver [-] [instance: 87096add-348e-4c94-8f31-066346e32eef] Instance destroyed successfully.
2019-10-17 11:21:14.619 398018 WARNING nova.virt.libvirt.storage.rbd_utils [-] rbd remove 87096add-348e-4c94-8f31-066346e32eef_disk in pool rbd_pool failed
The Ceph logs report a lossy connection error around the same time:
2019-10-17 11:21:06.181233 7fbbdf2f4700 0 -- 10.248.83.92:6808/20526 submit_message osd_op_reply(192922 rbd_data.77c63845d27cdd.0000000000004728 [stat,set-alloc-hint object_size 4194304 write_size 4194304,write 1273856~262144] v1504399'62984460 uv62984460 ack = 0) v7 remote, 10.248.54.216:0/2391175308, failed lossy con, dropping message 0x56545f021e40
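For context only (this is not the actual Nova code path), a minimal sketch of how such a removal could be retried when the RBD call fails transiently; it assumes the python-rbd bindings, a readable /etc/ceph/ceph.conf, and the pool name rbd_pool seen in the log above:

import time

import rados
import rbd


def remove_with_retry(pool, image_name, retries=10, delay=1.0,
                      conffile='/etc/ceph/ceph.conf'):
    """Try to remove an RBD image, retrying on transient failures (sketch)."""
    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            for _ in range(retries):
                try:
                    rbd.RBD().remove(ioctx, image_name)
                    return True
                except rbd.ImageNotFound:
                    # Already gone: nothing left to clean up.
                    return True
                except rbd.ImageBusy:
                    # Possibly transient (e.g. a client still attached);
                    # back off and try again.
                    time.sleep(delay)
            return False
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()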
Steps to reproduce
==================
- Deploy Nova with Ceph RBD as the ephemeral storage backend
- Create an instance
- Destroy the instance and check the pool for a leftover disk image (see the verification sketch below)
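A small verification sketch, assuming the python-rbd bindings and the rbd_pool name from the log above, that lists any <uuid>_disk images still present in the pool after the destroy:

import rados
import rbd


def leftover_disks(pool='rbd_pool', conffile='/etc/ceph/ceph.conf'):
    """Return ephemeral disk images ("<instance uuid>_disk") still in the pool."""
    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            return [name for name in rbd.RBD().list(ioctx)
                    if name.endswith('_disk')]
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()


# After destroying the instance, any names printed here were left behind.
print(leftover_disks())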
Expected result
===============
The Nova instance is destroyed and its Ceph ephemeral storage is always removed from the pool.
Actual result
=============
The Nova instance is destroyed, but its Ceph ephemeral storage sometimes remains in the pool.
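When a disk image does remain, inspecting it can show whether snapshots or advisory locks are what is blocking "rbd remove". A hedged sketch, again assuming the python-rbd bindings; the pool and image names below are taken from the log above:

import rados
import rbd


def inspect_leftover(pool, image_name, conffile='/etc/ceph/ceph.conf'):
    """Report snapshots and lockers that could prevent image removal."""
    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            with rbd.Image(ioctx, image_name, read_only=True) as image:
                snaps = [snap['name'] for snap in image.list_snaps()]
                lockers = image.list_lockers()
            return {'snapshots': snaps, 'lockers': lockers}
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()


print(inspect_leftover('rbd_pool',
                       '87096add-348e-4c94-8f31-066346e32eef_disk'))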
** Affects: nova
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1856845
Title:
Ephemeral storage removal fails with message rbd remove failed
Status in OpenStack Compute (nova):
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1856845/+subscriptions