yahoo-eng-team mailing list archive - Message #63029
[Bug 1680362] [NEW] soft-delete an instance, but the system volume still exists
Public bug reported:
Description
===========
If the config option 'reclaim_instance_interval' is enabled with a value greater than 0, a bootable volume attached to an instance via 'block-device-mapping' still exists after the instance is deleted, even though 'delete-on-termination' is set to true in the 'block-device-mapping'.
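For context (the commands below are an illustration and are not from the original report): when reclaim_instance_interval is set to a positive value, 'nova delete' is expected to perform a soft delete, so the server moves to the SOFT_DELETED state and is only reclaimed later by a periodic task on the compute node. Assuming the server stays visible while soft deleted, that window can be observed roughly like this:
[root@localhost]# grep ^reclaim_instance_interval /etc/nova/nova.conf
reclaim_instance_interval=600
# UUID below is the instance from the reproduction steps; status should read
# SOFT_DELETED until the reclaim interval expires
[root@localhost]# nova show beaa243c-6e35-4afb-9616-40e1f7e6e6e7 | grep -iw status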
Steps to reproduce
===========
1. Configure nova as below, then restart nova-api and nova-compute (one way to restart them on this install is sketched after the config snippet).
cat /etc/nova/nova.conf
... ...
# Interval in seconds for reclaiming deleted instances. It takes effect only
# when value is greater than 0. (integer value)
# Minimum value: 0
reclaim_instance_interval=600
... ...
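(The exact restart commands are not in the report; on an RDO/CentOS 7 install such as the one in the Environment section below, restarting the two services typically looks like this:)
[root@localhost]# systemctl restart openstack-nova-api openstack-nova-compute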
2. Create a bootable volume, and then list the volumes (one way to create such a volume is sketched after the listing).
[root@localhost]# cinder list
+--------------------------------------+-----------+-----------+------+-------------+----------+-------------+
| ID                                   | Status    | Name      | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-----------+------+-------------+----------+-------------+
| 8c8a22ae-e4b2-44c7-8d7a-825b79f5eb4c | available | test_boot | 1    | -           | true     |             |
+--------------------------------------+-----------+-----------+------+-------------+----------+-------------+
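(The report does not show how the bootable volume was created; one common way, assuming an existing Glance image UUID, is to build the volume from that image. Older cinderclient versions may want --display-name instead of --name:)
[root@localhost]# cinder create --image-id <image-uuid> --name test_boot 1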
3. Boot an instance with 'block-device-mapping' (a quick check of the resulting attachment is sketched after the command).
[root@localhost]# nova boot --flavor 1 \
    --block-device-mapping vda=8c8a22ae-e4b2-44c7-8d7a-825b79f5eb4c:volume:1:True \
    --nic net-id=d476830c-161c-4c69-a079-5408a21c6f90 \
    test_server_ebs
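(Optional sanity check, not in the original report: after the boot, the volume should show as in-use and attached to the new server.)
[root@localhost]# cinder show 8c8a22ae-e4b2-44c7-8d7a-825b79f5eb4c | grep -E 'status|attach'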
4. Delete the instance once it is ACTIVE.
[root@localhost]# nova delete beaa243c-6e35-4afb-9616-40e1f7e6e6e7
Expected result
===============
The volume should be deleted.
Actual result
=============
The volume still exists, and its status is 'in-use'.
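(One way to triage further, not part of the original report: 'nova force-delete' skips the reclaim wait and deletes the soft-deleted server immediately, which shows whether the volume is removed by the final delete path or survives that as well.)
[root@localhost]# nova force-delete beaa243c-6e35-4afb-9616-40e1f7e6e6e7
[root@localhost]# cinder show 8c8a22ae-e4b2-44c7-8d7a-825b79f5eb4c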
Environment
===========
[root@localhost]$ rpm -qa | grep nova
python-nova-13.1.2-1.el7.noarch
openstack-nova-common-13.1.2-1.el7.noarch
openstack-nova-novncproxy-13.1.2-1.el7.noarch
openstack-nova-api-13.1.2-1.el7.noarch
openstack-nova-compute-13.1.2-1.el7.noarch
openstack-nova-scheduler-13.1.2-1.el7.noarch
openstack-nova-conductor-13.1.2-1.el7.noarch
openstack-nova-console-13.1.2-1.el7.noarch
python2-novaclient-3.3.2-1.el7.noarch
openstack-nova-cert-13.1.2-1.el7.noarch
Configs
==============
cat /etc/nova/nova.conf
... ...
# Interval in seconds for reclaiming deleted instances. It takes effect only
# when value is greater than 0. (integer value)
# Minimum value: 0
reclaim_instance_interval=600
... ...
** Affects: nova
Importance: Undecided
Assignee: huangsm (huangsm)
Status: New
** Changed in: nova
Assignee: (unassigned) => huangsm (huangsm)
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1680362
Title:
soft-delete an instance, but the system volume still exists
Status in OpenStack Compute (nova):
New