yahoo-eng-team team mailing list archive

[Bug 1373852] [NEW] unable to boot nova instance from boot volume id


Public bug reported:

Test steps:
1) Create a volume from an image.
2) Boot an instance from the volume created above.


ssatya@juno:~/juno/devstack$ nova image-list
+--------------------------------------+------------------------+--------+--------+
| ID                                   | Name                   | Status | Server |
+--------------------------------------+------------------------+--------+--------+
| b99f9093-cc69-4a2a-a130-49005c31fd1f | cirros-0.3.2-i386-disk | ACTIVE |        |
+--------------------------------------+------------------------+--------+--------+
ssatya@juno:~/juno/devstack$ cinder create --image-id b99f9093-cc69-4a2a-a130-49005c31fd1f 1 


ssatya@juno:~/juno/devstack$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 664c8014-9863-488c-9a9f-9f60f19ac609 | available | None |  1   |     None    |   true   |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
ssatya@juno:~/juno/devstack$ nova boot  --boot-volume 664c8014-9863-488c-9a9f-9f60f19ac609  testboot3 --flavor 1
ssatya@juno:~/juno/devstack$ nova list
+--------------------------------------+-----------+--------+------------+-------------+------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks         |
+--------------------------------------+-----------+--------+------------+-------------+------------------+
| d9f121e0-f76d-4a94-83e9-d9ccf7168c9b | testboot3 | ERROR  | -          | NOSTATE     | private=10.0.0.4 |
+--------------------------------------+-----------+--------+------------+-------------+------------------+


2014-09-25 15:05:53.073 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock "update_usage" from (pid=12624) inner /opt/stack/nova/nova/openstack/common/lockutils.py:271
2014-09-25 15:05:53.126 INFO nova.scheduler.client.report [-] Compute_service record updated for ('juno', domain-c7(cls))
2014-09-25 15:05:53.126 DEBUG nova.openstack.common.lockutils [-] Releasing semaphore "compute_resources" from (pid=12624) lock /opt/stack/nova/nova/openstack/common/lockutils.py:238
2014-09-25 15:05:53.126 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released "update_usage" from (pid=12624) inner /opt/stack/nova/nova/openstack/common/lockutils.py:275
2014-09-25 15:05:56.197 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_volume_usage from (pid=12624) run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:193
2014-09-25 15:05:56.198 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call <bound method Service.periodic_tasks of <nova.service.Service object at 0x2aa4e90>> sleeping for 1.97 seconds from (pid=12624) _inner /opt/stack/nova/nova/openstack/common/loopingcall.py:132
2014-09-25 15:05:56.294 DEBUG nova.volume.cinder [req-9a41800b-6145-4b51-83a5-e097fd122b38 admin demo] Cinderclient connection created using URL: http://10.112.185.114:8776/v1/30ea152a107248bba878a1c8d31467b6 from (pid=12624) get_cinder_client_version /opt/stack/nova/nova/volume/cinder.py:255
2014-09-25 15:05:56.955 ERROR nova.compute.manager [req-9a41800b-6145-4b51-83a5-e097fd122b38 admin demo] [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b] Instance failed to spawn
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b] Traceback (most recent call last):
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]   File "/opt/stack/nova/nova/compute/manager.py", line 2222, in _build_resources
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]     yield resources
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]   File "/opt/stack/nova/nova/compute/manager.py", line 2101, in _build_and_run_instance
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]     block_device_info=block_device_info)
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 447, in spawn
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]     admin_password, network_info, block_device_info)
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]   File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 437, in spawn
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]     vi = self._get_vm_config_info(instance, image_info, instance_name)
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]   File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 375, in _get_vm_config_info
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]     image_info.file_size_in_gb > instance.root_gb):
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]   File "/opt/stack/nova/nova/virt/vmwareapi/vmware_images.py", line 92, in file_size_in_gb
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]     return self.file_size / units.Gi
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b] TypeError: unsupported operand type(s) for /: 'unicode' and 'int'
2014-09-25 15:05:56.955 TRACE nova.compute.manager [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b]
2014-09-25 15:05:56.955 AUDIT nova.compute.manager [req-9a41800b-6145-4b51-83a5-e097fd122b38 admin demo] [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b] Terminating instance
2014-09-25 15:05:56.957 DEBUG nova.virt.vmwareapi.vmops [req-9a41800b-6145-4b51-83a5-e097fd122b38 admin demo] [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b] Destroying instance from (pid=12624) destroy /opt/stack/nova/nova/virt/vmwareapi/vmops.py:831
2014-09-25 15:05:56.957 DEBUG oslo.vmware.api [req-9a41800b-6145-4b51-83a5-e097fd122b38 admin demo] Waiting for function _invoke_api to return. from (pid=12624) func /usr/local/lib/python2.7/dist-packages/oslo/vmware/api.py:126
2014-09-25 15:05:57.191 DEBUG oslo.vmware.api [req-9a41800b-6145-4b51-83a5-e097fd122b38 admin demo] Waiting for function _invoke_api to return. from (pid=12624) func /usr/local/lib/python2.7/dist-packages/oslo/vmware/api.py:126
2014-09-25 15:05:57.465 WARNING nova.virt.vmwareapi.vmops [req-9a41800b-6145-4b51-83a5-e097fd122b38 admin demo] [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b] Instance does not exist on backend
2014-09-25 15:05:57.466 DEBUG nova.virt.vmwareapi.vmops [req-9a41800b-6145-4b51-83a5-e097fd122b38 admin demo] [instance: d9f121e0-f76d-4a94-83e9-d9ccf7168c9b] Instance destroyed from (pid=12624) destroy /opt/stack/nova/nova/virt/vmwareapi/vmops.py:843
2014-09-25 15:05:57.466 DEBUG oslo.vmware.api [req-9a41800b-6145-4b51-83a5-e097fd122b38 admin demo] Waiting for function _invoke_api to return. from (pid=12624) func /usr/local/lib/python2.7/dist-packages/oslo/vmware/api.py:126
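From the traceback, the failure is in file_size_in_gb (nova/virt/vmwareapi/vmware_images.py, line 92): self.file_size is a unicode string when it gets divided by units.Gi, hence the TypeError. Below is a minimal sketch of the failure mode, assuming the size arrives from the image metadata as text and that a plain integer cast would sidestep the error; the ImageInfo class, the sample value, and the "guarded" property are illustrative only, not Nova's actual code or the real fix.

# Sketch of the TypeError seen above, plus a hypothetical guard.
# Gi mirrors oslo's units.Gi (1024 ** 3); ImageInfo and the sample size
# are illustrative, not Nova's real classes or values.
Gi = 1024 ** 3

class ImageInfo(object):
    def __init__(self, file_size):
        # Assumption: image metadata arrives as text, e.g. u"41126400".
        self.file_size = file_size

    @property
    def file_size_in_gb(self):
        # Reproduces the bug: dividing a unicode/str by an int raises
        # TypeError: unsupported operand type(s) for /: 'unicode' and 'int'
        return self.file_size / Gi

    @property
    def file_size_in_gb_guarded(self):
        # Hypothetical guard: coerce the size to int before dividing.
        return int(self.file_size) / Gi

info = ImageInfo(u"41126400")
try:
    _ = info.file_size_in_gb
except TypeError as exc:
    print("unguarded property fails: %s" % exc)
print("guarded property: %s GB" % info.file_size_in_gb_guarded)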

** Affects: nova
     Importance: Undecided
         Status: New


** Tags: vmware

** Attachment added: "nova_boot volume.zip"
   https://bugs.launchpad.net/bugs/1373852/+attachment/4214685/+files/nova_boot%20volume.zip

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373852

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373852/+subscriptions

