
yahoo-eng-team team mailing list archive

[Bug 1472900] [NEW] instance boot from image (creates a new volume) fails when the volume is rescheduled to another backend


Public bug reported:

    This bug occurs in the Icehouse and Kilo releases of OpenStack.
    I launched an instance from the dashboard using "boot from image (creates a new volume)". The boot failed with "Invalid volume", yet "cinder list" showed that the volume had been rescheduled and created successfully.
    I reviewed the cinder-volume code and found that when a volume create fails on one backend, the create-volume workflow reverts: the revert sets the volume status back to "creating" and triggers a reschedule, after which the flow sets the status to "error". The volume is then rescheduled to another backend, where its status moves through "downloading" to "available".
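
    To make that sequence concrete, here is a minimal Python sketch of the revert-and-reschedule step as I read it (the class name is modeled on Cinder's create-volume flow, but the signatures and the db/scheduler helpers are simplified stand-ins, not the real code):

        # Simplified sketch (not actual Cinder code): the task that reverts a
        # failed create and asks the scheduler to retry it elsewhere.
        class OnFailureRescheduleTask(object):
            def __init__(self, db, scheduler_rpcapi):
                self.db = db
                self.scheduler_rpcapi = scheduler_rpcapi

            def revert(self, context, volume_id, request_spec):
                # Reset the volume so the scheduler can place it again.
                self.db.volume_update(context, volume_id,
                                      {'status': 'creating'})
                # Retry the create on another backend.
                self.scheduler_rpcapi.create_volume(context, volume_id,
                                                    request_spec)

        # After this revert runs, the outer flow's failure handler still flips
        # the volume to 'error' before the new backend picks it up and moves
        # it through 'downloading' to 'available'; that short-lived 'error'
        # is what Nova can observe.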
    While launching the instance, Nova waits on the volume in the function "_await_block_device_map_created", which returns as soon as the volume status is "available" or "error". When rescheduling happens, the wait returns while the volume is transiently in the "error" state, and "check_attach" then raises "Invalid volume" when Nova tries to attach the volume.
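
    The race is easier to see in a simplified version of the wait loop. This is only a sketch of the behavior described above, with illustrative names and signatures, not Nova's actual implementation:

        import time

        def _await_block_device_map_created(volume_api, context, volume_id,
                                            poll_interval=1.0,
                                            max_attempts=60):
            # Poll until the volume reaches a status the loop treats as final.
            for _ in range(max_attempts):
                volume = volume_api.get(context, volume_id)
                status = volume['status']
                if status == 'available':
                    return volume
                if status == 'error':
                    # The race: 'error' can be transient while cinder
                    # reschedules the create to another backend, but the loop
                    # treats it as final, and check_attach later rejects the
                    # volume as invalid.
                    raise RuntimeError('volume %s went to error' % volume_id)
                time.sleep(poll_interval)
            raise RuntimeError('timed out waiting for volume %s' % volume_id)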
    I suggest that while a volume is being rescheduled its status be set to "rescheduling" rather than to "error" during the workflow revert; that would give other components an accurate view of the volume's state.
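
    Under that proposal, the waiter could simply keep polling through a reschedule. A sketch, assuming a hypothetical "rescheduling" status reported by cinder instead of the transient "error":

        import time

        # 'rescheduling' is the proposed (hypothetical) status; the waiter
        # keeps polling through it instead of bailing out early.
        IN_PROGRESS = ('creating', 'downloading', 'rescheduling')

        def _await_block_device_map_created(volume_api, context, volume_id,
                                            poll_interval=1.0,
                                            max_attempts=60):
            for _ in range(max_attempts):
                volume = volume_api.get(context, volume_id)
                status = volume['status']
                if status == 'available':
                    return volume
                if status in IN_PROGRESS:
                    time.sleep(poll_interval)
                    continue
                # With rescheduling made explicit, any other status
                # ('error' included) really is terminal.
                raise RuntimeError('volume %s went to %s'
                                   % (volume_id, status))
            raise RuntimeError('timed out waiting for volume %s' % volume_id)

    Treating rescheduling as an in-progress state also keeps "error" meaningful: it would only be reported once the create has definitively failed on every backend the scheduler tried.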

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1472900


