[Bug 1506234] [NEW] Ironic virt driver in Nova calls destroy unnecessarily if spawn fails

Public bug reported:

To give some context, the call to destroy [5] was added as a bug fix
[1]. It was required back then because Nova compute was not calling
destroy when it caught the exception [2]. But now, Nova compute catches
all exceptions that happen during spawn and calls destroy
(_shutdown_instance) [3].

Since Nova compute already takes care of destroying the instance before
rescheduling, we shouldn't have to call destroy separately in the
driver. I confirmed in the logs that destroy gets called twice if any
failure occurs during _wait_for_active() [4] or a timeout happens [5].


[1] https://review.openstack.org/#/c/99519/
[2] https://github.com/openstack/nova/blob/2014.1.5/nova/compute/manager.py#L2116-L2118
[3] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2171-L2191
[4] https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L431-L462
[5] https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L823-L836
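
To make the double cleanup concrete, here is a minimal, self-contained
sketch of the two code paths. It is paraphrased from the sources linked
above, with stubbed bodies and simplified signatures; it illustrates
the control flow only and is not the actual Nova code:

    # Sketch only: names follow the linked Nova sources, bodies are
    # stand-ins that just demonstrate the flow.

    class IronicDriver(object):
        def destroy(self, context, instance):
            print('driver.destroy called for %s' % instance)

        def _wait_for_active(self, instance):
            # Stand-in for the polling loop in [4]/[5]: pretend the
            # node never reached ACTIVE.
            raise RuntimeError('node failed to reach ACTIVE state')

        def spawn(self, context, instance):
            try:
                self._wait_for_active(instance)
            except Exception:
                # First destroy: the driver cleans up on its own
                # failure [5].
                self.destroy(context, instance)
                raise

    class ComputeManager(object):
        def __init__(self):
            self.driver = IronicDriver()

        def _shutdown_instance(self, context, instance):
            # In Nova this does much more, but it ends up invoking
            # self.driver.destroy() [3].
            self.driver.destroy(context, instance)

        def _build_and_run_instance(self, context, instance):
            try:
                self.driver.spawn(context, instance)
            except Exception:
                # Second destroy: the manager cleans up before
                # rescheduling.
                self._shutdown_instance(context, instance)

    ComputeManager()._build_and_run_instance(None, 'uuid-1234')
    # Prints "driver.destroy called for uuid-1234" twice.

If the destroy call were dropped from the driver's exception handler
and the exception simply allowed to propagate, the compute manager's
_shutdown_instance would remain the single cleanup path.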

** Affects: nova
     Importance: Undecided
     Assignee: Shraddha Pandhe (shraddha-pandhe)
         Status: New


** Tags: ironic

** Project changed: nova-hyper => nova

** Changed in: nova
     Assignee: (unassigned) => Shraddha Pandhe (shraddha-pandhe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506234

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506234/+subscriptions

