
[Bug 1475652] [NEW] libvirt, rbd imagebackend, disk.rescue not deleted when unrescued

 

Public bug reported:

Reproduced on the Juno version (actually tested on a fork of 2014.2.3; apologies in advance if this is invalid, but I think the unmodified version is also affected).

Not tested on newer versions, but looking at the code they appear to be affected as well.

For the Rbd image backend only, when unrescuing an instance the disk.rescue file is not actually deleted from remote storage (only the rbd session is destroyed).

Consequence: when rescuing the instance again, it simply ignores the new rescue image and uses the old _disk.rescue image.
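As far as I can tell this comes from the usual create-if-missing caching pattern: because the old <uuid>_disk.rescue image is still present in the pool, the import of the new rescue image is skipped. A rough illustration (not the actual Nova code; 'backend' and 'fetch_rescue_image' are illustrative stand-ins):

    def cache_rescue_disk(backend, instance_uuid, fetch_rescue_image):
        name = '%s_disk.rescue' % instance_uuid
        # If the old rescue image was never removed, exists() returns True and
        # the new rescue image is never fetched into the pool.
        if not backend.exists(name):
            fetch_rescue_image(name)
        return name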

Reproduce:

1. nova rescue instance

(Take care that you are actually booted into the vda rescue disk: when rescuing an instance from the same image it was spawned from (the default case), the filesystem UUID is identical, so depending on your image's fstab (UUID= entries) you may in fact boot from the very disk you are trying to rescue. That is a separate matter, though, which concerns template building.)

Edit the rescue image disk.

2. nova unrescue instance (the old disk.rescue image is left behind in the RBD pool; see the check after step 3)

3. nova rescue instance -> you get back the disk.rescue created in step 1
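To confirm the leftover image on the Ceph side after step 2, something like the following sketch (python-rados / python-rbd bindings) can be used; the pool name 'vms' and the ceph.conf path are assumptions, adjust them to your images_rbd_pool setting:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            leftovers = [name for name in rbd.RBD().list(ioctx)
                         if name.endswith('_disk.rescue')]
            # after unrescue this still lists <uuid>_disk.rescue
            print(leftovers)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()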

If confirmed, a fix will follow soon.

Concerning the fix, there are several possibilities:
- nova.virt.libvirt.driver:LibvirtDriver -> the unrescue method does not delete the correct files
or
- nova.virt.libvirt.imagebackend:Rbd -> erase disk.rescue in the create image method if it already exists (see the sketch below)
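
For the second option, a minimal sketch of what "erase disk.rescue if already existing" could look like, written here against the plain python-rbd bindings rather than the actual Nova imagebackend helpers (the function name and parameters are purely illustrative):

    import rados
    import rbd

    def remove_stale_rescue_image(pool, image_name,
                                  conffile='/etc/ceph/ceph.conf'):
        """Delete a leftover <uuid>_disk.rescue image if it exists."""
        cluster = rados.Rados(conffile=conffile)
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                rbd.RBD().remove(ioctx, image_name)  # raises if image is in use
            except rbd.ImageNotFound:
                pass  # nothing stale to clean up
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()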

Rebuild is not affected by this issue; deleting the instance correctly deletes the files on remote storage.

** Affects: nova
     Importance: Undecided
         Status: New


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475652


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475652/+subscriptions

