
yahoo-eng-team team mailing list archive

[Bug 1174480] Re: snapshotting an instance with attached volumes remembers the volumes are attached when it shouldn't.

 

This is all handled on the Compute side; Cinder actually knows very
little about the attach process itself.
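
For anyone digging into this, here is a rough sketch (not from the report
itself; the endpoints, token and IDs are placeholder assumptions) of how to
compare what Nova and Cinder each report about an attachment, calling their
REST APIs directly:

import requests

# Sketch only: compare the Compute-side view of an attachment with Cinder's
# record of the same volume.  TOKEN, the endpoint URLs and the IDs are
# placeholders, not values taken from this bug.
TOKEN = "<keystone-token>"
NOVA = "http://controller:8774/v2/<tenant-id>"     # assumed default Nova endpoint
CINDER = "http://controller:8776/v1/<tenant-id>"   # assumed default Cinder endpoint
HEADERS = {"X-Auth-Token": TOKEN}

def nova_attachments(server_id):
    """What Nova believes is attached to the instance."""
    r = requests.get(f"{NOVA}/servers/{server_id}/os-volume_attachments",
                     headers=HEADERS)
    r.raise_for_status()
    # Each entry carries the volumeId and the device name Nova reserved for it.
    return [(a["volumeId"], a["device"]) for a in r.json()["volumeAttachments"]]

def cinder_view(volume_id):
    """What Cinder records for the same volume: its status plus attachments."""
    r = requests.get(f"{CINDER}/volumes/{volume_id}", headers=HEADERS)
    r.raise_for_status()
    vol = r.json()["volume"]
    return vol["status"], vol.get("attachments", [])

On an affected setup the two views disagree: Cinder reports the volume as
"available" with no attachments, while Nova still lists it (with its old
device name) for the instance booted from the snapshot.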

** Also affects: nova
   Importance: Undecided
       Status: New

** Changed in: cinder
       Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1174480

Title:
  snapshotting an instance with attached volumes remembers the volumes
  are attached when it shouldn't.

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  New

Bug description:
  this may not be a cinder thing but it was as close as I could think
  of.

  so I have an instance, it has a volume attached to it.
  I snapshot the image and terminate when snapshot is done.
  Boot the new snapshot, the problem lies in that the system still thinks
  there is a volume attached to the instance when there really isnt.
  in horizon the volume shows as available. but in the instance info page
  it lists every instance that has been ever attached to the previous instance (before snapshot'ed)
  as being still attached.
  so in trying to mount the volume again at the same /dev/vdb device for example fails as it thinks
  there is still something there.
  crank up the device to an empty on and it mounts, and it mounts at the lowest device /dev/vdb
  which it thought was used just moments before. 
  cinder show id  command shows the volume as on /dev/vdd but is really in the instance at /dev/vdb
  for example.
  this repeats for each snapshot so the "in use" device list grows.

  steve
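
For completeness, a rough sketch of the sequence described in the report,
driven through the Nova REST API; the endpoint, token, image/flavor/volume
IDs and the device name are placeholder assumptions, not values from this
bug:

import requests

TOKEN = "<keystone-token>"
NOVA = "http://controller:8774/v2/<tenant-id>"   # assumed default Nova endpoint
HEADERS = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

def attach(server_id, volume_id, device="/dev/vdb"):
    # Nova records the attachment in its own block device mappings.
    body = {"volumeAttachment": {"volumeId": volume_id, "device": device}}
    requests.post(f"{NOVA}/servers/{server_id}/os-volume_attachments",
                  json=body, headers=HEADERS).raise_for_status()

def snapshot(server_id, name="snap-with-volume"):
    # createImage snapshots the instance while the volume is still attached.
    requests.post(f"{NOVA}/servers/{server_id}/action",
                  json={"createImage": {"name": name}},
                  headers=HEADERS).raise_for_status()

def terminate(server_id):
    requests.delete(f"{NOVA}/servers/{server_id}",
                    headers=HEADERS).raise_for_status()

def boot_from_image(name, image_id, flavor_id):
    body = {"server": {"name": name, "imageRef": image_id,
                       "flavorRef": flavor_id}}
    r = requests.post(f"{NOVA}/servers", json=body, headers=HEADERS)
    r.raise_for_status()
    return r.json()["server"]["id"]

Rough flow matching the report: attach -> snapshot -> terminate -> boot a new
instance from the snapshot image -> GET /servers/<new-id>/os-volume_attachments.
On an affected setup the new instance still lists the old volume/device pairs
even though nothing is attached, and re-attaching at /dev/vdb is refused.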

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1174480/+subscriptions