
yahoo-eng-team team mailing list archive

[Bug 1208799] Re: after detach vol from VM the multipath device is in failed state

 

** Also affects: os-brick
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1208799

Title:
  after detach vol from VM the multipath device is in failed state

Status in OpenStack Compute (nova):
  Confirmed
Status in os-brick:
  New

Bug description:
  I created a Cinder volume (Fibre Channel connectivity), then attached
  it to a Nova VM.
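
  For reference, a minimal reproduction along these lines (the instance
  and volume IDs are placeholders, and Grizzly-era python-cinderclient
  and python-novaclient syntax is assumed):
       # cinder create --display-name test-vol 1
       # nova volume-attach ${instance_id} ${volume_id} /dev/vdb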

  Here is the multipath -l output (all the paths are in the active state, which is correct):
  -------------------------------------------------------------------------------------
  mpath380 (200173800fe0226d5) dm-0 IBM,2810XIV
  size=964M features='1 queue_if_no_path' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=-1 status=active
    |- 2:0:16:100 sde  8:64    active undef running
    |- 2:0:13:100 sdb  8:16    active undef running
    |- 2:0:15:100 sdd  8:48    active undef running
    |- 3:0:4:100  sdi  8:128   active undef running
    |- 3:0:1:100  sdf  8:80    active undef running
    |- 3:0:2:100  sdg  8:96    active undef running
    `- 3:0:3:100  sdh  8:112   active undef running

  But after I detach the volume from the Nova VM, here is the multipath -l output (all the paths are in failed state):
  -------------------------------------------------------------------------------------
  mpath380 (200173800fe0226d5) dm-0 IBM,2810XIV
  size=964M features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=-1 status=enabled
    |- 2:0:16:100 sde  8:64    failed undef running
    |- 2:0:15:100 sdd  8:48    failed undef running
    |- 3:0:4:100  sdi  8:128   failed undef running
    |- 3:0:1:100  sdf  8:80    failed undef running
    |- 3:0:2:100  sdg  8:96    failed undef running
    `- 3:0:3:100  sdh  8:112   failed undef running

  Note: these failed paths in the multipath map generate a lot of
  bad-path messages in /var/log/syslog.
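
  (To see the noise, one can tail the log; the exact message text varies
  with the multipath-tools version, so this is just a generic check:)
       # grep multipathd /var/log/syslog | tail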

  
  Basic information on the environment:
  ---------------------------------------------------------------------
  - nova version is Grizzly, 1:2013.1.1-0ubuntu2~cloud0, on Ubuntu.
  - Cinder and the compute service are installed on the same host.
  - multipath-tools package version is 0.4.9-3ubuntu5.
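
  (The package versions above can be confirmed with dpkg, for example:)
       # dpkg -s multipath-tools | grep '^Version'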

  How I recovered those failed paths:
  --------------------------------------------------------------
  1. For each bad device path listed in multipath -l, I executed:
       # echo 1 > /sys/block/${device_path}/device/delete

  2. For each bad multipath device, I executed:
       # dmsetup message ${multipath_device} 0 "fail_if_no_path"

  3. Then I flushed the multipath maps by executing:
      # multipath -F

  After that, the multipath device and its device paths (which had been
  in the failed state) are gone. A combined sketch of the three steps
  follows below.
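
  (A minimal sketch that automates the three steps above, assuming the
  map name is passed as the first argument and that the sd* device name
  is the third field of each "failed" path line in multipath -l output:)

      #!/bin/bash
      # Clean up a multipath map whose paths went to "failed" after detach.
      # Usage: cleanup-mpath.sh mpath380   (the map name here is an example)
      MPATH="$1"

      # Step 1: delete each failed SCSI path device belonging to the map.
      for dev in $(multipath -l "$MPATH" | awk '/failed/ {print $3}'); do
          echo 1 > /sys/block/${dev}/device/delete
      done

      # Step 2: stop queueing I/O so the now-empty map can be flushed.
      dmsetup message "$MPATH" 0 "fail_if_no_path"

      # Step 3: flush unused multipath maps.
      multipath -F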

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1208799/+subscriptions