
yahoo-eng-team team mailing list archive

[Bug 1931710] Re: nova-lvm lvs return -11 and fails with Failed to get udev device handler for device

 

** Also affects: nova/wallaby
   Importance: Undecided
       Status: New

** Changed in: nova/wallaby
       Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1931710

Title:
  nova-lvm lvs return -11 and fails with Failed to get udev device
  handler for device

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) wallaby series:
  Fix Released

Bug description:
  Description
  ===========

  Tests within the nova-lvm job fail during cleanup with the following
  trace visible in n-cpu:

  https://797b12f7389a12861990-09e4be48fe62aca6e4b03d954e19defe.ssl.cf5.rackcdn.com/795992/3/check/nova-lvm/99a7b1f/controller/logs/screen-n-cpu.txt

  Jun 11 13:04:38.733030 ubuntu-focal-inap-mtl01-0025074127 nova-compute[106254]: Command: lvs --noheadings -o lv_name /dev/stack-volumes-default
  Jun 11 13:04:38.733030 ubuntu-focal-inap-mtl01-0025074127 nova-compute[106254]: Exit code: -11
  Jun 11 13:04:38.733030 ubuntu-focal-inap-mtl01-0025074127 nova-compute[106254]: Stdout: ''
  Jun 11 13:04:38.733030 ubuntu-focal-inap-mtl01-0025074127 nova-compute[106254]: Stderr: '  WARNING: Failed to get udev device handler for device /dev/sda1.\n  /dev/sda15: stat failed: No such file or directory\n  Path /dev/sda15 no longer valid for device(8,15)\n  /dev/sda15: stat failed: No such file or directory\n  Path /dev/sda15 no longer valid for device(8,15)\n  Device open /dev/sda 8:0 failed errno 2\n  Device open /dev/sda 8:0 failed errno 2\n  Device open /dev/sda1 8:1 failed errno 2\n  Device open /dev/sda1 8:1 failed errno 2\n  WARNING: Scan ignoring device 8:0 with no paths.\n  WARNING: Scan ignoring device 8:1 with no paths.\n'
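  For context on the exit code: Python's subprocess layer (and oslo.concurrency's
  processutils on top of it) reports a child killed by signal N as a return code
  of -N, so "Exit code: -11" means lvs was killed by SIGSEGV rather than exiting
  cleanly. A minimal, self-contained illustration of that convention (the
  deliberately segfaulting child below is purely hypothetical):

      import signal
      import subprocess

      # Hypothetical stand-in for a crashing binary: a child that sends
      # SIGSEGV to itself, the way lvs appears to die in the trace above.
      proc = subprocess.run(
          ["python3", "-c",
           "import os, signal; os.kill(os.getpid(), signal.SIGSEGV)"])

      # Termination by signal N is reported as returncode == -N, so a
      # segfault (signal 11) shows up as -11, matching "Exit code: -11".
      print(proc.returncode)                     # -11
      print(proc.returncode == -signal.SIGSEGV)  # True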

  Bug #1901783 details something similar to this in Cinder, but as the
  above is coming from native Nova ephemeral storage code with a
  different return code, I'm going to treat this as a separate issue for
  now.

  
  Steps to reproduce
  ==================

  Only seen as part of the nova-lvm job at present.
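  For anyone trying to reproduce this outside the gate: the failing step is the
  volume listing Nova performs while cleaning up ephemeral LVM disks. A rough,
  simplified sketch of what that listing amounts to (not the exact Nova helper,
  which runs the command with root privileges) using oslo.concurrency's
  processutils:

      from oslo_concurrency import processutils

      def list_volumes(vg):
          # List the logical volumes in the ephemeral volume group, as Nova
          # does during instance cleanup.  If lvs is killed by a signal,
          # processutils raises ProcessExecutionError carrying the negative
          # exit code (-11 here).
          out, _err = processutils.execute(
              'lvs', '--noheadings', '-o', 'lv_name', vg)
          return [line.strip() for line in out.splitlines() if line.strip()]

      # e.g. list_volumes('/dev/stack-volumes-default')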

  Expected result
  ===============

  The nova-lvm job passes and the removal of instances succeeds.

  Actual result
  =============

  The nova-lvm job fails as the removal of instances fails during cleanup.

  Environment
  ===========
  1. Exact version of OpenStack you are running. See the following
    list for all releases: http://docs.openstack.org/releases/

  master

  2. Which hypervisor did you use?
     (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
     What's the version of that?

  libvirt

  3. Which storage type did you use?
     (For example: Ceph, LVM, GPFS, ...)
     What's the version of that?

  LVM (ephemeral)

  4. Which networking type did you use?
     (For example: nova-network, Neutron with OpenVSwitch, ...)

  N/A

  Logs & Configs
  ==============

  As above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1931710/+subscriptions


