kernel-packages team mailing list archive

[Bug 1581169] Re: kernel panic (General protection fault) on module hpsa (lockup_detected)


According to the vmcore [1], the crash occurs while the "udevadm"
command [2] is running as part of sosreport's block plugin.

[1] - vmcore

KERNEL: /usr/lib/debug/boot/vmlinux-4.2.0-30-generic
DUMPFILE: dump.201605101421 [PARTIAL DUMP]
CPUS: 40
DATE: Wed Dec 31 19:00:00 1969
UPTIME: 12:56:48
LOAD AVERAGE: 4.66, 5.51, 4.64
TASKS: 37318
RELEASE: 4.2.0-30-generic
VERSION: #36~14.04.1-Ubuntu SMP Fri Feb 26 18:49:23 UTC 2016
MACHINE: x86_64 (2992 Mhz)
PID: 53367
COMMAND: "udevadm"
TASK: ffff880386240000 [THREAD_INFO: ffff880130fac000]
CPU: 24

[2] - sosreport - block plugin

sosreport-3.1/sos/plugins/block.py: self.add_cmd_output("udevadm info -ap /sys/block/%s" % (disk))
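For context, the call above is made once per entry under /sys/block, so every block device on the system gets a "udevadm info -ap" run during report collection. A minimal sketch of that loop (simplified, not the actual plugin code; the function names here are illustrative):

```python
import os

def list_block_devices(sys_block="/sys/block"):
    # Each entry under /sys/block is a block device (sda, sdb, ...).
    return sorted(os.listdir(sys_block)) if os.path.isdir(sys_block) else []

def udevadm_commands(disks):
    # Mirror of the sosreport call: udevadm info -ap /sys/block/<disk>
    return ["udevadm info -ap /sys/block/%s" % d for d in disks]
```

On a machine with many logical drives (e.g. each physical disk exposed as its own RAID 0 volume), this means many back-to-back udevadm attribute walks against the hpsa-backed devices, which is the path that triggers the panic here.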

You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.

  kernel panic (General protection fault) on module hpsa

Status in linux package in Ubuntu:

Bug description:
  The following has been brought to my attention:

  Kernel version: 4.2.0-30-generic #36~14.04.1-Ubuntu

  When running an sosreport on HP DL380 Gen8 machines running this
  kernel (Ubuntu 14.04.4 using linux-generic-lts-wily), which includes
  hpsa 3.4.10-0, hpsa causes a kernel panic when sosreport is scanning
  block devices. These are machines with an onboard P420i and a
  daughtercard P420 RAID controller, with each drive in a single RAID 0
  configuration (not ideal, but the machines do not boot when the card
  is in HBA mode).

  This panic does not happen on kernel 3.13 with hpsa 3.4.1-0 when using

  The odd thing is that kernel 4.2 / hpsa 3.4.10-0 is still the more
  stable combination: I have yet to see the earlier issue where the
  P420 would lock up on this version. One problem with this is that HP
  will, 99% of the time, require an sosreport when we raise any
  hardware issue, and I can no longer produce one on kernel 4.2
  machines because they kernel panic.

  I can reproduce this consistently on several other machines in our
  environment. Please let me know if you would like more info.
