[Bug 1351528] Re: boot fails for LVM on degraded raid due to missing device tables at premount
See https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1077650
--
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to lvm2 in Ubuntu.
https://bugs.launchpad.net/bugs/1351528
Title:
boot fails for LVM on degraded raid due to missing device tables at
premount
Status in “lvm2” package in Ubuntu:
Confirmed
Bug description:
This is a Trusty installation with combined root + /boot within LVM on top of mdraid (type 1.x) RAID1. The RAID1 was built with one disk missing (degraded).
[method: basically create the raid/VG/LV setup in a shell first, then point the installer at the LVM. At the end of the install, create a chroot, add the mdadm package, and run update-initramfs before rebooting. Rough sketch below.]
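For reference, a minimal sketch of that setup, assuming the second RAID member is absent at install time; device names, sizes and the vg0/root naming are illustrative, not the exact ones used here:

  # create the degraded RAID1 with one member deliberately missing
  mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 /dev/sda2 missing
  # put LVM on top and carve out the root LV
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 20G -n root vg0
  # ... run the installer against /dev/vg0/root ...
  # then, before rebooting, add mdadm to the installed system and rebuild the initramfs
  mount /dev/vg0/root /mnt
  for d in dev proc sys; do mount --bind /$d /mnt/$d; done
  chroot /mnt apt-get install -y mdadm
  chroot /mnt update-initramfs -u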
The boot process fails with the following messages:
Incrementally starting RAID arrays...
mdadm: CREATE user root not found
mdadm: CREATE group disk not found
Incrementally starting RAID arrays...
and the above slowly repeats from this point on.
workaround:
- add break=premount to the grub kernel line entry (example below)
- for continued visibility of the text boot output, also remove quiet and splash, and possibly set gfxmode to 640x480
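As an example, the edited entry in the GRUB edit screen ends up looking roughly like this (kernel version and VG/LV names are illustrative):

  set gfxmode=640x480
  linux /boot/vmlinuz-3.13.0-32-generic root=/dev/mapper/vg0-root ro break=premount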
now @ initramfs prompt:
mdadm --detail /dev/md0 should indicate a state of "clean, degraded"; the array is started, so this part is ok
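i.e. the relevant lines of the output look roughly like this (counts trimmed/illustrative):

  (initramfs) mdadm --detail /dev/md0
  ...
     Raid Devices : 2
    Total Devices : 1
  ...
            State : clean, degraded
  ...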
lvm lvs output attributes are as follows:
-wi-d---- (instead of the expected -wi-a----)
Per the lvs manpage, the "d" means the device is present without device-mapper tables.
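That is, something like (VG/LV names illustrative):

  (initramfs) lvm lvs
    LV   VG   Attr      LSize
    root vg0  -wi-d---- 20.00g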
FIX: simply run lvm vgchange -ay and exit the initramfs. This will lead
to a booting system.
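At the prompt that is just the following (the vgchange output line is illustrative):

  (initramfs) lvm vgchange -ay
    1 logical volume(s) in volume group "vg0" now active
  (initramfs) exit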
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1351528/+subscriptions