
kernel-packages team mailing list archive

[Bug 1396213] Re: LVM VG is not activated during system boot

 

I don't hit this bug.

I have at least 1 snapshot going on my "/home" partition all the time.

The VG that /home is in contains most of my partitions (26), with
2 more partitions on a separate (VG+PD's) VG.

Now, I've noticed that when I'm booting it *does* take a bit of time to
bring up and mount all of the LVs, but note that my root mount is NOT
on a VG/LV -- it's on a "regular device" (numbers on the left are with kernel
time printing turned on, so they are seconds after boot):

[    4.207621] XFS (sdc1): Mounting V4 Filesystem
[    4.278746] XFS (sdc1): Starting recovery (logdev: internal)
[    4.370757] XFS (sdc1): Ending recovery (logdev: internal)
[    4.379839] VFS: Mounted root (xfs filesystem) on device 8:33.
..
[    4.449462] devtmpfs: mounted
... last msg before my "long pause", where pretty much everything
gets activated:
[    4.591580] input: Dell Dell USB Keyboard as /devices/pci0000:00/0000:00:1a.7/usb1/1-3/1-3.2/1-3.2:1.0/0003:413C:2003.0002/input/input4
[    4.604588] hid-generic 0003:413C:2003.0002: input,hidraw1: USB HID v1.10 Keyboard [Dell Dell USB Keyboard] on usb-0000:00:1a.7-3.2/input0
[   19.331731] showconsole (170) used greatest stack depth: 13080 bytes left
[   19.412411] XFS (sdc6): Mounting V4 Filesystem
[   19.505374] XFS (sdc6): Ending clean mount
.... more mostly unrelated messages... then you start seeing "dm's" mixed in
with the mounting messages -- just before kernel logging stops:

[   22.205351] XFS (sdc2): Mounting V4 Filesystem
[   22.205557] XFS (sdc3): Mounting V4 Filesystem
[   22.216414] XFS (dm-5): Mounting V4 Filesystem
[   22.217893] XFS (dm-6): Mounting V4 Filesystem
[   22.237345] XFS (dm-1): Mounting V4 Filesystem
[   22.245201] XFS (dm-8): Mounting V4 Filesystem
[   22.267971] XFS (dm-13): Mounting V4 Filesystem
[   22.293152] XFS (dm-15): Mounting V4 Filesystem
[   22.299737] XFS (sdc8): Mounting V4 Filesystem
[   22.340692] XFS (sdc2): Ending clean mount
[   22.373169] XFS (sdc3): Ending clean mount
[   22.401381] XFS (dm-5): Ending clean mount
[   22.463974] XFS (dm-13): Ending clean mount
[   22.474813] XFS (dm-1): Ending clean mount
[   22.494807] XFS (dm-8): Ending clean mount
[   22.505380] XFS (sdc8): Ending clean mount
[   22.544059] XFS (dm-15): Ending clean mount
[   22.557865] XFS (dm-6): Ending clean mount
[   22.836244] Adding 8393924k swap on /dev/sdc5.  Priority:-1 extents:1 across:8393924k FS
Kernel logging (ksyslog) stopped.
Kernel log daemon terminating.
-----
A couple of things are different about my setup from the 'norm' --
1) since my distro (openSUSE) jumped to systemd (and I haven't), I had to write
some rc scripts to help bring up the system.
2) one reason for this was that my "/usr" partition is separate from root, and
my distro decided to move many libs/bins to /usr and leave symlinks on the
root device pointing at the programs in /usr.  One of those was 'mount' (and its associated libs).

That meant that once the rootfs was booted I had no way to mount /usr, where most
of the binaries are.  (I asked why they didn't do it the "safe way" -- move most
of the binaries to /bin & /lib64 and put symlinks in /usr -- but they evaded
answering that question for ~2 years.)  So one script I run after updating my
system is a dependency checker that checks mount order and tries to make sure
that early-mounted disks don't have dependencies on later-mounted disks.
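As a rough illustration of what such a dependency check can look at, here is a minimal sketch (the function name and message text are mine, not the actual script): it flags fstab entries where a child mount point is ordered before its parent.

```shell
#!/bin/sh
# Minimal sketch of a mount-order check: warn when a mount point (e.g.
# /usr/share) is listed in fstab *before* its parent (e.g. /usr).
# Assumes mount points contain no whitespace.
check_mount_order() {
    fstab="$1"
    # 2nd fstab field is the mount point; skip comments and short lines.
    mounts=$(awk '!/^#/ && NF >= 2 { print $2 }' "$fstab")
    status=0
    seen=""
    for mp in $mounts; do
        for prev in $seen; do
            case "$prev" in
                # A previously listed mount point lives *under* this one,
                # so the child was ordered before its parent.
                "$mp"/*) echo "WARN: $prev listed before its parent $mp"
                         status=1 ;;
            esac
        done
        seen="$seen $mp"
    done
    return $status
}
```

Run against /etc/fstab, it exits non-zero if any early entry depends on a later one.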

3) adding to my problem is that I don't use an initrd to boot; I boot
straight from my hard disk.  My distro folks thought they had solved the
problem by hiding the mount of /usr in the initrd, so when they start systemd
to control the boot, it is happy.  But since I boot from the HD, I was told my
~15-year-old configuration was no longer supported.  Bleh!

One thing that might account for the speed difference is that I don't wait for
udev to start my VGs ... and here is where I think I see my ~15-second
pause:

if test -d /etc/lvm -a -x /sbin/vgscan -a -x /sbin/vgchange ; then
    # Waiting for udev to settle
    if [ "$LVM_DEVICE_TIMEOUT" -gt 0 ] ; then
        echo "Waiting for udev to settle..."
        /sbin/udevadm settle --timeout=$LVM_DEVICE_TIMEOUT
    fi
    echo "Scanning for LVM volume groups..."
    /sbin/vgscan --mknodes
    echo "Activating LVM volume groups..."
    /sbin/vgchange -a y $LVM_VGS_ACTIVATED_ON_BOOT
    mount -c -a -F
...
So at the point where I have a pause, I'm doing vgscan and vgchange, then
a first shot at mounting "all" (it was the easiest thing to fix/change).

Without that mount-all attempt in the 4th boot script to execute (boot.lvm),
I often had long timeouts in the boot process.  But as you can see, I
tell mount to fork (-F) and try to mount all of the FSs at the same time.  I'm
pretty sure that's where the pause is, given that right after the pause
XFS starts issuing messages about "dm's" being mounted.
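For illustration, the fork-per-filesystem behavior of `mount -a -F` amounts to roughly the following (a sketch only, not the util-linux implementation; the helper name is made up):

```shell
#!/bin/sh
# Fork one worker per fstab entry and wait for all of them, so one slow
# mount overlaps with the others instead of serializing the boot.
# "$@" is the per-entry command (normally `mount`); mount points with
# whitespace are not handled.
parallel_for_fstab() {
    fstab="$1"; shift
    while read -r dev mp; do
        [ -n "$dev" ] && "$@" "$dev" "$mp" &   # one child per entry
    done <<EOF
$(awk '!/^#/ && NF >= 2 { print $1, $2 }' "$fstab")
EOF
    wait    # return only once every child has finished
}
```

Invoked as `parallel_for_fstab /etc/fstab mount`, it mounts everything concurrently, which is the effect the -F flag buys above.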

Somewhere around script #8 are my distro's "localfs" mounts -- but for me
that was way too late, since many of the boot utils used not only
/usr but /usr/share (split out into another partition after it grew too big --
and it *is* on a VG).

In summary -- I had very long waits (minutes) using the distro boot
scripts (people report even longer wait times using the systemd startup
method when they are also on non-SSD disks).  But after I made my own
changes, out of necessity, nominal boot times (no error/timeout problems)
dropped from around 35 seconds to around 25 (it's a server,
so lots of server processes, but no desktop).

So it seems like the dm devices aren't being announced to udev properly (maybe?),
but the early "mount all, in parallel, in the background" in my boot process
seems to have solved most of the time-delay problems.

I *do* get duplicate mount messages later on, when the distro scripts try to
mount everything, but they seem to be harmless at this point:
tmpfs                    : already mounted
tmpfs                    : already mounted
/run                     : successfully mounted
/dev/sdc6                : already mounted
/dev/Data/UsrShare       : already mounted
/dev/sdc2                : already mounted
/var/rtmp                : successfully mounted
/dev/Data/Home.diff      : already mounted
/dev/sdc3                : already mounted
/dev/Data/Home           : already mounted
/dev/Data/Share          : already mounted
/dev/Data/Media          : already mounted
/dev/sdc8                : already mounted
/dev/Data/cptest         : already mounted
---
As for LVs, most are in the same VG, 'Data':
  Home                     Data   owc-aos--    1.50t
  Home-2015.02.17-03.07.03 Data   -wi-ao---  796.00m
  Home-2015.03.01-03.07.03 Data   -wi-ao---  884.00m
  Home-2015.03.03-03.07.03 Data   -wi-ao---  812.00m
  Home-2015.03.05-03.07.03 Data   -wi-ao---  868.00m
  Home-2015.03.07-03.07.02 Data   -wi-ao---  740.00m
  Home-2015.03.09-03.07.02 Data   -wi-ao---  856.00m
  Home-2015.03.11-03.07.03 Data   -wi-ao---    1.14g
  Home-2015.03.12-03.07.02 Data   -wi-ao---  868.00m
  Home-2015.03.13-03.07.06 Data   -wi-ao--- 1000.00m
  Home-2015.03.16-03.07.12 Data   -wi-ao---  840.00m
  Home-2015.03.17-03.07.03 Data   -wi-ao---  888.00m
  Home-2015.03.19-03.07.03 Data   swi-aos--    1.50t      Home     0.65
  Home.diff                Data   -wi-ao---  512.00g
  Lmedia                   Data   -wi-ao---    8.00t
  Local                    Data   -wi-ao---    1.50t
  Media                    Data   -wc-ao---   10.00t
  Share                    Data   -wc-ao---    1.50t
  Squid_Cache              Data   -wc-ao---  128.00g
  Sys                      Data   -wc-a----   96.00g
  Sysboot                  Data   -wc-a----    4.00g
  Sysvar                   Data   -wc-a----   28.00g
  UsrShare                 Data   -wc-ao---   50.00g
  Win                      Data   -wi-a----    1.00t
  cptest                   Data   -wi-ao---    5.00g
---
As you can see, I have the one snapshot of Home (which gets used to
generate the time-labeled snaps)....

So maybe try adding a "2nd" mount of the file systems right after udev & LVM
activation.  My startup scripts, up to the point where I do the mount:
S01boot.sysctl@
S01boot.usr-mount@
S02boot.udev@
S04boot.lvm@
--- the lvm script is where I added the early mount w/fork.
(I did have to make sure my mount/fsck order in fstab was correct.  Since the
disks are all XFS, the fsck is mostly a no-op, so I usually point fsck.xfs at
/bin/true.)

Anyway, it's a fairly trivial test...

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1396213

Title:
  LVM VG is not activated during system boot

Status in One Hundred Papercuts:
  Confirmed
Status in initramfs-tools package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed
Status in lvm2 package in Ubuntu:
  Confirmed

Bug description:
  Hi all,

  I'm opening this report based on the linked conversation I had on the linux-lvm mailing list, and on the Ask Ubuntu question I posted regarding this case.
  https://www.redhat.com/archives/linux-lvm/2014-November/msg00023.html
  https://www.redhat.com/archives/linux-lvm/2014-November/msg00024.html
  http://askubuntu.com/questions/542656/lvm-vg-is-not-activated-during-system-boot

  I have 2 VGs on my system, and for some reason, only one of them gets
  activated during the initrd boot sequence, which doesn't have my root
  LV, so my boot sequence halts with an initrd prompt.

  When I get to the initrd BusyBox prompt, I can see my LVs are inactive
  with "lvm lvscan" – then, "lvm vgchange -ay" brings them online. The
  boot sequence continues as I exit the BusyBox prompt. The expected
  behaviour would be that both VGs should activate automatically.
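  A minimal sketch of how that manual step could be automated, assuming
  initramfs-tools conventions (the hook name and path are illustrative, not
  something from this report): drop a local-top script in place, make it
  executable, and rebuild the initrd with `update-initramfs -u`.

```shell
#!/bin/sh
# Hypothetical /etc/initramfs-tools/scripts/local-top/forcelvm: re-run VG
# activation late in local-top so both VGs come up without the BusyBox stop.
# Install executable, then rebuild the initrd with `update-initramfs -u`.
PREREQ="lvm2"
prereqs() { echo "$PREREQ"; }
case "${1:-}" in
    prereqs) prereqs; exit 0 ;;
esac
# Activate every VG visible at this point; errors are ignored so the
# normal boot path still proceeds.
lvm vgchange -a y >/dev/null 2>&1 || true
```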

  On the LVM mailing list, I was advised it may be a problem with the Ubuntu
  initrd scripts, hence I report this problem here. (I'm not sure if I'm
  reporting it to the correct place by assigning it to "linux", but I
  didn't find a package directly related to the initrd only, so I
  assumed the initrd scripts are maintained by the kernel team. If you think
  that's an error and know the correct package, please reassign!) Our
  suspicion is that the initrd prematurely issues "vgchange -ay" before all
  the PVs come online. This makes sense.
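  If that suspicion is right, the kind of fix would be a retry loop rather
  than a one-shot activation -- a sketch under that assumption (the function
  name and timeout are mine, not the actual Ubuntu initramfs code):

```shell
#!/bin/sh
# Sketch: retry VG activation until the wanted device node (e.g. the root
# LV) exists, or give up after a timeout.  Output from `lvm` is discarded
# because early in boot it may complain about PVs that haven't appeared yet.
wait_and_activate() {
    dev="$1"            # device node we are waiting for
    tries="${2:-30}"    # roughly, seconds to keep trying
    while [ "$tries" -gt 0 ]; do
        lvm vgchange -a y >/dev/null 2>&1   # new PVs may have shown up
        [ -e "$dev" ] && return 0
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}
```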

  I already tried setting the "rootdelay" kernel parameter to make the initrd
  wait longer for the boot LV to come up, but it didn't work. Note,
  however, that it worked with previous kernel versions.

  The problem came up when I upgraded to Utopic, and got kernel vmlinuz-3.16.0-23-generic. When I boot with the old kernel from Trusty (vmlinuz-3.13.0-37-generic), it works fine.
  ---
  AlsaVersion: Advanced Linux Sound Architecture Driver Version k3.16.0-23-generic.
  AplayDevices: Error: [Errno 2] No such file or directory
  ApportVersion: 2.14.7-0ubuntu8
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/by-path', '/dev/snd/pcmC1D2p', '/dev/snd/pcmC1D1c', '/dev/snd/pcmC1D0c', '/dev/snd/pcmC1D0p', '/dev/snd/controlC1', '/dev/snd/hwC0D0', '/dev/snd/pcmC0D3p', '/dev/snd/controlC0', '/dev/snd/timer'] failed with exit code 1:
  CRDA: Error: [Errno 2] No such file or directory
  Card0.Amixer.info: Error: [Errno 2] No such file or directory
  Card0.Amixer.values: Error: [Errno 2] No such file or directory
  Card1.Amixer.info: Error: [Errno 2] No such file or directory
  Card1.Amixer.values: Error: [Errno 2] No such file or directory
  DistroRelease: Ubuntu 14.10
  HibernationDevice: RESUME=/dev/mapper/vmhost--vg-vmhost--swap0
  InstallationDate: Installed on 2013-12-06 (354 days ago)
  InstallationMedia: Ubuntu-Server 13.10 "Saucy Salamander" - Release amd64 (20131016)
  Lsusb:
   Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
   Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
  MachineType: WinFast 6150M2MA
  Package: linux (not installed)
  ProcEnviron:
   LANGUAGE=en_US:en
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 radeondrmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-3.16.0-23-generic root=/dev/mapper/hostname--vg-hostname--rootfs ro rootflags=subvol=@ rootdelay=300
  ProcVersionSignature: Ubuntu 3.16.0-23.31-generic 3.16.4
  RelatedPackageVersions:
   linux-restricted-modules-3.16.0-23-generic N/A
   linux-backports-modules-3.16.0-23-generic  N/A
   linux-firmware                             1.138
  RfKill: Error: [Errno 2] No such file or directory
  Tags:  utopic utopic
  Uname: Linux 3.16.0-23-generic x86_64
  UnreportableReason: The report belongs to a package that is not installed.
  UpgradeStatus: Upgraded to utopic on 2014-10-28 (28 days ago)
  UserGroups:

  _MarkForUpload: True
  dmi.bios.date: 01/19/2008
  dmi.bios.vendor: Phoenix Technologies, LTD
  dmi.bios.version: 686W1D28
  dmi.board.name: 6150M2MA
  dmi.board.vendor: WinFast
  dmi.board.version: FAB2.0
  dmi.chassis.type: 3
  dmi.chassis.vendor: WinFast
  dmi.modalias: dmi:bvnPhoenixTechnologies,LTD:bvr686W1D28:bd01/19/2008:svnWinFast:pn6150M2MA:pvrFAB2.0:rvnWinFast:rn6150M2MA:rvrFAB2.0:cvnWinFast:ct3:cvr:
  dmi.product.name: 6150M2MA
  dmi.product.version: FAB2.0
  dmi.sys.vendor: WinFast
  ---
  ApportVersion: 2.14.7-0ubuntu8
  Architecture: amd64
  DistroRelease: Ubuntu 14.10
  InstallationDate: Installed on 2013-12-06 (355 days ago)
  InstallationMedia: Ubuntu-Server 13.10 "Saucy Salamander" - Release amd64 (20131016)
  Package: linux (not installed)
  ProcEnviron:
   LANGUAGE=en_US:en
   TERM=linux
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  Tags:  utopic
  Uname: Linux 3.18.0-031800rc6-generic x86_64
  UnreportableReason: The running kernel is not an Ubuntu kernel
  UpgradeStatus: Upgraded to utopic on 2014-10-28 (29 days ago)
  UserGroups:

  _MarkForUpload: True

To manage notifications about this bug go to:
https://bugs.launchpad.net/hundredpapercuts/+bug/1396213/+subscriptions
