kernel-packages team mailing list archive

[Bug 1371526] Re: ceph-disk-prepare command always fails; new partition table not available until reboot

 

I poked at this with some additional volumes attached to the cloud
instance:

Offending device:

ubuntu@juju-t-machine-22:~$ sudo umount /dev/vdb
ubuntu@juju-t-machine-22:~$ sudo lsof | grep vdb
jbd2/vdb-  1268             root  cwd       DIR              253,1     4096          2 /
jbd2/vdb-  1268             root  rtd       DIR              253,1     4096          2 /
jbd2/vdb-  1268             root  txt   unknown                                        /proc/1268/exe

Additional device:

ubuntu@juju-t-machine-22:~$ sudo mount /dev/vdc /mnt2
ubuntu@juju-t-machine-22:~$ sudo lsof | grep vdc
jbd2/vdc- 16058             root  cwd       DIR              253,1     4096          2 /
jbd2/vdc- 16058             root  rtd       DIR              253,1     4096          2 /
jbd2/vdc- 16058             root  txt   unknown                                        /proc/16058/exe
ubuntu@juju-t-machine-22:~$ sudo umount /dev/vdc
ubuntu@juju-t-machine-22:~$ sudo lsof | grep vdc

As you can see, the jbd2 kernel thread for vdb hangs around after the
unmount, while the one for vdc exits cleanly. I think that lingering
thread is what keeps the old partition table locked in the kernel and
hence stale.
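
If the lingering jbd2 thread really is what pins the old table, then
once it exits the kernel should accept a partition table re-read
without a reboot. A minimal sketch of what I would try next (assuming
parted and util-linux are installed, as on a stock cloud image):

# Check whether the jbd2 kernel thread for vdb is still running
ps -ef | grep '[j]bd2/vdb'

# If it has exited, ask the kernel to re-read the partition table;
# either of these should do it
sudo partprobe /dev/vdb
sudo blockdev --rereadpt /dev/vdb

# Confirm the kernel now sees the new partitions
grep vdb /proc/partitions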


** Also affects: linux (Ubuntu)
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1371526

Title:
  ceph-disk-prepare command always fails; new partition table not
  available until reboot

Status in “ceph” package in Ubuntu:
  New
Status in “linux” package in Ubuntu:
  New

Bug description:
  $ sudo ceph-disk-prepare --fs-type xfs --zap-disk /dev/vdb
  Caution: invalid backup GPT header, but valid main header; regenerating
  backup header from main header.

  Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
  on the recovery & transformation menu to examine the two tables.

  Warning! One or more CRCs don't match. You should repair the disk!

  ****************************************************************************
  Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
  verification and recovery are STRONGLY recommended.
  ****************************************************************************
  Warning: The kernel is still using the old partition table.
  The new table will be used at the next reboot.
  GPT data structures destroyed! You may now partition the disk using fdisk or
  other utilities.
  Warning: The kernel is still using the old partition table.
  The new table will be used at the next reboot.
  The operation has completed successfully.
  Warning: The kernel is still using the old partition table.
  The new table will be used at the next reboot.
  The operation has completed successfully.
  Warning: The kernel is still using the old partition table.
  The new table will be used at the next reboot.
  The operation has completed successfully.
  mkfs.xfs: cannot open /dev/vdb1: Device or resource busy
  ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/vdb1']' returned non-zero exit status 1

  I can reproduce this consistently across ceph nodes; it also impacts
  the way we use swift for storage.
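
  For anyone wanting to confirm the stale view, comparing the on-disk
  GPT with the kernel's idea of the disk shows the mismatch (a sketch;
  sgdisk comes from the gdisk package, which emitted the warnings
  above):

  # Partition table as written to disk by ceph-disk-prepare
  sudo sgdisk --print /dev/vdb

  # Partition table the kernel is still using
  grep vdb /proc/partitions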

  ProblemType: Bug
  DistroRelease: Ubuntu 14.10
  Package: ceph 0.80.5-1
  ProcVersionSignature: User Name 3.16.0-16.22-generic 3.16.2
  Uname: Linux 3.16.0-16-generic x86_64
  ApportVersion: 2.14.7-0ubuntu2
  Architecture: amd64
  Date: Fri Sep 19 09:39:18 2014
  Ec2AMI: ami-00000084
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small
  Ec2Kernel: aki-00000002
  Ec2Ramdisk: ari-00000002
  SourcePackage: ceph
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1371526/+subscriptions