kernel-packages team mailing list archive
Message #80999
[Bug 1371526] Re: ceph-disk-prepare command always fails; new partition table not available until reboot
We really need information about how to set up a system that runs
into this issue. In particular, how is the cloud-init ephemeral disk
created? I still cannot reproduce the problem (creating an ext3
filesystem outside the guest, putting something into it, mounting it
by label from the guest, unmounting it, then mounting it again and
writing something all works fine). So we need as much information as
possible about what is done to the ephemeral disk, from its creation
to the point where umount fails.
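The reproduction attempt described above can be sketched as a script. This is a minimal sketch under assumptions: it uses a loop-backed image so no real disk is touched, the label `eph0` and file names are illustrative, and it simply skips when it cannot run (no root, no loop devices, or mounting not permitted in the environment).

```shell
# status records whether the steps actually ran; environments that
# cannot mount loop devices skip rather than fail.
status=skipped
if [ "$(id -u)" -eq 0 ] && command -v losetup >/dev/null 2>&1 \
   && command -v mkfs.ext3 >/dev/null 2>&1; then
  if (
    set -e
    img=$(mktemp)
    truncate -s 64M "$img"
    dev=$(losetup --find --show "$img")
    mkfs.ext3 -q -L eph0 "$dev"     # make an ext3 fs "outside" the guest
    mnt=$(mktemp -d)
    mount LABEL=eph0 "$mnt"         # mount it by label
    echo hello > "$mnt/f"           # put something into it
    umount "$mnt"
    mount LABEL=eph0 "$mnt"         # mount it again and write something
    echo again > "$mnt/g"
    umount "$mnt"                   # the step that reportedly fails
    losetup -d "$dev"
    rm -f "$img"
    rmdir "$mnt"
  ) >/dev/null 2>&1; then
    status=ok
  fi
fi
echo "$status"
```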
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1371526
Title:
ceph-disk-prepare command always fails; new partition table not
available until reboot
Status in “ceph” package in Ubuntu:
New
Status in “linux” package in Ubuntu:
Incomplete
Bug description:
$ sudo ceph-disk-prepare --fs-type xfs --zap-disk /dev/vdb
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.
Warning! One or more CRCs don't match. You should repair the disk!
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
mkfs.xfs: cannot open /dev/vdb1: Device or resource busy
ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/vdb1']' returned non-zero exit status 1
I can reproduce this consistently across Ceph nodes; it also impacts
the way we use Swift for storage.
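The repeated "kernel is still using the old partition table" warnings in the log suggest the kernel never re-read the new GPT, so mkfs.xfs hit the stale /dev/vdb1. A hedged workaround sketch follows; this is an assumption, not a fix confirmed in this report, and reread_partitions is a hypothetical helper name (/dev/vdb is the device from the log above).

```shell
# Hypothetical workaround sketch: ask the running kernel to re-read the
# partition table instead of waiting for a reboot, then wait for udev
# before calling mkfs.
reread_partitions() {
  local disk="$1"
  # partprobe asks the kernel to re-scan the table; it fails while a
  # partition on the disk is still held open (e.g. mounted).
  partprobe "$disk" 2>/dev/null || blockdev --rereadpt "$disk" 2>/dev/null || return 1
  # wait for udev to (re)create the partition device nodes, e.g. /dev/vdb1
  command -v udevadm >/dev/null 2>&1 && udevadm settle
  return 0
}

# usage (device name from the report above):
#   reread_partitions /dev/vdb && mkfs.xfs -f /dev/vdb1
```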
ProblemType: Bug
DistroRelease: Ubuntu 14.10
Package: ceph 0.80.5-1
ProcVersionSignature: Ubuntu 3.16.0-16.22-generic 3.16.2
Uname: Linux 3.16.0-16-generic x86_64
ApportVersion: 2.14.7-0ubuntu2
Architecture: amd64
Date: Fri Sep 19 09:39:18 2014
Ec2AMI: ami-00000084
Ec2AMIManifest: FIXME
Ec2AvailabilityZone: nova
Ec2InstanceType: m1.small
Ec2Kernel: aki-00000002
Ec2Ramdisk: ari-00000002
SourcePackage: ceph
UpgradeStatus: No upgrade log present (probably fresh install)
---
ApportVersion: 2.14.7-0ubuntu2
Architecture: amd64
DistroRelease: Ubuntu 14.10
Ec2AMI: ami-00000084
Ec2AMIManifest: FIXME
Ec2AvailabilityZone: nova
Ec2InstanceType: m1.small
Ec2Kernel: aki-00000002
Ec2Ramdisk: ari-00000002
Package: linux
PackageArchitecture: amd64
ProcVersionSignature: Ubuntu 3.16.0-16.22-generic 3.16.2
Tags: utopic ec2-images
Uname: Linux 3.16.0-16-generic x86_64
UpgradeStatus: No upgrade log present (probably fresh install)
UserGroups:
_MarkForUpload: True
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1371526/+subscriptions