kernel-packages team mailing list archive - Message #44051
[Bug 27037] Re: mdadm cannot assemble array as cannot open drive with O_EXCL
This "ubuntu feature" has even earned a special note at linux-raid wiki:
https://raid.wiki.kernel.org/index.php/RAID_setup#Saving_your_RAID_configuration
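For reference, the workaround that wiki section describes amounts to writing the array definitions into mdadm.conf so the boot scripts assemble the arrays consistently. A rough sketch, assuming the usual Debian/Ubuntu paths (the exact file location, and whether an initramfs rebuild is needed, varies by release):
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u   # so early boot picks up the updated config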
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/27037
Title:
mdadm cannot assemble array as cannot open drive with O_EXCL
Status in “linux” package in Ubuntu:
Fix Released
Bug description:
Further to discussions (well, monologue!) on the forums here -
http://ubuntuforums.org/showthread.php?p=563255&posted=1
I've had two raid arrays working on my Breezy machine for several
months.
/dev/md0 is a raid 5 built from /dev/sd[a-d]
/dev/md1 is a raid 0 built from /dev/sd[e-h]
I rebooted the server so I could change the power leads around and /dev/md1 won't assemble - it
says no drives. I have recently done a dist-upgrade and don't recall if I rebooted afterwards.
/dev/md0 is working fine.
---------------------------------------------------
Here is what I get if I try manually -
sudo mdadm --assemble /dev/md1 /dev/sd[e-h]
mdadm: cannot open device /dev/sde: Device or resource busy
mdadm: /dev/sde has no superblock - assembly aborted
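As an aside, a generic first check for this kind of EBUSY is to stop any partially assembled array that may already have claimed the member disks, then retry the assemble; if nothing was started, --stop fails harmlessly:
sudo mdadm --stop /dev/md1
sudo mdadm --assemble /dev/md1 /dev/sd[e-h]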
---------------------------------------------------
But --examine is happy with all the drives that make up the array
sudo mdadm --examine /dev/sde
/dev/sde:
          Magic : a92b4efc
        Version : 00.90.01
           UUID : 22522c98:40ff7e71:c16d6be5:d6401d24
  Creation Time : Fri May 20 13:56:01 2005
     Raid Level : raid0
    Device Size : 293036096 (279.46 GiB 300.07 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1

    Update Time : Fri May 20 13:56:01 2005
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 3abea2a9 - correct
         Events : 0.1

     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8      112        3      active sync   /dev/sdh

   0     0       8       64        0      active sync   /dev/sde
   1     1       8       80        1      active sync   /dev/sdf
   2     2       8       96        2      active sync   /dev/sdg
   3     3       8      112        3      active sync   /dev/sdh
The array definitely doesn't exist, as this shows -
cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sda[3] sdb[2] sdd[1] sdc[0]
      732595392 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
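Since --examine can read the superblocks, another thing worth trying, using the UUID from the --examine output above, is letting mdadm locate the members itself instead of naming the devices. If something genuinely holds the disks with O_EXCL this fails the same way, but it rules out device names having shifted across the reboot:
sudo mdadm --assemble /dev/md1 --scan --uuid=22522c98:40ff7e71:c16d6be5:d6401d24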
---------------------------------------------------
More info from using strace
open("/dev/sde", O_RDONLY|O_EXCL) = -1 EBUSY (Device or resource busy)
write(2, "mdadm: cannot open device /dev/s"..., 60mdadm: cannot open device /dev/sde: Device or
resource busy
) = 60
write(2, "mdadm: /dev/sde has wrong uuid.\n", 32mdadm: /dev/sde has wrong uuid.
) = 32
It looks like the exclusive open of /dev/sde is failing. I tried using lsof to see what else had sde open, but couldn't see anything.
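Worth noting: lsof only sees userspace opens, so a device claimed inside the kernel (by the md or device-mapper drivers, for example) will never show up there. On kernels that expose it, the sysfs holders directory shows which kernel component has claimed the disk:
ls /sys/block/sde/holders/
If that lists an md or dm device, that is what is blocking the O_EXCL open.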
All ideas welcome, but I'm really worried that I might do something to get my /dev/md0 array into the same state, and that array is needed 24x7.
Note that this machine has been upgraded from Hoary, and I've also renumbered the mdadm-raid script in /etc/rcS.d, since in its default position it was running before hotplug.
I'm reporting this as a kernel-package bug as I really don't know
where else to put it!
Thanks
Ian
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/27037/+subscriptions