I'm having some problems with my disks (long story), but right now I'm trying to mount a device that was part of a two-disk RAID1. To do that, I tried to assemble it into a new array, but:
% mdadm -Av /dev/md2 /dev/sdc1
mdadm: looking for devices for /dev/md2
mdadm: /dev/sdc1 is identified as a member of /dev/md2, slot 2.
mdadm: No suitable drives found for /dev/md2
I was able to do a similar thing earlier, but right now it's not working with this device, and I can't figure out why.
More information:
% mdadm -E /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 0bf001f2:31c5e4d1:c44c77eb:7ee19756 (local to host sysresccd)
  Creation Time : Thu Mar 12 16:43:17 2009
     Raid Level : raid1
  Used Dev Size : 51199040 (48.83 GiB 52.43 GB)
     Array Size : 51199040 (48.83 GiB 52.43 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 124

    Update Time : Sat Feb 23 17:44:08 2013
          State : clean
 Active Devices : 1
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 1
       Checksum : c9e77cf6 - correct
         Events : 16015185
      Number   Major   Minor   RaidDevice   State
this     2       8      33         2        spare          /dev/sdc1

   0     0       8      49         0        active sync    /dev/sdd1
   1     1       0       0         1        faulty removed
   2     2       8      33         2        spare          /dev/sdc1
% fdisk -l /dev/sdc
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *          63   102398309    51199123+  fd  Linux raid autodetect
/dev/sdc2       102398310   110398679     4000185   82  Linux swap / Solaris
/dev/sdc3       110398680   976768064   433184692+  fd  Linux raid autodetect
Hmm, now I see that sdc1 appears as a spare, which is weird.
sdd1 is the other half of the array. I was able to mount each half separately earlier; both passed fsck, and the file data was readable. When I tried to put them back together, the resync failed because of a bad block on sdd (in free space, I assume). I'm not sure whether sdc1 has any bad blocks.
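One way to tell which half holds the fresher data is to compare the Events counter that mdadm -E prints for each member; the higher count is the more recently updated copy. A small parsing sketch (the sample line is the one from the -E output above; on a live system you would pipe mdadm -E /dev/sdc1 and mdadm -E /dev/sdd1 in instead, which needs root):

```shell
# Pull the Events counter out of saved `mdadm -E` output.
# The here-doc stands in for `mdadm -E /dev/sdc1`.
events=$(awk -F' : ' '/Events/ {print $2}' <<'EOF'
         Events : 16015185
EOF
)
echo "$events"   # prints 16015185
```

Running the same extraction against both members and comparing the two numbers shows which superblock saw writes last.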
Try listing missing in place of the missing drive to activate the array in degraded mode:
mdadm -Av /dev/md2 /dev/sdc1 missing
I don't think you can easily assemble the same array twice on one system while both halves are degraded.
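If assembly still refuses to start the array from a single member, the escape hatches documented in mdadm(8) are --run (start the array even though members are missing) and --force (accept a superblock that looks out of date). A dry-run sketch that only prints the command, since the real thing needs root and the actual disks (device names are the ones from the question):

```shell
# Dry run: echo the assemble command instead of executing it.
# --run   : start the array even if some members are absent
# --force : assemble even if a member's superblock seems stale
md=/dev/md2
member=/dev/sdc1
echo mdadm --assemble --run --force "$md" "$member"
# prints: mdadm --assemble --run --force /dev/md2 /dev/sdc1
```

Drop the echo to run it for real, but only after deciding which member has the data you want to keep.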
cat /proc/mdstat
will list your currently active RAID devices.
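For reference, a degraded two-disk raid1 shows up in /proc/mdstat roughly like the assumed sample below; [2/1] [U_] means only one of the two mirrors is active. The awk line picks out the array names (on a live system, point it at /proc/mdstat itself):

```shell
# List md array names from an mdstat-style snippet. The sample text is an
# assumed illustration, not captured from the asker's machine.
arrays=$(awk '/^md/ {print $1}' <<'EOF'
Personalities : [raid1]
md2 : active raid1 sdd1[0]
      51199040 blocks [2/1] [U_]
unused devices: <none>
EOF
)
echo "$arrays"   # prints md2
```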
If the array with sdd1 is OK, then try:
mdadm --manage /dev/md2 --add /dev/sdc1
# or
mdadm --manage /dev/md2 --re-add /dev/sdc1
(where /dev/md2 is the RAID device that contains sdd1)
User contributions licensed under CC BY-SA 3.0