I just bought a couple of 4 TB WD Reds, and for some reason, when I try to mirror them, the resulting array is only 2198.9 GB.
Both disks were partitioned with fdisk as Linux RAID Autodetect (ID fd), and the command used to create the array is:
mdadm --create /dev/md/mirror0 --level=mirror --raid-devices=2 /dev/sdc1 /dev/sdd1
The output of fdisk -l:
Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 4294967295 2147483647+ ee GPT
Partition 1 does not start on physical sector boundary.
Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002868b
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 218292223 109145088 83 Linux
/dev/sda2 218294270 234440703 8073217 5 Extended
/dev/sda5 218294272 234440703 8073216 82 Linux swap / Solaris
Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes
90 heads, 3 sectors/track, 28940878 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0xa36de11e
Device Boot Start End Blocks Id System
/dev/sdc1 2048 4294967294 2147482623+ fd Linux raid autodetect
Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes
90 heads, 3 sectors/track, 28940878 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x8708ffe6
Device Boot Start End Blocks Id System
/dev/sdd1 2048 4294967294 2147482623+ fd Linux raid autodetect
Disk /dev/md127: 2198.9 GB, 2198887792640 bytes
2 heads, 4 sectors/track, 536837840 cylinders, total 4294702720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
What am I doing wrong?
Your partitions are in fact only about 2 TB in size, so the mirror created on top of them is likewise only about 2 TB. Each disk's total sector count is almost double the number of sectors assigned to its partition.
Compare the information about the full device:
Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes
90 heads, 3 sectors/track, 28940878 cylinders, total 7814037168 sectors
With that of the raid:
Disk /dev/md127: 2198.9 GB, 2198887792640 bytes
2 heads, 4 sectors/track, 536837840 cylinders, total 4294702720 sectors
Sectors are 512 bytes, so 512 * 7814037168 = 4000787030016 or 4T.
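You can check this arithmetic directly in the shell; the figures come straight from the fdisk output quoted in the question:

```shell
# Whole disk: 7814037168 sectors of 512 bytes each
echo $(( 7814037168 * 512 ))     # 4000787030016 bytes, i.e. the full 4 TB

# MBR ceiling: sector counts are 32-bit, so at most 4294967295 sectors
echo $(( 4294967295 * 512 ))     # 2199023255040 bytes, i.e. about 2.2 TB
```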
Your partitions end at sector 4294967294, just under the 32-bit limit of 4294967295 sectors: 512 * 4294967295 = 2199023255040 bytes, or about 2.2 TB.
"Blocks" are 1k blocks, so you should see partition sizes in the neighborhood of 4000000000 blocks (more like 3.9 billion blocks actually) for your disk size.
The problem here is that you're using fdisk: older versions only write MBR partition tables, which cannot describe a partition larger than 2^32 sectors, i.e. 2 TiB with 512-byte sectors (equivalently, 2^31 1-KiB blocks). To create larger partitions, use parted with a GPT label instead.
Rebuild the partitions using the whole disk, then create your mirror again with the same command you showed here, and it should be fine.
From the output for /dev/sdb, we can assume you have a non-GPT-capable variant of fdisk: it shows only the protective "ee GPT" entry. As such, your RAID disks are partitioned using MBR (which is why fdisk can display their partitions at all). An MBR partition can have a maximum of 4294967295 sectors; with 512-byte sectors, that's 2 TiB minus 512 bytes.
The solution is simple, too: Use GPT.
User contributions licensed under CC BY-SA 3.0