I have an MDADM array with four 1TB disks in a RAID5 configuration.
Here's the relevant part of mdadm --detail /dev/md1:
        Version : 00.90
     Raid Level : raid5
     Array Size : 2929537920 (2793.83 GiB 2999.85 GB)
  Used Dev Size : 976512640 (931.28 GiB 999.95 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent
It shows that the array is 2.8TB. Here is fdisk -l /dev/md1:
Disk /dev/md1: 2999.8 GB, 2999846830080 bytes
2 heads, 4 sectors/track, 732384480 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
Disk identifier: 0xffffffff

Disk /dev/md1 doesn't contain a valid partition table
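(As a sanity check on these two reports: mdadm's "Array Size" is given in 1 KiB units, so converting it to bytes reproduces the figure fdisk prints.)

```shell
# mdadm "Array Size" is in 1 KiB units; convert to bytes
echo $((2929537920 * 1024))   # 2999846830080 -- the byte count fdisk reports
```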
fdisk also shows it to be 3TB. However, df -Th does not agree:
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md1       ext3  1.8T  1.8T     0 100% /
Why does everything show /dev/md1 to be 3TB, but the usable space is limited to only 2TB?
Other details: Ubuntu 10.10
$ tune2fs -l /dev/md1:
tune2fs 1.41.12 (17-May-2010)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          1b4e9420-61ee-4ffd-817a-28831f2aeaf2
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              122068992
Block count:              488255968
Reserved block count:     24412798
Free blocks:              69746292
Free inodes:              121882953
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      907
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              16
RAID stripe width:        32
Filesystem created:       Sun Oct 17 01:14:23 2010
Last mount time:          Mon Dec  6 01:24:31 2010
Last write time:          Mon Dec  6 00:45:41 2010
Mount count:              12
Maximum mount count:      23
Last checked:             Sun Dec  5 20:37:35 2010
Check interval:           15552000 (6 months)
Next check after:         Fri Jun  3 21:37:35 2011
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       55550208
Default directory hash:   half_md4
Directory Hash Seed:      2a386fcb-b24f-4f40-bf4c-7c03489b086c
Journal backup:           inode blocks
Did you add the 4th drive after creating the filesystem? Growing the array without growing the filesystem would leave the filesystem smaller than the underlying device.
Alternatively, maybe your ext3 has 1k blocks, as this would lead to a 2TB filesystem size limit (ext3 size limits).
Please post the output of "sudo tune2fs -l /dev/md1", and look closely at the Block size line.
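Multiplying the Block count by the Block size from the tune2fs output already posted above gives the actual ext3 size, which is well short of the array's capacity:

```shell
# ext3 size = Block count * Block size (values from the tune2fs output)
echo $((488255968 * 4096))   # 1999896444928 bytes, about 1.82 TiB
```

So the block size is 4 KiB (not 1 KiB), ruling out the 2TB block-size limit; the filesystem itself is simply smaller than the 2999846830080-byte device, which fits the drive-added-later theory. Growing ext3 to fill the device (e.g. with resize2fs, after a verified backup) would be the usual next step.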