RAID 5 across the hard disks on Hetzner


Hi, I ordered a server from Hetzner and added a 500 GB SSD to it. I ran installimage and I'm not sure software RAID is working on all three of my hard drives. How can I add software RAID to the newly added SSD as well?

I don't mind reinstalling the server.

Hard disks I have:

2 x 1TB SATA
1 x 500GB SSD

Here are my configs:

df -h Output

[root@CentOS-610-64-minimal ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        906G  886M  859G   1% /
tmpfs            16G     0   16G   0% /dev/shm
/dev/md1        496M   35M  436M   8% /boot
[root@CentOS-610-64-minimal ~]#

fdisk -l Output

[root@CentOS-610-64-minimal ~]# fdisk -l

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xca606b93

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2089    16777216   fd  Linux raid autodetect
/dev/sdb2            2089        2155      524288   fd  Linux raid autodetect
/dev/sdb3            2155       62261   482804056   fd  Linux raid autodetect

Disk /dev/sdc: 512.1 GB, 512110190592 bytes
255 heads, 63 sectors/track, 62260 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8b577ece

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        2089    16777216   fd  Linux raid autodetect
/dev/sdc2            2089        2155      524288   fd  Linux raid autodetect
/dev/sdc3            2155       62261   482804056   fd  Linux raid autodetect

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x595cad86

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        2089    16777216   fd  Linux raid autodetect
/dev/sda2            2089        2155      524288   fd  Linux raid autodetect
/dev/sda3            2155       62261   482804056   fd  Linux raid autodetect

Disk /dev/md1: 536 MB, 536805376 bytes
2 heads, 4 sectors/track, 131056 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/md0: 17.2 GB, 17179738112 bytes
2 heads, 4 sectors/track, 4194272 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/md2: 988.8 GB, 988782002176 bytes
2 heads, 4 sectors/track, 241401856 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000

cat /proc/mdstat Output

[root@CentOS-610-64-minimal ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sda3[0] sdb3[1] sdc3[3]
      965607424 blocks super 1.0 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

md0 : active raid1 sda1[0] sdb1[1] sdc1[2]
      16777088 blocks super 1.0 [3/3] [UUU]

md1 : active raid1 sda2[0] sdb2[1] sdc2[2]
      524224 blocks [3/3] [UUU]

unused devices: <none>

mdadm -D /dev/md0 Output

[root@CentOS-610-64-minimal ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Sat Oct  6 04:49:31 2018
     Raid Level : raid1
     Array Size : 16777088 (16.00 GiB 17.18 GB)
  Used Dev Size : 16777088 (16.00 GiB 17.18 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Oct  6 06:02:45 2018
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:0
           UUID : b4cf051f:22b30734:e45d5bca:cfff80e8
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1

mdadm -D /dev/md1 Output

[root@CentOS-610-64-minimal ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Sat Oct  6 04:49:31 2018
     Raid Level : raid1
     Array Size : 524224 (511.94 MiB 536.81 MB)
  Used Dev Size : 524224 (511.94 MiB 536.81 MB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sat Oct  6 04:53:41 2018
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           UUID : f1fd684a:98b3c1eb:776c2c25:004bd7b2
         Events : 0.23

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2

mdadm -D /dev/md2 Output

[root@CentOS-610-64-minimal ~]# mdadm -D /dev/md2
/dev/md2:
        Version : 1.0
  Creation Time : Sat Oct  6 04:49:37 2018
     Raid Level : raid5
     Array Size : 965607424 (920.88 GiB 988.78 GB)
  Used Dev Size : 482803712 (460.44 GiB 494.39 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Oct  6 11:02:41 2018
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : rescue:2
           UUID : 6ebb511f:a7000ca5:c98b1501:4d2b3707
         Events : 1330

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       3       8       35        2      active sync   /dev/sdc3

Hetzner installimage File

DRIVE1 /dev/sda
DRIVE2 /dev/sdb
DRIVE3 /dev/sdc

SWRAID 1
SWRAIDLEVEL 5

PART swap swap 16G
PART /boot ext3 512M
PART / ext4 all
Tags: linux, centos, linux-networking, software-raid, hetzner
asked on Server Fault Oct 6, 2018 by sweatbar

1 Answer


Your output shows that you have 2 x 2 TB disks (not 1 TB) plus the 500 GB SSD, and that you already have one RAID 5 and two RAID 1 arrays across all three drives:

md2 : active raid5 
md0 : active raid1
md1 : active raid1

As was mentioned in the comments, one SSD in a RAID 5 together with two conventional disks doesn't make much sense: every stripe spans all three members, so the array is held to the speed of the slowest disk and the SSD's speed advantage is lost.

I recommend a RAID 1 with the SSD, with the spinning disks set to write-mostly.

You create a RAID 1 from the SSD and a 500 GB partition on each of the two other disks, using --bitmap=internal, marking the spinning disks --write-mostly, and enabling --write-behind (which requires the internal bitmap). See man mdadm for details.
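
As a minimal sketch, assuming you reinstall and repartition so that /dev/sdc4 is a 500 GB partition on the SSD and /dev/sda4 and /dev/sdb4 are matching 500 GB partitions on the spinning disks (these device names are assumptions, not your current layout):

# Hypothetical device names; adjust to your actual partitioning.
# --write-mostly flags only the devices listed after it, so the
# SSD partition comes first and the spinning disks come last.
mdadm --create /dev/md3 --level=1 --raid-devices=3 \
      --bitmap=internal --write-behind=256 \
      /dev/sdc4 --write-mostly /dev/sda4 /dev/sdb4

256 is mdadm's default limit for outstanding write-behind requests; write-behind only takes effect on the write-mostly members.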

This writes everything to the SSD immediately and to the spinning disks asynchronously (the write-behind part). Reads are served from the fast SSD; only if the SSD fails is data read from the other disks. So you get SSD-speed reads and writes together with a mirror on the spinning disks in case the SSD fails.
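
You can check the result afterwards: in /proc/mdstat, write-mostly members are flagged with (W). For the hypothetical /dev/md3 above, the line would look roughly like this:

md3 : active raid1 sdc4[0] sda4[1](W) sdb4[2](W)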

The remaining 1.5 TB on the spinning disks can be combined into another RAID 1 for data that doesn't need fast access and doesn't fit into the 0.5 TB SSD-backed RAID.
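
Again only a sketch with assumed partition names, where /dev/sda5 and /dev/sdb5 are the remaining roughly 1.5 TB partitions on the two spinning disks:

# Hypothetical: mirror the remaining space on the two 2 TB disks.
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5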

answered on Server Fault Oct 6, 2018 by RalfFriedl • edited Oct 7, 2018 by RalfFriedl

User contributions licensed under CC BY-SA 3.0