Ubuntu 16.04 - frozen mdadm array

3

I had a working RAID5 array consisting of six 4 TB disks. smartd reported that one of the disks had started failing. I decided to do several things in one operation: 1) remove the failing disk, 2) add a new one to replace it, and 3) add a few more disks to the array and grow it.

Since I only had smaller disks for (3), I used LVM to join pairs of smaller disks into logical volumes larger than 4 TB.

Here's the sequence of what I ran:

1) vgcreate vg_sdi_sdj /dev/sdi1 /dev/sdj1
2) vgcreate vg_sdk_sdl /dev/sdk1 /dev/sdl1
3) lvcreate -l 100%FREE -n all vg_sdi_sdj
4) lvcreate -l 100%FREE -n all vg_sdk_sdl
5) mdadm --manage /dev/md1 --add /dev/sdg1
6) mdadm --manage /dev/md1 --add /dev/vg_sdi_sdj/all
7) mdadm --manage /dev/md1 --add /dev/vg_sdk_sdl/all
8) mdadm --manage /dev/md1 --fail /dev/sdc1
9) mdadm --grow --raid-devices=8 --backup-file=/home/andrei/grow_md1.bak /dev/md1

At first everything went almost smoothly, and the array started reshaping. The only oddity was that the backup file was never created. I was running

watch -n 1 mdadm --detail /dev/md1
nmon

in the background to keep an eye on things. While the reshape was running I could still access the array.

However, 9% into the process, all I/O on the array stopped, except for 100% reads on /dev/sdb and /dev/sdb1. Once I killed watch -n 1 mdadm, those stopped too.

Here's recent output from mdadm --detail:

/dev/md1:
        Version : 1.2
  Creation Time : Sun Jan  8 22:16:01 2017
     Raid Level : raid5
     Array Size : 19534430720 (18629.49 GiB 20003.26 GB)
  Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
   Raid Devices : 8
  Total Devices : 8
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Jan 15 21:38:17 2017
          State : clean, degraded, reshaping
 Active Devices : 7
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Reshape Status : 9% complete
  Delta Devices : 2, (6->8)

           Name : server:1  (local to host server)
           UUID : bec66f95:2975e7ae:8f8ba15c:8eb3a33f
         Events : 79504

Number   Major   Minor   RaidDevice State
   0       8       17        0      active sync   /dev/sdb1
   9     252        0        1      spare rebuilding   /dev/dm-0
   2       8       49        2      active sync   /dev/sdd1
   3       8      145        3      active sync   /dev/sdj1
   4       8      161        4      active sync   /dev/sdk1
   6       8      177        5      active sync   /dev/sdl1
   8     252        1        6      active sync   /dev/dm-1
   7       8      129        7      active sync   /dev/sdi1

I couldn't do any I/O on the array. htop showed one CPU core pegged at 100% doing I/O operations.

I rebooted the machine. The array didn't re-assemble on its own, so I re-assembled it manually by running:

mdadm --assemble /dev/md1 --force /dev/sdb1 /dev/sdd1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/vg_sdi_sdj/all /dev/vg_sdk_sdl/all

(after the reboot the disks changed names). LVM, however, correctly found the volume groups and logical volumes and brought them up.

Without --force it wouldn't play ball. With it, the array did assemble and showed the --detail report quoted above.

However, it still wouldn't allow any I/O, so the mount command froze (there is a single-disk LVM setup on top of the array, with an ext4 filesystem inside). htop again showed one CPU core pegged with I/O.
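For anyone debugging a similar freeze: the kernel exposes the reshape state through /proc/mdstat and sysfs, and these read-only checks are safe to run even on a hung array (paths assume the array is md1, as in this question):

```shell
# Overall array state and reshape progress bar
cat /proc/mdstat

# Current position of the reshape, in sectors
cat /sys/block/md1/md/reshape_position

# What the sync thread believes it is doing ("reshape" here)
cat /sys/block/md1/md/sync_action

# Per-member state (in_sync, spare, faulty, ...)
grep . /sys/block/md1/md/dev-*/state
```

If reshape_position stops advancing between reads while sync_action still says "reshape", the reshape thread is stuck rather than merely slow, which matches the hung-task trace in the journal below.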

Yet none of the disk activity LEDs were on.

At the moment I'm stuck with a non-functional array that has a good amount of data in it. Ideally I'd like to retrieve the data.

Perhaps using LVM logical volumes as mdadm "disks" was a mistake, though I didn't find any information indicating that it would not work.

I would really appreciate any advice and pointers on how to recover my array.

A closer look at journalctl -xe revealed the following:

Jan 15 22:41:15 server sudo[1612]:   andrei : TTY=tty1 ; PWD=/home/andrei ; USER=root ; COMMAND=/sbin/mdadm --assemble /dev/md1 --force /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/vg_sdi_sdj/all /dev/vg_sdk_sdl/all
Jan 15 22:41:15 server sudo[1612]: pam_unix(sudo:session): session opened for user root by andrei(uid=0)
Jan 15 22:41:15 server kernel: md: md1 stopped.
Jan 15 22:41:15 server kernel: md: bind<dm-1>
Jan 15 22:41:15 server kernel: md: bind<sdd1>
Jan 15 22:41:15 server kernel: md: bind<sdg1>
Jan 15 22:41:15 server kernel: md: bind<sdh1>
Jan 15 22:41:15 server kernel: md: bind<sdf1>
Jan 15 22:41:15 server kernel: md: bind<dm-0>
Jan 15 22:41:15 server kernel: md: bind<sde1>
Jan 15 22:41:15 server kernel: md: bind<sdb1>
Jan 15 22:41:15 server mdadm[879]: NewArray event detected on md device /dev/md1
Jan 15 22:41:15 server mdadm[879]: DegradedArray event detected on md device /dev/md1
Jan 15 22:41:15 server kernel: md/raid:md1: reshape will continue
Jan 15 22:41:15 server kernel: md/raid:md1: device sdb1 operational as raid disk 0
Jan 15 22:41:15 server kernel: md/raid:md1: device sde1 operational as raid disk 7
Jan 15 22:41:15 server kernel: md/raid:md1: device dm-0 operational as raid disk 6
Jan 15 22:41:15 server kernel: md/raid:md1: device sdf1 operational as raid disk 5
Jan 15 22:41:15 server kernel: md/raid:md1: device sdh1 operational as raid disk 4
Jan 15 22:41:15 server kernel: md/raid:md1: device sdg1 operational as raid disk 3
Jan 15 22:41:15 server kernel: md/raid:md1: device sdd1 operational as raid disk 2
Jan 15 22:41:15 server kernel: md/raid:md1: allocated 8606kB
Jan 15 22:41:15 server kernel: md/raid:md1: raid level 5 active with 7 out of 8 devices, algorithm 2
Jan 15 22:41:15 server kernel: RAID conf printout:
Jan 15 22:41:15 server kernel:  --- level:5 rd:8 wd:7
Jan 15 22:41:15 server kernel:  disk 0, o:1, dev:sdb1
Jan 15 22:41:15 server kernel:  disk 1, o:1, dev:dm-1
Jan 15 22:41:15 server kernel:  disk 2, o:1, dev:sdd1
Jan 15 22:41:15 server kernel:  disk 3, o:1, dev:sdg1
Jan 15 22:41:15 server kernel:  disk 4, o:1, dev:sdh1
Jan 15 22:41:15 server kernel:  disk 5, o:1, dev:sdf1
Jan 15 22:41:15 server kernel:  disk 6, o:1, dev:dm-0
Jan 15 22:41:15 server kernel:  disk 7, o:1, dev:sde1
Jan 15 22:41:15 server kernel: created bitmap (30 pages) for device md1
Jan 15 22:41:15 server kernel: md1: bitmap initialized from disk: read 2 pages, set 7 of 59615 bits
Jan 15 22:41:16 server kernel: md1: detected capacity change from 0 to 20003257057280
Jan 15 22:41:16 server kernel: md: reshape of RAID array md1
Jan 15 22:41:16 server kernel: md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Jan 15 22:41:16 server kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape.
Jan 15 22:41:16 server kernel: md: using 128k window, over a total of 3906886144k.
Jan 15 22:41:16 server mdadm[879]: RebuildStarted event detected on md device /dev/md1
Jan 15 22:41:16 server sudo[1612]: pam_unix(sudo:session): session closed for user root
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589312 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589320 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589328 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589336 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589344 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589352 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589360 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589368 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759589376 on sdf1)
Jan 15 22:41:23 server kernel: md/raid:md1: read error corrected (8 sectors at 759582288 on sdf1)
...
Jan 15 22:43:36 server kernel: INFO: task md1_reshape:1637 blocked for more than 120 seconds.
Jan 15 22:43:36 server kernel:       Not tainted 4.4.0-59-generic #80-Ubuntu
Jan 15 22:43:36 server kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 15 22:43:36 server kernel: md1_reshape     D ffff88021028bb68     0  1637      2 0x00000000
Jan 15 22:43:36 server kernel:  ffff88021028bb68 ffff88021028bb80 ffffffff81e11500 ffff88020f5e8e00
Jan 15 22:43:36 server kernel:  ffff88021028c000 ffff8800c6993288 ffff88021028bbe8 ffff88021028bd14
Jan 15 22:43:36 server kernel:  ffff8800c6993000 ffff88021028bb80 ffffffff818343f5 ffff8802144c7000
Jan 15 22:43:36 server kernel: Call Trace:
Jan 15 22:43:36 server kernel:  [<ffffffff818343f5>] schedule+0x35/0x80
Jan 15 22:43:36 server kernel:  [<ffffffffc01d2fec>] reshape_request+0x7fc/0x950 [raid456]
Jan 15 22:43:36 server kernel:  [<ffffffff810c4240>] ? wake_atomic_t_function+0x60/0x60
Jan 15 22:43:36 server kernel:  [<ffffffffc01d346b>] sync_request+0x32b/0x3b0 [raid456]
Jan 15 22:43:36 server kernel:  [<ffffffff81833d46>] ? __schedule+0x3b6/0xa30
Jan 15 22:43:36 server kernel:  [<ffffffff8140c305>] ? find_next_bit+0x15/0x20
Jan 15 22:43:36 server kernel:  [<ffffffff81704c5c>] ? is_mddev_idle+0x9c/0xfa
Jan 15 22:43:36 server kernel:  [<ffffffff816a20fc>] md_do_sync+0x89c/0xe60
Jan 15 22:43:36 server kernel:  [<ffffffff810c4240>] ? wake_atomic_t_function+0x60/0x60
Jan 15 22:43:36 server kernel:  [<ffffffff8169e689>] md_thread+0x139/0x150
Jan 15 22:43:36 server kernel:  [<ffffffff810c4240>] ? wake_atomic_t_function+0x60/0x60
Jan 15 22:43:36 server kernel:  [<ffffffff8169e550>] ? find_pers+0x70/0x70
Jan 15 22:43:36 server kernel:  [<ffffffff810a0c08>] kthread+0xd8/0xf0
Jan 15 22:43:36 server kernel:  [<ffffffff810a0b30>] ? kthread_create_on_node+0x1e0/0x1e0
Jan 15 22:43:36 server kernel:  [<ffffffff8183888f>] ret_from_fork+0x3f/0x70
Jan 15 22:43:36 server kernel:  [<ffffffff810a0b30>] ? kthread_create_on_node+0x1e0/0x1e0
ubuntu
lvm
mdadm
asked on Server Fault Jan 16, 2017 by Ghostrider • edited Jan 16, 2017 by Ghostrider

1 Answer

8

Using LVM for this was indeed a mistake. Not only does it create an unnecessarily complicated storage stack for anyone other than its creator, but MD arrays are also assembled before LVM volumes are activated at boot, requiring you to manually invoke an MD scan on the LVs that are acting as MD members.
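To illustrate the ordering problem: if the machine comes up with the array unassembled because its LV-backed members didn't exist yet at MD assembly time, the manual workaround is to reverse the order by hand. A rough sketch, assuming the VG names from the question:

```shell
# Activate all volume groups first, so the LVs that act as MD
# members (/dev/vg_sdi_sdj/all, /dev/vg_sdk_sdl/all) exist.
vgchange -ay

# Only now can mdadm find every member during a scan.
mdadm --assemble --scan
```

This has to be repeated (or scripted into the boot sequence) on every boot for as long as LVs remain members of the array, which is one more reason to back out of this configuration.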

Additionally, avoid using kernel device names (sda, sdb, and so on) in persistent configurations. This is especially relevant when naming a volume group, as VGs abstract away the underlying storage and can be freely moved across PVs. Kernel device names are not considered permanent and can change at any time for various reasons. This isn't a problem for the LVM PVs themselves (they're found by a wholesale disk scan that will pick up just about anything), but a VG named after kernel device names will quickly stop reflecting reality, as the situation you've created shows.

I would recommend that you attempt to gracefully remove the LVs from your MD array and bring it back to a degraded (but sane) state. Be aware that MD on top of LVM isn't a configuration anyone pays attention to when fixing bugs. You're in uncharted territory, and things you expect to work may fail for no apparent reason.
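The graceful removal would normally look like the sequence below. This is only a sketch: mid-reshape, the fail/remove steps may be refused or may hang exactly like the reshape itself, so treat it as the goal state rather than a guaranteed procedure (sdX1 is a placeholder for a real replacement disk, not a device from the question):

```shell
# Mark one LVM-backed member as failed, then detach it from the array.
mdadm /dev/md1 --fail /dev/vg_sdk_sdl/all
mdadm /dev/md1 --remove /dev/vg_sdk_sdl/all

# Hand its slot to a plain partition on a real disk (placeholder name).
mdadm /dev/md1 --add /dev/sdX1
```

Repeat for the second LV once the first replacement has resynced, so the array is never more than one member short of what RAID5 can tolerate.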

If this data is critical and not backed up, you may want to defer to someone on-site who knows LVM and MD really, really well. I'm assuming you don't have that person, since you're asking here, so feel free to continue the conversation if you need to; I'll update this answer with any interesting details if you have to go that route. For now, just try to backpedal by replacing the LVM mess with plain old disks as members.

answered on Server Fault Jan 16, 2017 by Spooler

User contributions licensed under CC BY-SA 3.0