Wednesday, June 13, 2012

Softraid with mdadm and Linux - Part II

This is the second part of my mdadm tutorial. This part covers the following topics:

Creating and failing RAID5
Creating and failing RAID6
Converting from RAID1 to RAID10
Converting from RAID01 to RAID10

The first part can be found here.

Creating and failing RAID5

For a RAID5 you need at least 3 disks. As in the first part, plain files attached to loop devices serve as disks. To create the files use dd:

# dd if=/dev/zero of=/export/disks/vda bs=1 count=0 seek=134217728
# dd if=/dev/zero of=/export/disks/vdb bs=1 count=0 seek=134217728
# dd if=/dev/zero of=/export/disks/vdc bs=1 count=0 seek=134217728

The three commands above create three empty files, each 128 MB in size. Next use losetup to attach the files as loop devices:

# losetup /dev/loop0 /export/disks/vda
# losetup /dev/loop1 /export/disks/vdb
# losetup /dev/loop2 /export/disks/vdc

Check that the loop devices are available:

# losetup -a
/dev/loop0: [0811]:262146 (/export/disks/vda)
/dev/loop1: [0811]:262147 (/export/disks/vdb)
/dev/loop2: [0811]:262148 (/export/disks/vdc)


With the three loop devices you can create a RAID5 array now:

# mdadm --create /dev/md0 --level=raid5 --raid-devices=3 /dev/loop0 /dev/loop1 /dev/loop2
...
mdadm: array /dev/md0 started.

Check that the softraid device md0 is set up:

# cat /proc/mdstat
...
md0 : active raid5 loop2[3] loop1[1] loop0[0]
      261120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
...
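
You can also ask mdadm itself for a more detailed view of the array (shown here only as a suggestion; the exact fields and values depend on your mdadm version, output shortened):

# mdadm --detail /dev/md0
...
     Raid Level : raid5
   Raid Devices : 3
...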

And check the capacity of /dev/md0:

# fdisk -l /dev/md0

Disk /dev/md0: 267 MB, 267386880 bytes
...
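
The reported size matches what you would expect: mdadm reserves a small amount of space on each 128 MB device for its metadata, leaving 130560 KiB per device here, and RAID5 uses one device's worth of capacity for parity, so the usable size is (3 - 1) * 130560 KiB = 261120 KiB = 267,386,880 bytes. This is exactly the block count shown in /proc/mdstat above.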

This looks good for a RAID5. You can now use the device /dev/md0 to create your partitions etc. But what happens if a disk fails? To simulate a failure, mark any disk as failed (in a real environment you also have to mark a disk as failed before you can replace it):

# mdadm /dev/md0 --fail /dev/loop2
mdadm: set /dev/loop2 faulty in /dev/md0
# cat /proc/mdstat
...
md0 : active raid5 loop2[3](F) loop1[1] loop0[0]
      261120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
...

The loop2 device in the output of mdstat is now marked with the (F) flag. Next remove the disk from the array:

# mdadm /dev/md0 --remove /dev/loop2
mdadm: hot removed /dev/loop2 from /dev/md0

Before you can add a new disk, you need to create one more disk:

# dd if=/dev/zero of=/export/disks/vdd bs=1 count=0 seek=134217728
# losetup /dev/loop3 /export/disks/vdd
# losetup -a
...
/dev/loop3: [0811]:262150 (/export/disks/vdd)

Finally add the new disk to the array:

# mdadm /dev/md0 --add /dev/loop3
mdadm: added /dev/loop3

If you check mdstat now you'll see the recovery process of the new disk:

# cat /proc/mdstat
...
md0 : active raid5 loop3[3] loop1[1] loop0[0]
      261120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [==========>..........]  recovery = 54.2% (71452/130560) finish=0.0min speed=35726K/sec
...
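
If you do not want to re-run cat by hand while the array is rebuilding, you can also follow the progress continuously (just a convenience; watch is part of the procps package on most distributions):

# watch -n 1 cat /proc/mdstat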

After a while the recovery is complete and /dev/loop3 is a part of the array:

# cat /proc/mdstat
...
md0 : active raid5 loop3[3] loop1[1] loop0[0]
      261120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
...
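
If you want to reuse the same loop devices for the RAID6 example in the next section, stop the array and detach the loop devices first. A minimal cleanup sketch (--zero-superblock wipes the old md metadata so the devices look unused again):

# mdadm --stop /dev/md0
# mdadm --zero-superblock /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
# losetup -d /dev/loop0
# losetup -d /dev/loop1
# losetup -d /dev/loop2
# losetup -d /dev/loop3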

Creating and failing RAID6

For a RAID6 you need at least 4 disks. To create the files use dd:

# dd if=/dev/zero of=/export/disks/vda bs=1 count=0 seek=134217728
# dd if=/dev/zero of=/export/disks/vdb bs=1 count=0 seek=134217728
# dd if=/dev/zero of=/export/disks/vdc bs=1 count=0 seek=134217728
# dd if=/dev/zero of=/export/disks/vdd bs=1 count=0 seek=134217728

The four commands above create four empty files, each 128 MB in size. Next use losetup to attach the files as loop devices:

# losetup /dev/loop0 /export/disks/vda
# losetup /dev/loop1 /export/disks/vdb
# losetup /dev/loop2 /export/disks/vdc
# losetup /dev/loop3 /export/disks/vdd

Check that the loop devices are available:

# losetup -a
/dev/loop0: [0811]:262146 (/export/disks/vda)
/dev/loop1: [0811]:262147 (/export/disks/vdb)
/dev/loop2: [0811]:262148 (/export/disks/vdc)
/dev/loop3: [0811]:262149 (/export/disks/vdd)

With the four loop devices you can create a RAID6 softraid now:

# mdadm --create /dev/md0 --level=raid6 --raid-devices=4 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
...
mdadm: array /dev/md0 started.

Check that the softraid device md0 is set up:

# cat /proc/mdstat
...
md0 : active raid6 loop3[3] loop2[2] loop1[1] loop0[0]
      261120 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
...

And check the capacity of /dev/md0:

# fdisk -l /dev/md0

Disk /dev/md0: 267 MB, 267386880 bytes
...
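
Note that the usable capacity is the same as in the RAID5 example above, although one more disk is involved: RAID6 stores two devices' worth of parity, so with four devices of 130560 KiB each the usable size is (4 - 2) * 130560 KiB = 261120 KiB = 267,386,880 bytes.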

This looks good for a RAID6. You can now use the device /dev/md0 to create your partitions etc. But what happens if a disk fails? To simulate a failure, mark any disk as failed (in a real environment you also have to mark a disk as failed before you can replace it):

# mdadm /dev/md0 --fail /dev/loop0
mdadm: set /dev/loop0 faulty in /dev/md0
# cat /proc/mdstat
...
md0 : active raid6 loop3[3] loop2[2] loop1[1] loop0[0](F)
      261120 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [_UUU]
...

The loop0 device in the output of mdstat is now marked with the (F) flag. Next remove the disk from the array:

# mdadm /dev/md0 --remove /dev/loop0
mdadm: hot removed /dev/loop0 from /dev/md0

Before you can add a new disk, you need to create one more disk:

# dd if=/dev/zero of=/export/disks/vde bs=1 count=0 seek=134217728
# losetup /dev/loop4 /export/disks/vde
# losetup -a
...
/dev/loop4: [0811]:262150 (/export/disks/vde)

Finally add the new disk to the array:

# mdadm /dev/md0 --add /dev/loop4
mdadm: added /dev/loop4

If you check mdstat now you'll see the recovery process of the new disk:

# cat /proc/mdstat
...
md0 : active raid6 loop4[4] loop3[3] loop2[2] loop1[1]
      261120 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [_UUU]
      [==========>..........]  recovery = 54.2% (71452/130560) finish=0.0min speed=35726K/sec
...

After a while the recovery is complete and /dev/loop4 is a part of the array:

# cat /proc/mdstat
...
md0 : active raid6 loop4[4] loop3[3] loop2[2] loop1[1]
      261120 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
...
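
One more recovery-related tip before moving on: on larger arrays it can be worth adding an internal write-intent bitmap, so that md only has to resync the regions that actually changed after an unclean shutdown or after re-adding a briefly removed device. This is optional and costs a little write performance:

# mdadm --grow /dev/md0 --bitmap=internal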

Converting from RAID1 to RAID10

Imagine you have the following setup:

# cat /proc/mdstat
...
md0 : active raid1 loop1[1] loop0[0]
      131008 blocks [2/2] [UU]
...

Above you can see a simple RAID1 array containing the two disks /dev/loop0 and /dev/loop1. To convert the RAID1 array into a RAID10 array you have to break the mirror first and use one of the disks to create a RAID10 array. Split the existing mirror /dev/md0 by failing one device and removing it:

# mdadm /dev/md0 --fail /dev/loop1 --remove /dev/loop1
mdadm: set /dev/loop1 faulty in /dev/md0
mdadm: hot removed /dev/loop1
# cat /proc/mdstat
...
md0 : active raid1 loop0[0]
      131008 blocks [2/1] [U_]
...

Create a new, degraded RAID10 array with /dev/loop1 and a missing device:

# mdadm --create /dev/md1 --level=raid10 --raid-devices=2 missing /dev/loop1
...
mdadm: array /dev/md1 started.
# cat /proc/mdstat
...
md1 : active raid10 loop1[1]
      131008 blocks 2 near-copies [2/1] [_U]
     
md0 : active raid1 loop0[0]
      131008 blocks [2/1] [U_]
...

After your new RAID10 array has started, create all necessary partitions and file systems on it:

# fdisk /dev/md1
...
# mkfs.ext4 /dev/md1p1
...

Mount the new file system in a temporary place:

# mount /dev/md1p1 /mnt/tmp

And copy your data (assuming /mnt/hd holds the data). It's probably a good idea to remount /mnt/hd read-only first:

# mount /mnt/hd -o remount,ro
# rsync -av /mnt/hd/ /mnt/tmp/
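
If you want to be sure that nothing was missed before switching over, you can compare both trees (optional, and slow for large amounts of data):

# diff -r /mnt/hd /mnt/tmp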

When all your data has been copied, unmount both file systems and mount the new RAID10 array at the location of the old RAID1 array:

# umount /mnt/hd
# umount /mnt/tmp
# mount /dev/md1p1 /mnt/hd

The last thing to do is to destroy the old RAID1 array and add the disk to the new RAID10 array. First stop the RAID1 array:

# mdadm --stop /dev/md0
mdadm: stopped /dev/md0

Then add the disk to the new RAID10 array:

# mdadm /dev/md1 --add /dev/loop0
mdadm: added /dev/loop0

Check /proc/mdstat:

# cat /proc/mdstat
...
md1 : active raid10 loop0[0] loop1[1]
      131008 blocks 2 near-copies [2/2] [UU]
...

Once the recovery process for the RAID10 array has finished, you have successfully converted a RAID1 array into a RAID10 array.
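
One detail that is easy to forget after such a conversion: the device name has changed from /dev/md0 to /dev/md1, so entries in /etc/fstab and in the mdadm configuration file (/etc/mdadm.conf or /etc/mdadm/mdadm.conf, depending on the distribution) may need to be updated. mdadm can print the ARRAY lines for the currently running arrays, which you can then merge into the configuration file:

# mdadm --detail --scan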

Converting from RAID01 to RAID10

Imagine you have the following setup:

# cat /proc/mdstat
...
md110 : active raid1 md102[1] md101[0]
      392960 blocks [2/2] [UU]
     
md102 : active raid0 loop5[2] loop4[1] loop3[0]
      393024 blocks 64k chunks
     
md101 : active raid0 loop2[2] loop1[1] loop0[0]
      393024 blocks 64k chunks
...

Above you can see a RAID0 array md101 with three disks (/dev/loop[0-2]) and a second RAID0 array md102 with another three disks (/dev/loop[3-5]). On top of the two RAID0 arrays a RAID1 mirror md110 was created. To convert this RAID01 array into a RAID10 array you have to break the mirror first and destroy one of the RAID0 arrays.
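
In case you want to reproduce this starting point, such a RAID0+1 setup can be built roughly like this (a sketch, assuming six 128 MB loop devices prepared with dd and losetup as in the sections above; the md numbers and the 64k chunk size simply match the output shown):

# mdadm --create /dev/md101 --level=raid0 --raid-devices=3 --chunk=64 /dev/loop0 /dev/loop1 /dev/loop2
# mdadm --create /dev/md102 --level=raid0 --raid-devices=3 --chunk=64 /dev/loop3 /dev/loop4 /dev/loop5
# mdadm --create /dev/md110 --level=raid1 --raid-devices=2 /dev/md101 /dev/md102

Now break the mirror by marking one of the two RAID0 arrays as failed: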

# mdadm /dev/md110 --fail /dev/md101
mdadm: set /dev/md101 faulty in /dev/md110

After the RAID0 array md101 has been marked as faulty, remove it from md110 and stop it:

# mdadm /dev/md110 --remove /dev/md101
mdadm: hot removed /dev/md101 from /dev/md110
# mdadm --stop /dev/md101
mdadm: stopped /dev/md101

Check that the RAID1 array md110 has only one RAID0 array left:

# cat /proc/mdstat
...
md110 : active raid1 md102[1]
      392960 blocks [2/1] [_U]
     
md102 : active raid0 loop5[2] loop4[1] loop3[0]
      393024 blocks 64k chunks
...

After the RAID0 array md101 has been stopped you can create a new RAID10 array with missing disks. The order of the devices matters: with the default near layout, adjacent devices form mirror pairs, so each real disk is followed by a missing slot that will later be filled with a disk from the other RAID0 array:

# mdadm --create /dev/md111 --level=raid10 --raid-devices=6 /dev/loop0 missing /dev/loop1 missing /dev/loop2 missing
...
mdadm: array /dev/md111 started.

Check /proc/mdstat:

# cat /proc/mdstat
...
md111 : active raid10 loop2[4] loop1[2] loop0[0]
      393024 blocks 64K chunks 2 near-copies [6/3] [U_U_U_]
     
md110 : active raid1 md102[1]
      392960 blocks [2/1] [_U]
     
md102 : active raid0 loop5[2] loop4[1] loop3[0]
      393024 blocks 64k chunks
...

After your new RAID10 array has started, create all necessary partitions and file systems on it:

# fdisk /dev/md111
...
# mkfs.ext4 /dev/md111p1
...

Then mount the new partition and copy your data (before you start migrating your data from the old RAID01 array to the new RAID10 array, you should unmount the old file system or remount it read-only):

# mount /dev/md111p1 /mnt/tmp
# mount /mnt/md -o remount,ro
# rsync -av /mnt/md/ /mnt/tmp/
...

When all your data has been migrated, unmount both file systems and mount the new RAID10 array at the old location so your users can work again:

# umount /mnt/md
# umount /mnt/tmp
# mount /dev/md111p1 /mnt/md

At this point all your existing data has been migrated from the old RAID01 array to the new RAID10 array. The last thing you need to do is to tear down the old mirror md110 together with the remaining RAID0 array md102:

# mdadm --stop /dev/md110
mdadm: stopped /dev/md110
# mdadm --stop /dev/md102
mdadm: stopped /dev/md102
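
The disks /dev/loop3, /dev/loop4 and /dev/loop5 still carry the metadata of the old RAID0 array md102. Wiping it is an optional precaution, so that nothing ever tries to auto-assemble the old md102 again:

# mdadm --zero-superblock /dev/loop3 /dev/loop4 /dev/loop5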

You can now add the three freed disks to the new RAID10 array:

# mdadm /dev/md111 --add /dev/loop3
mdadm: added /dev/loop3
# mdadm /dev/md111 --add /dev/loop4
mdadm: added /dev/loop4
# mdadm /dev/md111 --add /dev/loop5
mdadm: added /dev/loop5
# cat /proc/mdstat
...
md111 : active raid10 loop5[6] loop4[3] loop3[1] loop2[4] loop1[2] loop0[0]
      393024 blocks 64K chunks 2 near-copies [6/5] [UUUUU_]
      [===>.................]  recovery = 18.7% (24576/131008) finish=0.0min speed=24576K/sec
...

Once the recovery process has finished, you have successfully converted a RAID01 array into a RAID10 array.
