{"id":60,"date":"2004-09-24T02:37:06","date_gmt":"2004-09-23T23:37:06","guid":{"rendered":"http:\/\/void.gr\/kargig\/blog\/?p=60"},"modified":"2004-09-24T02:39:25","modified_gmt":"2004-09-23T23:39:25","slug":"raid5-lvm2-recovery-resize-howto","status":"publish","type":"post","link":"https:\/\/www.void.gr\/kargig\/blog\/2004\/09\/24\/raid5-lvm2-recovery-resize-howto\/","title":{"rendered":"RAID5 + LVM2 + recovery + resize HOWTO"},"content":{"rendered":"<p>I was looking forward to creating a big fileserver with disk crash recovery capabilities. LVM2 with reiserfs partitions couldn&#8217;t do the trick for me. I had 3 200Gb disks &#8220;united&#8221; under a logical volume, and formated them with reiserfs and I want to test what would happen if one disk &#8220;crashed&#8221;. So I created a fake crash..I shut the machine down, pulled the plugs of a disk and rebooted. I managed to see the logical volume using the latest lvm2 sources and the latest version of the device mapper:<\/p>\n<blockquote><p># lvm version<br \/>\nversion  LVM version:     2.00.24 (2004-09-16)<br \/>\nversion  Library version: 1.00.19-ioctl (2004-07-03)<br \/>\nversion  Driver version:  4.1.0\n<\/p><\/blockquote>\n<p>Unfortunately I had no luck in reading the reiserfs partition. The superblock was corrupted and the <i>reiserfsck &#8211;rebuild-sb \/device<\/i> did not work&#8230; Salvation was impossible.<br \/>\nWhile googling the web and trying to find out possible solutions I came up to the wonderful idea of creating a software raid5 array of the 3 disks and have LVM2 on top of the raid. I would lose 1 disk in &#8220;space&#8221;&#8230;but I gained the ability to recover after an error and to be able to add more disks if that was necessary.<\/p>\n<p>Before we continue I must say that it&#8217;s necessary that you HAVE worked before with raid and lvm so some commands are familiar to you. 
This is NOT a step by step guide&#8230;but more like a draft of how things are done. I am not going to explain every little detail&#8230;man pages and google are always around if you have any questions.<\/p>\n<p>Enough of this&#8230;let&#8217;s start. <\/p>\n<li><b>Initialization<\/b><\/li>\n<p>First of all let&#8217;s say that we&#8217;ve got our 3 disks on <i>\/dev\/hde, \/dev\/hdg, \/dev\/hdi<\/i><br \/>\n1) We create 1 partition on each disk covering the total space, using our favorite disk management software (fdisk, cfdisk, etc.). (btw, drives MUST be IDENTICAL).<br \/>\n2) Then it&#8217;s time to create the <i>\/etc\/raidtab<\/i> file. Its contents should look like:<\/p>\n<blockquote><p>raiddev \/dev\/md0<br \/>\n        raid-level 5<br \/>\n        nr-raid-disks 3<br \/>\n        nr-spare-disks 0<br \/>\n        persistent-superblock 1<br \/>\n        chunk-size 32<br \/>\n        parity-algorithm right-symmetric<br \/>\n        device \/dev\/hde1<br \/>\n        raid-disk 0<br \/>\n        device \/dev\/hdg1<br \/>\n        raid-disk 1<br \/>\n        device \/dev\/hdi1<br \/>\n        raid-disk 2\n<\/p><\/blockquote>\n<p>3) Now let&#8217;s create our array:<\/p>\n<blockquote><p> mkraid \/dev\/md0<\/p><\/blockquote>\n<p>4) It&#8217;s time for LVM2 now&#8230;let&#8217;s edit <i>\/etc\/lvm\/lvm.conf<\/i> so that we add support for raid devices. 
My filter line looks like this:<\/p>\n<blockquote><p>    filter = [ &#8220;a|loop|&#8221;, &#8220;a|\/dev\/md0|&#8221;, &#8220;r|.*|&#8221; ]<\/p><\/blockquote>\n<p>5) Start initializing the LVM:<\/p>\n<blockquote><p>pvcreate \/dev\/md0 (you can issue a <i>pvdisplay<\/i> to see if everything is correct)<br \/>\nvgcreate test \/dev\/md0  (you can issue a <i>vgdisplay<\/i> to see if everything is correct)\n<\/p><\/blockquote>\n<p>6) Time to create a small logical volume, just for testing:<\/p>\n<blockquote><p>lvcreate -L15000 -nbig test<\/p><\/blockquote>\n<p>(you can issue an <i>lvdisplay<\/i> to see if everything is correct)<br \/>\n7) Now there&#8217;s something that&#8217;s distro-specific. &#8220;Usually&#8221; lvm is started in the init scripts before software raid. But in our case, on reboot, we want to a) start the raid and then b) start the lvm. I am using gentoo as a distro and gentoo had these things the wrong way round for us&#8230;it first started the lvm and then the raid, which resulted in errors during the boot process. This is easily solved in gentoo by editing <i>\/etc\/init.d\/checkfs<\/i> and moving the part about the LVM below the part about the software raid. The file is really easy to read so I don&#8217;t think anyone will have a problem with that&#8230;<br \/>\n8) Let&#8217;s test what we&#8217;ve done so far&#8230;let&#8217;s format the logical volume we&#8217;ve created with ext3.<\/p>\n<blockquote><p>mke2fs -j \/dev\/test\/big<\/p><\/blockquote>\n<p>9) Make an entry in your <i>\/etc\/fstab<\/i> pointing to the place where you want to mount that logical volume&#8230;and then issue:<\/p>\n<blockquote><p>mount \/dev\/test\/big<\/p><\/blockquote>\n<p>10) You are now ready to start copying data onto that volume&#8230;I&#8217;d suggest you copy 5-10Gb of the 15Gb we&#8217;ve created (remember that -L15000?). <\/p>\n<li><b>Now it&#8217;s time to simulate a crash! 
\ud83d\ude42<\/b><\/li>\n<p>11) We first stop the raid device (after unmounting it and deactivating the logical volume with <i>lvchange -a n \/dev\/test\/big<\/i>):<\/p>\n<blockquote><p>raidstop \/dev\/md0<\/p><\/blockquote>\n<p>12) Let&#8217;s destroy one disk. Open up your favorite disk management tool again and pick one disk to destroy&#8230;let&#8217;s say <i>\/dev\/hdi<\/i>. Delete the partition it already has&#8230;and create a new one. All previous data is now lost!<br \/>\n13) If you want to make sure that you really are destroying everything&#8230;reboot your machine. Upon reboot you should get errors from the software raid and from LVM not being able to activate the volume group <i>&#8220;test&#8221;<\/i>.<br \/>\n14) At the root prompt issue: <\/p>\n<blockquote><p>raidstart \/dev\/md0<\/p><\/blockquote>\n<p>and then do a: <i>cat \/proc\/mdstat<\/i><br \/>\nYou should see something similar to this:<\/p>\n<blockquote><p>cat \/proc\/mdstat<br \/>\nPersonalities : [linear] [raid0] [raid1] [raid5] [multipath]<br \/>\nmd0 : active raid5 hdi1[2] hdg1[1] hde1[0]<br \/>\n      390716672 blocks level 5, 32k chunk, algorithm 3 [3\/3] [UUU]<br \/>\n      [========>&#8230;&#8230;&#8230;&#8230;]  resync = 43.9% (85854144\/195358336) finish=115.9min speed=15722K\/sec\n<\/p><\/blockquote>\n<p>15) When that finishes, raid5 will have rebuilt the array, recovering from the &#8220;faulty&#8221; disk we created and the placement of the &#8220;new&#8221; drive. 
(both the destruction and the new disk placement were done in step 12)<br \/>\n16) Issue: <i>vgscan<\/i><br \/>\nIt will make the volume group active again.<\/p>\n<li><b>Resizing the Logical Volume<\/b><\/li>\n<p>17) Say that you need more space on that logical volume you created&#8230;15Gb is not that much after all&#8230;<\/p>\n<blockquote><p>lvextend -L100G \/dev\/test\/big<\/p><\/blockquote>\n<p>We&#8217;ve now grown that 15Gb logical volume into a 100Gb one&#8230;already feels much better, doesn&#8217;t it?<br \/>\n18) But that&#8217;s not all; we now need to extend the ext3 partition to cover all that &#8220;new space&#8221;:<\/p>\n<blockquote><p> e2fsck -f \/dev\/test\/big ; resize2fs \/dev\/test\/big<\/p><\/blockquote>\n<p>We first check that the partition is ok&#8230;and then resize it to the full extent of the logical volume.<br \/>\n19) We are set! We just need to mount our new partition&#8230;and we now have 100Gb of space! You can extend that even further or create more logical volumes to satisfy your needs.<\/p>\n<li><b>Extend the raid5 array<\/b><\/li>\n<p>This section is to come in a few days&#8230;stay tuned.<\/p>\n<p>I hope that all the above helped you to create a better and more secure fileserver. Comments are much appreciated.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I was looking forward to creating a big fileserver with disk crash recovery capabilities. LVM2 with reiserfs partitions couldn&#8217;t do the trick for me. I had 3 200Gb disks &#8220;united&#8221; under a logical volume, and formatted them with reiserfs, and I wanted to test what would happen if one disk &#8220;crashed&#8221;. 
So I created a [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"ep_exclude_from_search":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-60","post","type-post","status-publish","format-standard","hentry","category-general"],"aioseo_notices":[],"views":9954,"_links":{"self":[{"href":"https:\/\/www.void.gr\/kargig\/blog\/wp-json\/wp\/v2\/posts\/60","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.void.gr\/kargig\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.void.gr\/kargig\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.void.gr\/kargig\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.void.gr\/kargig\/blog\/wp-json\/wp\/v2\/comments?post=60"}],"version-history":[{"count":0,"href":"https:\/\/www.void.gr\/kargig\/blog\/wp-json\/wp\/v2\/posts\/60\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.void.gr\/kargig\/blog\/wp-json\/wp\/v2\/media?parent=60"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.void.gr\/kargig\/blog\/wp-json\/wp\/v2\/categories?post=60"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.void.gr\/kargig\/blog\/wp-json\/wp\/v2\/tags?post=60"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}