I’ll just quickly do this and be down in a minute.

I decided a few weeks back to switch to using LVM on top of Linux software RAID on my desktop machine, mainly to provide redundancy. I have lots of DVB recordings and digital video from our LUG talks that is impractical to back up but is worth protecting against a dead hard disk. I was using two 300GB SATA disks in an LVM volume group, so I purchased two more SATA disks with a view to running all four disks in a RAID5 array, and came up with the following migration strategy (rough command sketches for the main stages follow the list):

1. Back up the important stuff.
2. Copy enough data off the “data” VG that what remains fits onto /dev/sda (279GB).
3. Remove /dev/sdb from the volume group. (Resize the FS, shrink the LV, then vgreduce and pvremove.)
4. Create one primary partition on each of /dev/sdb, /dev/sdc and /dev/sdd, sized about 5GB short of the full disk.
5. Create the RAID5 array (with one disk missing) across /dev/sdb1, /dev/sdc1 and /dev/sdd1: sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcd]1 missing. (Partitions of 35768 cylinders = 273.99679184 gigabytes.)
6. Set the partition type to “fd” (Linux RAID autodetect).
7. Set up /dev/md0 as an LVM PV (pvcreate).
8. Add /dev/md0 into the “data” volume group (vgextend).
9. Move data from /dev/sda to /dev/md0 (pvmove, or by removing /dev/sda “manually”).
10. Remove /dev/sda from the VG (vgreduce, then pvremove).
11. Create a primary partition on /dev/sda as before.
12. Add /dev/sda into the RAID array: sudo mdadm /dev/md0 -a /dev/sda1
13. Wait for array to rebuild.
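
For steps 2 and 3, the shrink-and-remove dance is roughly the following. The LV name (“video”), the ext3 filesystem and the sizes are only placeholders here, not necessarily what the real setup looked like:

    # unmount and check the filesystem before shrinking (LV name is a placeholder)
    sudo umount /dev/data/video
    sudo e2fsck -f /dev/data/video
    # shrink the filesystem first, then the LV, so everything fits on /dev/sda
    sudo resize2fs /dev/data/video 250G
    sudo lvreduce -L 260G /dev/data/video
    # push any remaining extents off /dev/sdb, then drop it from the VG
    sudo pvmove /dev/sdb
    sudo vgreduce data /dev/sdb
    sudo pvremove /dev/sdb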
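
Steps 4 to 8 then come down to something like this (fdisk keystrokes not shown; partition sizes as described above):

    # one primary partition on each new disk, ending ~5GB short of the end,
    # partition type fd (Linux raid autodetect) -- repeat for sdc and sdd
    sudo fdisk /dev/sdb
    # create the degraded four-device RAID5 array across the three partitions
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcd]1 missing
    # label the array as an LVM physical volume and add it to the "data" VG
    sudo pvcreate /dev/md0
    sudo vgextend data /dev/md0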
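
And steps 9 to 13, again only as a sketch:

    # migrate every extent from /dev/sda onto the array, then drop the old PV
    sudo pvmove /dev/sda /dev/md0
    sudo vgreduce data /dev/sda
    sudo pvremove /dev/sda
    # partition /dev/sda like the other three disks, then add it to the array
    sudo mdadm /dev/md0 -a /dev/sda1
    # watch the rebuild progress
    cat /proc/mdstat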

This seemed like a good idea at the time. What actually happened is detailed here. And I owe Hugo Mills thanks for his sage advice during the process.
