Should I use LVM or onboard RAID for redundant storage?
October 19, 2006 10:55 AM

What's the most sensible way to add redundant storage to an existing Linux system using LVM?

I've got a Red Hat Fedora Core 4 system with a single physical 80GB SATA drive. The Red Hat installer defaulted to an LVM configuration, which I'm not very familiar with. I'm used to regular old partitions.

Reading up on LVM, though, it sounds pretty flexible. What is the most sensible option for adding an additional drive as redundant storage to this system?

I was planning on buying another 80GB SATA drive, plugging it in, and trying the onboard BIOS RAID utility out. If possible, I was going to create a RAID set with the two drives, and, if possible, let the RAID utility duplicate the first drive onto the blank one.

The above would tie me to the machine's hardware, though, wouldn't it? Is there a more sensible way to do redundant storage with LVM?

What's the most sensible way to do this that would allow me to easily swap out one of these drives with another if it died?
posted by odinsdream to Computers & Internet (7 answers total)
 
If you do a RAID1, you are not relying on the RAID controller for your data files. You simply have two mirrored drives. If one dies, you keep on running, or drop another one in and rebuild your RAID set.
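
(If the mirror ends up being Linux software RAID rather than the onboard controller, the dead-disk dance with mdadm goes roughly like this; a rough sketch, where /dev/md0 and /dev/sdb1 stand in for your actual array and partition:)

# Check the array's health; a failed mirror shows up here
cat /proc/mdstat

# Mark the dead disk's partition as failed and pull it out of the array
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# Swap the physical drive, partition it like its twin, add it back,
# and the mirror rebuilds itself in the background
mdadm /dev/md0 --add /dev/sdb1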

If you're not familiar with LVM, you're best off not using it. IIRC it's not meant for data redundancy.
posted by mphuie at 12:15 PM on October 19, 2006


Thanks. Of course I'd like to become more familiar with it, and would do so before using it myself. I wasn't sure if it was meant for data redundancy or simply for better organization than plain partitions.
posted by odinsdream at 1:43 PM on October 19, 2006


If it helps at all, this is how I decided to use LVM with two drives on my server. Each drive has 3 partitions:

1. A boot partition, mirrored with software RAID 1.
2. A swap partition. It's unmirrored, but it should be mirrored.
3. A RAID 1 partition that holds an LVM2 physical volume.

That physical volume is assigned to a volume group. That volume group currently holds 3 logical volumes (one for /, another for /var, and a third for /home) and has 100GB of free space.
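
In command terms, the LVM half of that stack is something like this; a sketch only, with made-up names (/dev/md1 for the RAID 1 partition, vg0 for the volume group), not my actual setup:

# The RAID 1 device becomes an LVM physical volume
pvcreate /dev/md1

# The physical volume goes into a volume group, and logical volumes come out of it
vgcreate vg0 /dev/md1
lvcreate -L 10G -n root vg0
lvcreate -L 20G -n var vg0
lvcreate -L 40G -n home vg0

# Whatever you don't allocate stays as free space in the volume group
vgdisplay vg0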

In the future, I can enlarge any one of the existing logical volumes without a major hassle. I can also create a new logical volume, which is probably what I'm going to end up doing for my photo and video storage. I can also add a new set of disks in a RAID1 configuration (or reshuffle things to a RAID5) and use it to hold a new physical volume, which I can add to my existing volume group, increasing the available storage.
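
For instance, growing a volume or adding a new one is only a couple of commands; again a sketch with the same made-up vg0 name, and assuming an ext3 filesystem you grow with resize2fs (unmount first if your version can't resize online):

# Give /home another 20GB out of the volume group's free space, then grow the filesystem
lvextend -L +20G /dev/vg0/home
resize2fs /dev/vg0/home

# Or carve out a brand-new logical volume for photo and video storage
lvcreate -L 50G -n media vg0
mkfs.ext3 /dev/vg0/media

# Later, a new mirrored pair can join the same volume group as another physical volume
pvcreate /dev/md2
vgextend vg0 /dev/md2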

In the past, I migrated my volume group from another set of mirrored disks to the new mirror set and decommissioned the old set.

As for what you could do, one possibility (rough commands are sketched after this list):
0. Back up your crucial data.
1. Add a new drive to your system; skip the hardware RAID.
2. Create a new Linux RAID1 volume using the new disk that is the exact size of your existing LVM physical volume. Force it to start in degraded mode (with just the new disk).
3. Create a new LVM physical volume on the new degraded RAID1 volume.
4. Add the new physical volume to your existing volume group.
5. Migrate the data off the old physical volume, then remove it from the volume group.
6. Use the newly vacated partition to complete the RAID1 volume you created.
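
Roughly, that sequence might look like this; just a sketch with made-up device names (new disk /dev/sdb, existing LVM partition /dev/sda2) and the volume group name the Fedora installer usually picks (VolGroup00), so check yours first:

# 1-2. Partition the new disk (type fd), then build a degraded RAID1 with one disk "missing"
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# 3-4. Put an LVM physical volume on the new array and add it to the existing volume group
pvcreate /dev/md0
vgextend VolGroup00 /dev/md0

# 5. Move everything off the old physical volume, then drop it from the volume group
pvmove /dev/sda2
vgreduce VolGroup00 /dev/sda2

# 6. Change the vacated partition's type to fd and let it complete the mirror
mdadm /dev/md0 --add /dev/sda2
cat /proc/mdstat    # watch the resync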

That's what you could do. Should you? That's for you to answer. Consider that you might have to rebuild your system and restore your backup if something goes wrong. On the other hand, it's pretty cool if you can manage to reconfigure your storage so completely while limiting your downtime to that required to install the physical drive.
posted by Good Brain at 3:28 PM on October 19, 2006


Just an FYI that it's unlikely you'll be able to keep the data that's on the existing drive and pull off either hardware RAID or software RAID (and you'll be very lucky to pull off stable hardware RAID in Linux unless it's a higher-end controller that you know is well supported). Typically the first step in building a RAID is blanking the drives and/or repartitioning them to the RAID partition type (fd instead of 83).
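
Checking what type a partition is now is easy enough, at least; something like this, with /dev/sda standing in for your disk:

# List partitions and their type codes: 83 is plain Linux, fd is Linux raid autodetect
fdisk -l /dev/sda

# In interactive fdisk, the 't' command changes a partition's type code (say, 83 to fd)
# without touching the data; it's building the array on top of it that wipes things.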

Good Brain's comment is doable, assuming that the drive in question is a data drive and not a primary boot disk, but I'd recommend against it as there's definitely a bit of voodoo involved and a lot of places for something to go wrong.
posted by togdon at 4:27 PM on October 19, 2006


I have experience with LVM, but not RAID. LVM makes it trivial to add, delete, and move space around. The LVM HOWTO is pretty good and explains all the pieces. The only thing that annoyed me was the several layers of indirection before you can finally mount something.
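
For what it's worth, each layer has its own tool to look at it, and the thing you finally mount is a couple of names deep; a sketch using the default names the Fedora installer tends to pick (VolGroup00/LogVol00), so substitute your own:

pvdisplay    # physical volumes: the partitions LVM knows about
vgdisplay    # volume groups built out of those physical volumes
lvdisplay    # logical volumes carved out of the groups

# Only the logical volume is actually mountable:
mount /dev/VolGroup00/LogVol00 /mnt/somewhere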

Again, I have no clue how it interacts with RAID, but the HOWTO should be enough to get you going on the LVM side of things.
posted by cschneid at 4:48 PM on October 19, 2006


Seconding Good Brain: root-on-LVM-on-(software-)RAID is a dream to have, and it's really very easy to set up. There are some good howtos on the internet, but things have gotten much less painful with mdadm, LVM2, and recent kernels, so make sure the howtos aren't too out of date.

Choose a journaled filesystem that will let you resize online. But don't use XFS if you don't have a UPS. Oh, and I generally put my swap partition right in LVM.
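
(Swap inside LVM is just another logical volume; roughly, with a made-up volume group name vg0:)

# Carve out a logical volume for swap and turn it on
lvcreate -L 1G -n swap vg0
mkswap /dev/vg0/swap
swapon /dev/vg0/swap

# and an /etc/fstab line so it comes back at boot:
# /dev/vg0/swap  none  swap  defaults  0  0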
posted by cytherea at 8:43 PM on October 19, 2006


togdon: I'm not sure I understand. My experience with on-motherboard SATA RAID is limited, but on the Windows side of things, I bought two blank drives, booted the system into the RAID utility with CTRL+S, created a single RAID volume, and then when Windows booted up, all it was aware of was a single "drive" as big as the two physical drives combined.

I was assuming that Linux would operate similarly - that it would essentially be tricked by the motherboard RAID controller into thinking that the two SATA drives were really a single drive.

So - that was where my fear of being tied to the motherboard's hardware came from.

Perhaps I'm wrong about the behaviour of the onboard RAID - that you'd actually need to load some driver in Linux in order to have access to the drives. I understand you'd need to do this with a PCI SATA RAID card, but I thought this would be different.
posted by odinsdream at 10:04 AM on October 20, 2006

