Advice on building a RAID 5 server.
December 13, 2005 9:39 AM
I have 6 Western Digital PATA 250GB hard drives and I'd like to build a RAID 5 server. Has anyone done anything similar, and does anyone have advice to impart on what components worked (or didn't work) for them?
Promise RAID cards make this simple. As well as an adequate power supply, make sure you have good ventilation in your case; six drives spinning all the time generate a lot of heat. If you haven't got a motherboard yet, I've heard good things about the Intel on-board RAID solutions, but I don't think they support more than 4 disks; something to check into, though.
Plan and test what you're going to do when one drive fails. Play around with disconnecting a drive and replacing it with a different one now, when there is no pressure and you don't have a terabyte of data at risk.
posted by Mitheral at 10:14 AM on December 13, 2005
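A minimal sketch of the failure drill Mitheral describes, assuming a Linux software-RAID array managed with mdadm; the array and member device names below are placeholders, since the question doesn't name an OS or controller:

    import subprocess

    ARRAY = "/dev/md0"     # hypothetical md array device
    VICTIM = "/dev/hdc1"   # hypothetical member partition to "fail" for practice

    def run(*cmd):
        # Echo each command before running it, so the drill is easy to follow.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Mark one member as failed, as if the drive had actually died.
    run("mdadm", "--manage", ARRAY, "--fail", VICTIM)
    # 2. Pull the failed member out of the array.
    run("mdadm", "--manage", ARRAY, "--remove", VICTIM)
    # 3. Add the replacement (here the same partition, re-added) and let the
    #    array rebuild; /proc/mdstat shows the resync progress.
    run("mdadm", "--manage", ARRAY, "--add", VICTIM)
    print(open("/proc/mdstat").read())

Running through that once while the array is still empty is much less stressful than doing it for the first time with a terabyte of data on the line.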
I set mine up in 2001 with a Promise SX6000 RAID card. No longer available retail, but still working away happily.
My experience with the low-end PATA drives isn't so good, though. I have a well-ventilated server case, and so far 3 out of 6 drives have failed, at roughly 12-month intervals. I hope you have a 3+ year warranty!
Also, the UPS is definitely a necessity. It's also probably a good thing to schedule a drive synchronisation every so often, especially if there's a system crash or unexpected restart.
I also maintain a near-line FireWire box backup, using Win2K3's software RAID. Performance is not noticeably different, but that's probably because the SX6000 is definitely bus-limited on an old vanilla PCI channel. Striped reads across 6+ drives will saturate 32-bit/33MHz PCI easily, so if extracting maximum performance is crucial for you, get a card with a faster bus.
I also have a mid-1990s vintage SCSI RAID with Cheetah 15K RPM drives on the same system, and its performance for web serving and database access far exceeds that of the PATA subsystem. Seeks seem to be quite slow with the Promise system (a combination of the disks and the card), but once a transfer gets going, it is fast. Very suitable for large media files.
When you mount the drive, you have two choices in Windows: mount it as a regular letter ("D:") drive, or mount it into a directory à la Unix. That way, you can kind of "graft" it onto your C: drive. For some braindead programs, that's convenient.
posted by meehawl at 10:31 AM on December 13, 2005
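Rough numbers behind meehawl's bus-saturation point; the per-drive throughput figure is an assumption for a 2005-era 250GB PATA drive, not something measured in this thread:

    PCI_BUS_MBPS = 133.0     # 32-bit x 33 MHz PCI tops out around 133 MB/s
    PER_DRIVE_MBPS = 55.0    # assumed sustained read rate for one PATA drive
    DRIVES = 6

    aggregate = DRIVES * PER_DRIVE_MBPS
    print(f"aggregate striped read: {aggregate:.0f} MB/s "
          f"vs. PCI ceiling {PCI_BUS_MBPS:.0f} MB/s")
    # Roughly 330 MB/s of raw disk bandwidth against a ~133 MB/s bus: the bus,
    # not the disks, becomes the bottleneck, which matches the observation above.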
Oh yeah, one more note about replacing the drives (if necessary): make sure you label the cables at both ends, and label each drive. You do not want to be in the position of playing around with insertion order trying to get your stripe set back!
posted by meehawl at 10:32 AM on December 13, 2005
I don't believe those cheap cards can actually do RAID 5. Usually it's just RAID 0 or 1, which sacrifice reliability or waste space, respectively.
posted by delmoi at 10:34 AM on December 13, 2005
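To put numbers on "waste space or reliability" for the six 250GB drives in the question (a sketch; RAID 1 is taken here as three mirrored pairs):

    DRIVE_GB = 250
    N = 6

    raid0 = N * DRIVE_GB          # stripe: all the capacity, no redundancy
    raid1 = (N // 2) * DRIVE_GB   # mirrored pairs: half the capacity
    raid5 = (N - 1) * DRIVE_GB    # one drive's worth of space goes to parity

    print(f"RAID 0: {raid0} GB usable, survives no drive failures")
    print(f"RAID 1: {raid1} GB usable, survives one failure per mirrored pair")
    print(f"RAID 5: {raid5} GB usable, survives any single failure")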
Depends on what OS you're going to use. Solaris? FreeBSD? Linux? Windows Server 2003?
I have heard that Promise cards are total crap and very poorly supported. They're cheaper, but should be avoided at all costs. This is what several geek friends have told me. Same with the lower end Adaptec cards.
However, 3ware makes very nice cards that provide actual hardware RAID, unlike the cheaper "RAID" cards that do a lot of it in software.
3ware stuff comes very highly recommended from my geek friends.
I have actually given up on ATA RAID and gone with SCSI instead, but since you already have the drives... I'd suggest the 3ware route.
posted by drstein at 10:35 AM on December 13, 2005
Hot spare, and a cold spare on the shelf. I inherited a Promise-based IDE array without any spare, and when one member died, there was no way to replace it with a spindle of identical specifications. Close only counts in horseshoes and hand grenades, definitely not with IDE RAID arrays. IDE disks aren't smart enough to make up the difference like SCSI disks can. (That's the reason IDE is cheaper than SCSI, btw: less smarts in the on-board spindle controller.)
In my strict professional opinion, anything worth RAIDing is worth putting on SCSI disks. In my more relaxed personal opinion, you can use IDE RAID so long as you understand that it's there to keep you afloat long enough to bail all your data out to an alternate location when something dies. Hopefully that's far enough in the future that TB disks will exist by then.
Since you have enough spindles, you might consider going RAID 1+0, a stripe of mirrors. That can withstand a second simultaneous disk failure as long as both members of a mirror pair don't die together, whereas RAID 5 can only lose one.
posted by Triode at 10:37 AM on December 13, 2005
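A quick sanity check on that fault-tolerance comparison (the mirror pairing below is an assumed layout): a six-drive RAID 1+0 survives a double failure only when the two dead disks sit in different mirror pairs, while RAID 5 never survives losing two at once.

    from itertools import combinations

    disks = range(6)
    pairs = [(0, 1), (2, 3), (4, 5)]   # assumed mirror pairs for RAID 1+0

    def raid10_survives(dead):
        # The array lives as long as no mirror pair loses both of its members.
        return all(not (a in dead and b in dead) for a, b in pairs)

    def raid5_survives(dead):
        # Single parity: a second concurrent failure is always fatal.
        return len(dead) <= 1

    failures = list(combinations(disks, 2))
    ok10 = sum(raid10_survives(set(f)) for f in failures)
    ok5 = sum(raid5_survives(set(f)) for f in failures)
    print(f"RAID 1+0 survives {ok10} of {len(failures)} two-disk failures")  # 12 of 15
    print(f"RAID 5   survives {ok5} of {len(failures)} two-disk failures")   # 0 of 15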
Anyway, here's a Tom's Hardware matchup of various RAID boards from this October. Tom's was always helpful when I was totally into PC stuff in high school.
posted by delmoi at 10:38 AM on December 13, 2005
Triode: Some of those cards actually support RAID 6, which can withstand up to two disk failures at once.
posted by delmoi at 10:44 AM on December 13, 2005
Expanding a bit on the idea of professional/personal opinion with respect to PATA/SCSI: if it has data on it that represents the livelihood of yourself or a corporation, suck it up and buy SCSI. If it's just for backups of your DVD collection... cheap is OK. I don't trust PATA disks to keep a roof over my head or food on the table.
posted by Triode at 10:44 AM on December 13, 2005
Neat! I hadn't seen Double Parity. Thanks, Delmoi. Does it have a double penalty for write XORs, or is it single-hit like R5, written twice?
posted by Triode at 10:47 AM on December 13, 2005
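For what it's worth, the usual read-modify-write bookkeeping (general background, not something confirmed in this thread): a small random write to RAID 5 costs four I/Os, and the second parity block in RAID 6 pushes that to six.

    def small_write_ios(parity_blocks):
        # Read the old data block and each old parity block, then write the
        # new data block and each new parity block (caching and full-stripe
        # writes ignored).
        reads = 1 + parity_blocks
        writes = 1 + parity_blocks
        return reads + writes

    print("RAID 5:", small_write_ios(1), "I/Os per small random write")  # 4
    print("RAID 6:", small_write_ios(2), "I/Os per small random write")  # 6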
"I don't believe those cheap cards can actually do RAID 5."
Promise makes a bunch of low-end cards, but it also makes high-end hardware-assisted RAID 5 and RAID 10 cards and some fancy, pricey enclosures.
posted by meehawl at 11:22 AM on December 13, 2005
I've heard only good things about 3Ware.
At one point I had 4 160GB drives from various manufacturers in a RAID 5 array using EVMS (a volume management suite) on Linux. The performance was good (it easily filled up my lowly 100Mbit network, and this was on a P2-450). It ran well for over a year. I then moved and one of the drives crashed; I didn't have the machine on the UPS, the power died, and then the whole array went bad.
I've lost plenty of drives in the past few years, mostly due to heat issues. Never lost any drives in the RAID to heat, but I had a hard drive cooler on each drive.
Moral of the story: software RAID sucks, but if you're careful (I wasn't) it is suitable for non-mission-critical data. And DEFINITELY have a UPS.
posted by kableh at 11:57 AM on December 13, 2005
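One cheap way to be careful with Linux software RAID is to watch for degraded arrays. A minimal sketch that scans /proc/mdstat, where the md driver prints member maps like [UUUU] with an underscore marking a dead slot (the alerting side is left as an exercise):

    import re
    import sys

    def degraded_arrays(mdstat_path="/proc/mdstat"):
        # A member map like "[UU_U]" means one slot has failed or is missing.
        text = open(mdstat_path).read()
        return re.findall(r"\[[U_]*_[U_]*\]", text)

    bad = degraded_arrays()
    if bad:
        print("degraded md array member maps:", bad, file=sys.stderr)
        sys.exit(1)
    print("all md arrays look healthy")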
There is a pretty good argument for software RAID managed by the OS. If your hardware fails (a very rare event, to be sure) you can always just grab any old PC and install the OS. Any PC that you might consider for a dedicated file server should have plenty of power to keep the performance respectable. I have even read articles indicating that OS-managed RAID 5 performs better than many (or all) of the available hardware solutions, though that is bound to change from month to month, so even if I could find them the information would be out of date.
In NT 4 you create a floppy that has your drive configuration on it. It is that floppy that tells a new install how to find the existing redundant array on the hard drives. I'm not sure the restore floppy is actually required, but it is easy, so...
I had an NT 4-based redundant array for a long time and it worked fine. I even had a disk fail in that array, and reconstruction was simple. I killed that arrangement when I upgraded my drives this summer. I only have two SATA ports, and they are on a Promise RAID 0/1 card, so replicating my old arrangement didn't seem to make sense.
posted by Chuckles at 12:47 PM on December 13, 2005
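As an aside, the Linux software-RAID analogue of that NT 4 configuration floppy (Chuckles is describing Windows): md writes a superblock on each member disk, and mdadm can regenerate the configuration lines from the disks themselves.

    import subprocess

    # Scan the disks for md superblocks and print ARRAY definition lines
    # (e.g. "ARRAY /dev/md0 UUID=...") that can be saved into mdadm.conf;
    # that file plays roughly the role of the NT 4 floppy.
    scan = subprocess.run(
        ["mdadm", "--examine", "--scan"],
        capture_output=True, text=True, check=True,
    )
    print(scan.stdout)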
I recently picked up a Promise SATA RAID 5 card (can't think of the model # at the moment). It runs pretty sweet with 4x250GB under Windows Server 2003.
Also, I want a leechslot :P
posted by starscream at 4:14 PM on December 13, 2005
software raid is as good as, or better than, the low-end "hardware" raid cards. 3ware is the man, but it does not come cheap. lsi also makes some very solid hardware raid cards, and they are more affordable than 3ware. promise and highpoint are utter crap at the low end, and if you want high end then lsi is equally affordable and much better quality.
right now I have 4 200gb pata wd drives in a software raid5 using md (useful things like lvm or evms can run on top of that). for a while I had the drives running on multiple cheap silicon-image pata cards. currently each drive is external with its own cheap pata-to-usb2.0 adapter, and they are all plugged into a usb2.0 hub. I copied the raidtab and mdadm.conf to yet another computer, and when I plug in the usb hub the raid builds and mounts itself nicely. nothing needed to be reconfigured, and the internal raid is now an external one.
kableh -- it sounds like you had drive issues, not software issues; so what was wrong with the software side?
Triode -- I still think you can judge on warranty... sure, the current pata drives with a 1-year warranty are not to be trusted, but a nice sata or even pata drive with a 5-year warranty, I will trust. while I used to only buy scsi, I must admit that ata has slowly but surely been getting better.
posted by dorian at 5:35 PM on December 13, 2005
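A sketch of the reassembly step dorian describes, assuming the copied mdadm.conf is in place and the USB enclosures are plugged in; the array device and mount point are placeholders, and many distros will do all of this automatically:

    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("mdadm", "--assemble", "--scan")    # assemble the arrays listed in mdadm.conf
    run("mount", "/dev/md0", "/mnt/raid")   # assumed array device and mount point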