How do I set up this server?
August 22, 2009 4:01 PM

I have a Dell 2900 with space for 8 drives (PERC 5/i). Is there any kind of expandable RAID configuration I can use so I don't have to buy eight 2TB drives immediately? Any ideas on how to do this? I would like something with the equivalent of RAID5-level protection.

If I bought eight 2TB hard drives now I'd have 14TB of usable space in RAID5. I would, however, like to expand in stages.
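
For reference, here's the back-of-envelope math I'm using (rough Python, ignoring formatting overhead and the decimal-vs-binary TB difference):

    def raid5_usable_tb(num_drives, drive_tb):
        """RAID 5 keeps roughly one drive's worth of capacity for parity."""
        return (num_drives - 1) * drive_tb

    print(raid5_usable_tb(8, 2))  # 14 TB if I buy all eight 2TB drives now
    print(raid5_usable_tb(4, 2))  # 6 TB if I start with four and grow later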

I'd like to allocate something like 1TB (or probably quite a bit less) for application servers (Windows 2k3, to serve up my Squeezebox library to network music players) and maybe a couple of development servers to test stuff out on. I can afford to lose all of this, but not my data, which I'd like to keep in an OpenFiler-type environment and expand as I need it.

If I want to run servers I'll need some sort of virtualization hypervisor like ESXi or Hyper-V, the latter of which I've never used. How do these deal with storage and such? How do I want them to deal with storage?

I was thinking about, perhaps, creating two RAID sets: installing ESXi on the first along with any servers I want to run on top of it, and somehow attaching the second RAID set to OpenFiler and expanding it as I need to. Do you see what I'm trying to get at here? Is there some way I can achieve that kind of functionality?

This is all for personal use at home, so I can afford downtime.

Or, I just thought of this: is there a way that OpenFiler* can deal with this sort of expansion itself? In that case, would I be best off running ESXi from some sort of flash drive and letting everything else install into the OpenFiler storage space?

*Choice of OpenFiler is totally arbitrary; I can deal with other products if they'll manage this better.
posted by geoff. to Computers & Internet (4 answers total) 1 user marked this as a favorite
 
Ah, datacenter in a box. Fun!

Expanding or resizing raid 5 sets on arrays like this usually means destroying and recreating the raid set in its entirety.

If you start with 4 drives, you could set up a 3+1 raid 5 set and then split it up with virtual drives as you like: the card presents those virtual drives to whatever OS or Hypervisor loads the PERC driver.

If you come along later and add 4 more drives, you'll create another raid 5 set and virtual drives in the set.

I think with all the SATA disks and Hypervisor fun you'll be playing with, you're not going to be impressed with your disk throughput.

Bite the bullet and buy all eight drives. Make a 6+1 RAID5 set and keep the 8th drive as a hot spare. Create a 1TB virtual drive for your more precious stuff (you will end up using it all) and another two or three virtual drives to use as playgrounds. Anything else and you're asking for trouble later on in the form of 'evolved' complexity biting you in the behind... administratively speaking.

My $0.02 USD, your mileage may vary, for external use only, etc. etc.
posted by HannoverFist at 5:47 PM on August 22, 2009


I haven't ever used the PERC 5/i, but I have a very similar (but older) setup, built around a PowerEdge 2300 and a PERC 2/SC, that I use for backups. At least with that card, the array is growable. (I had an Ask question about it a while back, actually.)

There are a lot of issues you get into when you talk about growing an array, though. Even if the hardware supports growing the logical array, you still need to increase the partition size in your LVM system (if you use one) and then you need to grow the actual filesystem. If your filesystem doesn't support nondestructive growth, you're hosed. It's kind of a dangerous process and not one that most vendors recommend doing on a production system even if it's possible with their tools.

This thread suggests that there is an option in the Dell management utility to add a drive to an array, but as I suspected it does come with some warnings that it is not guaranteed to work.

Adding the drive to the array is sort of step 1. Then you'd need to increase the partition size on the "drive" (the RAID partition, which the OS sees as a physical drive/device). And then you need to use your filesystem tools to grow the actual fs. Depending on the filesystem type this may need to be done offline (unmount the device, grow it, then remount), although most modern filesystems support some type of online resize. Ext3 and JFS (which I use) both do if you use Linux as your VM host, but I don't know what ESX is capable of. (My suspicion is that it probably has some capability for increasing storage capacity, but exactly how you'd do it I'm not sure.)
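
To make the order of operations concrete, here's a rough sketch for a Linux guest using LVM and ext3. The device, volume group, and LV names are placeholders I made up for illustration, it assumes the LVM physical volume sits directly on the RAID device with no partition table in between (otherwise grow the partition first), and you should back up before attempting any of it:

    import subprocess

    # 1. Tell LVM that the physical volume (the RAID "drive" the OS sees) grew.
    subprocess.check_call(["pvresize", "/dev/sdb"])

    # 2. Grow the logical volume into the newly available extents.
    subprocess.check_call(["lvextend", "-l", "+100%FREE", "/dev/datavg/data"])

    # 3. Grow the ext3 filesystem to fill the logical volume.
    #    Recent kernels can do this online; older setups need an unmount first.
    subprocess.check_call(["resize2fs", "/dev/datavg/data"])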

Alternately, you can avoid a whole lot of pain if you're willing to forgo hardware RAID. If whatever hypervisor you decide to go with has software RAID capability, then you could just tell the PERC to pass each disk through as its own logical volume and avoid one level of abstraction. Although I'm happy with hardware RAID on my 2300 (because it's only a 2xPIII), I think that in my next home-server iteration I'll probably do software RAID of some sort. The performance advantages of hardware RAID just don't seem to be worth the complexity for most mixed workloads on modern hardware.
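
If you do go the software RAID route on Linux, the mdadm side is pretty simple. A minimal sketch, assuming the PERC hands the disks through as /dev/sdb through /dev/sde (all device names here are placeholders):

    import subprocess

    disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

    # Build the initial 3+1 software RAID 5 array.
    subprocess.check_call(["mdadm", "--create", "/dev/md0", "--level=5",
                           "--raid-devices=" + str(len(disks))] + disks)

    # Later, to expand: add another disk and reshape the array around it,
    # then grow LVM / the filesystem on top as described above.
    subprocess.check_call(["mdadm", "--add", "/dev/md0", "/dev/sdf"])
    subprocess.check_call(["mdadm", "--grow", "/dev/md0", "--raid-devices=5"])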
posted by Kadin2048 at 9:09 PM on August 22, 2009


but I don't know what ESX is capable of. (My suspicion is that it probably has some capability for increasing storage capacity, but exactly how you'd do it I'm not sure.)

In the ESX3.x world, you'd add an extent to a VMFS datastore -- but try to avoid this as it's harder to manage, it risks availability of VMs if the extent goes offline, and you can't remove an extent without wiping the whole volume it's attached to.

ESX4 allows "volume grow" which is more like an online resize.

Here's lots of good info on storage best practices for ESX.
posted by edverb at 7:24 AM on August 23, 2009


The 2TB drives on Newegg are horrendously slow at 5900 RPM. I have two 750GB drives in there already at 7.5k (or some odd interval). I'm thinking of creating two RAID sets: one RAID 1 on the fast drives for ESXi, and a RAID5 set on the remaining drives. The slower drives, I hope, are still fast enough to stream compressed 1080p video. I'll bite the bullet and purchase all the drives at once, but I'm going to play around with having two RAID sets and see if that gives adequate performance or if there ends up being a bottleneck somewhere.
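
Quick sanity check on the streaming worry (the numbers here are my rough guesses, not measurements):

    # Even a slow 5900 RPM SATA drive should cope with sequential 1080p streaming.
    stream_mbit = 30           # generous estimate for compressed 1080p
    drive_mbyte_per_sec = 80   # conservative sequential read for a 5900 RPM disk

    drive_mbit = drive_mbyte_per_sec * 8
    print("Streams one drive could feed: %d" % (drive_mbit // stream_mbit))
    # The real risk is random I/O from the VMs competing with the streams,
    # not raw sequential throughput.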

So in summary, I'm ditching the idea of growing the array and instead focusing on virtualization performance by keeping it on a separate RAID set from my other data. This beast is also incredibly loud; it sounds like a wind tunnel test.
posted by geoff. at 10:02 AM on August 23, 2009

