An array of choices....
August 17, 2012 7:33 AM
Looking for advice from other mefites on how to best use the additional disks I've purchased to expand my current home Linux server. RAID5/6? Native ZFS raidz? Something else? Also curious about use of iSCSI instead of SMB for things like Time Machine, etc.
posted by snuffleupagus to Computers & Internet
I've acquired four 1TB notebook drives that I've put in a hotswap cage, to be placed in my Linux (Ubuntu 12.04) fileserver & mythbackend. I'd appreciate thoughts on using mdraid RAID5/6 vs. native ZFS on Linux (without ECC memory and other ZFS niceties) and maybe vs. other possible solutions like Greyhole.
The "server" is a somewhat modest affair, based on a GA-EP45T-UD3LR motherboard: Core 2 Quad Q8400 @ 2.66 GHz (LGA775, P45 platform, SATA II on Intel ICH10R, DDR3), currently with 4GB of RAM (the board will accept up to 16GB). This particular system currently has a small 20GB SSD for boot/root, a two-disk 700GB stripe set (two 350GB drives), and a two-disk mirror set (two 1TB drives) in mdraid.
The stripe set is given over almost entirely to MythTV as recording space. It generally stays around 80%-90% full, with Myth expiring programs as required to keep it there. A small part of the stripe is reserved as "incoming" space for sshfs mounts, from which files are copied to other partitions/machines and then deleted.
The mirror is used as my primary everyday networked shared storage for my desktops and laptop: archived files, incremental backups (at least the online copies), etc.
While the 700GB on the mirror seemed like a lot of space when I built it a couple of years back, I've grown tired of using as many external USB HDDs as I did then. Plus, I find my storage needs have grown faster than expected, partly due to the need to archive digital video project files (which I hadn't anticipated), as well as disk images and the like for testing and tinkering (also unanticipated).
I have been looking at ZFS for some time but haven't pulled the trigger, at first because I was wary of depending on Solaris or BSD without any practical experience with either, and also because of hardware compatibility and cost issues. That picture seems to have changed somewhat with the native ZFS-on-Linux port (despite its staying out of the kernel for licensing reasons).
So, now I find myself wondering if I should go ahead and install ZFS and add these disks as a raidz array, rather than the RAID5 or RAID6 mdraid array I had planned on.
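For concreteness, the two layouts I'm weighing would be created roughly like this (a sketch only; the device names /dev/sd[b-e] are placeholders for the four new drives, and the pool/mount names are made up):

```shell
# Sketch -- /dev/sdb..sde stand in for the four new 1TB notebook drives.

# Option A: mdraid RAID6 (double parity, ~2TB usable from 4x1TB)
mdadm --create /dev/md2 --level=6 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md2

# Option B: native ZFS raidz (single parity, ~3TB usable)
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
zfs create tank/archive
```

The tradeoff as I understand it: RAID6 gives up a terabyte for double parity, while single-parity raidz matches RAID5's capacity but adds checksumming and scrubs.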
My preference would be to stick to more standard Linux practices on this box, partially because I update it from the Mythbuntu repos and I'm worried something might break if I'm pulling from there and a ZFS PPA. Is that baseless superstition on my part?
I'm also not sure what's better in terms of performance. The box needs to remain responsive enough that Myth recording tasks will not fail due to the storage system monopolizing resources. This shouldn't require much CPU, but I've heard horror stories about network writes to shared mdraid RAID5/6 arrays causing other services to fail as everything grinds to a halt for parity calculation. Your experiences and/or advice with this would be much appreciated. Do I need to worry here?
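From what I've read, one knob that's supposed to help with RAID5/6 write stalls is the md stripe cache, plus the kernel's resync throttle; a sketch of the tuning I've seen suggested (md2 is a placeholder array name):

```shell
# Placeholder array name md2. Value is in pages (4KB each), so
# 8192 pages = 32MB of stripe cache per member device.
echo 8192 > /sys/block/md2/md/stripe_cache_size

# Rebuild/resync throttling (KB/s) can also starve other I/O if left
# at defaults during a rebuild; check the current ceiling:
cat /proc/sys/dev/raid/speed_limit_max
```

No idea if that's enough to keep Myth recordings safe under heavy network writes, which is really what I'm asking.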
On the other hand, I understand that ZFS is RAM-hungry, wanting 4GB plus another 1GB per TB of raidz storage to itself to perform well. That would mean adding another 8GB of memory to the system, which isn't impossible (or all that expensive). However, given that this system doesn't support ECC memory, I wonder if I'm better off saving my ZFS experimentation for a system that does, where I won't be complicating a "production" (at least in my life) server.
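If I did go ZFS on this box, my understanding is that the ARC can be capped so it doesn't fight MythTV for the 4GB; something like this (the 1GB cap is just an example figure):

```shell
# Cap the ZFS ARC at 1GB (zfs_arc_max takes bytes) so MythTV keeps
# headroom. Written to a modprobe config so it applies at module load.
echo "options zfs zfs_arc_max=1073741824" >> /etc/modprobe.d/zfs.conf
```

Whether a 1GB ARC leaves raidz performing acceptably is exactly the kind of thing I'd like to hear experiences with.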
Thoughts on other storage options (such as Greyhole or other managed JBOD setups) are also appreciated, although I've not been thrilled with what I've seen so far.
I'm also curious about serving some of the storage over iSCSI, for instance to support Time Machine, and I'd appreciate any thoughts and advice on doing that from Ubuntu 12.04 with mixed (OS X SL and ML and Win 7) clients. Given that the machines that would be using iSCSI are close to each other, and most have second network ports, would it be a good idea to put in a physically separate SAN for iSCSI, so regular network traffic doesn't get in the way?
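For the iSCSI side, the sort of setup I've been looking at uses the tgt userspace target that's in the 12.04 repos; a sketch (the IQN, backing-file path, and subnet are all made up for illustration):

```shell
# Sketch using the tgt userspace iSCSI target on Ubuntu 12.04.
# IQN, backing-store path, and initiator subnet are placeholders.
apt-get install tgt

cat >> /etc/tgt/targets.conf <<'EOF'
<target iqn.2012-08.local.server:timemachine>
    backing-store /srv/iscsi/timemachine.img
    initiator-address 10.0.1.0/24
</target>
EOF

service tgt restart
```

The backing store would be a sparse file (or an LV) carved out of whatever array I end up building, with each client getting its own target so the block devices aren't shared.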
(I've spent time reading zfs-discuss, sysadmin blogs, and linux forum threads and the like. Discussion in this area seems somewhat polarized between the very basic and rather complex, without much middle ground. I thought an askme might turn up some helpful syntheses based on others' endeavors.)