I need a Terabyte!
March 22, 2006 12:04 PM
What's the best way of adding a Terabyte of reliable, contiguous, archive-quality storage to my system? By 'best' I mean Toyota Corolla as opposed to Hummer.
OK, here's the deal. I have about five or six hard drives dangling off my Mac, about 600 GB or so. At some point in the next year I'll be upgrading to an Intel Mac and I really want to get rid of my ad hoc storage system.
I intend to do what I do at the moment, which is to have a pair of internal system drives which are mirrored every night, so if one physically fails I simply boot from the other and keep working. (I also have two different incremental offsite backups in case of disk corruption).
What I want in addition to this is about (at least) a terabyte of reasonably fast storage. (It doesn't have to be record-breaking.) This is mostly used for very large image files (250 MB to over 1 GB). Ideally it would be:
1. Hardware fault tolerant, so if one disk in the array fails I don't lose any data
2. Use as much commodity hardware as possible
3. Be as scalable as possible by adding commodity hardware
4. Self-monitor for corruption etc
5. Reasonably quiet, although I can relocate it if it needs to be loud
I am quite happy to build something myself as you would build a PC.
What I don't want is what I have at the moment ... a bunch of drives in mismatched firewire casings of odd sizes, with no real filing system, filled with random stuff most of which I'm afraid to delete in case I don't have a backup, and which will take out large quantities of important data if it fails.
I have *way* too much data to consider online storage à la Amazon or Gdrive.
A tape backup is a possibility if it's more cost effective than other solutions.
I would imagine I'm talking some kind of RAID array here, but what kind? And what hardware should I be looking at?
Via digg http://reviews.cnet.com/4520-10167_7-6282556-1.html?tag=lnav
posted by tiamat at 12:20 PM on March 22, 2006
Best answer: I've been looking into this myself recently. Short of building something, you might find some of these NAS (network attached storage) devices interesting. Yellow Machine, Infrant, Buffalo, and many more. Lots of reviews here.
posted by one at 12:25 PM on March 22, 2006
You absolutely want RAID 5. That's the one where one drive in 4 or 5 can fail without losing any data. These days your best bet is five 300GB ATA100 or SATA drives or so. That would get you 1200GB of space.
The solution I suggest is that you take an old PC, put linux on it and use software RAID. Then connect over the network with NFS or samba. New macs all have gigabit ethernet so speed shouldn't really be a problem.
The PC can be pretty cheap. If all it is doing is file serving for a couple of users, a Pentium 2 would probably suffice. Get whatever is cheap and will run quiet and cool.
Don't muck about with SCSI, it's not worth the price for what you want to do. I recommend Seagate SATA-150 drives for their performance and long warranties.
Pros:
-Cheaper than anything else you'll find.
-Expandable by adding more drives.
-More flexible than any other solution.
-Put it in the closet or attic or wherever is out of the way.
Cons:
-A pain to set up.
-Requires more expertise than a nice commercially packaged box.
-Pretty loud and power hungry.
posted by joegester at 12:29 PM on March 22, 2006
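A minimal sketch of the software-RAID-plus-fileserver approach joegester describes, assuming a Linux box with mdadm and an NFS server installed; the drive names, mount point, and subnet are purely illustrative:

    # Build a 4-drive RAID 5 array (roughly one drive's worth of space goes to parity)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Put a filesystem on it and mount it
    mkfs.ext3 /dev/md0
    mkdir -p /srv/archive
    mount /dev/md0 /srv/archive

    # Export it to the local subnet over NFS (Samba would work just as well for a Mac client)
    echo '/srv/archive 192.168.1.0/24(rw,sync)' >> /etc/exports
    exportfs -a

    # Check on the array at any time
    cat /proc/mdstat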
You could get something like this. I have never used one and so can't comment on whether Yellow Machine is any good, but I feel like there's a market for small 1-2 TB servers.
However, if you're comfortable building something, it seems like the easiest solution may be just to get a big tower case, a good RAID controller, and a couple 250GB drives.
posted by pombe at 12:32 PM on March 22, 2006
I second the RAID + fileserver idea. It's worked for the scientists and their huge files at my former employer. It is indeed a certifiable pain to set up. Hire somebody unless you are into that sort of thing.
posted by By The Grace of God at 12:39 PM on March 22, 2006
Best answer: A few weeks ago I purchased a ReadyNAS NV made by Infrant. It's a NAS box, so it sits on your net, via a gigabit Ethernet connection. It does RAID-0,1,5, all trivially, and has a nice web interface. Underneath it runs Linux, but the enclosure and hotswap drive bays are what makes it awesome.
Before I purchased it, I lurked on their forums to see if people were complaining much. They weren't. I've been recording my own observations of it here.
So far it has given me no problems. I bought it with one 400GB drive and have added one more. I figure in a few weeks I'll buy a third. You can buy it with 4x500 GB drives. Search on Amazon to buy it directly from Infrant.
Disclaimer: I have nothing to do with the company, I'm just a happy customer.
posted by todbot at 12:40 PM on March 22, 2006
You could try looking through these enclosures at Cool Drives. None of the 4-bay ones do built-in RAID 5. They do offer RAID 0 and RAID 1. Then you'd have storage attached to your computer.
It's probably better to get 1) a full-tower case, stuff it full of drives and do RAID 5 (as others have suggested) or 2) an all-in-one box like the Yellow Machine pombe linked to.
Editing over the network can be painful with video, but shouldn't be that bad with photos. As long as you use a 100mbit or 1gbit network, and of course set Photoshop to scratch locally.
Worst case, you get used to copying files over locally to work on them, but include archiving them to the network drive in your workflow. Presumably you've got things staged in different areas anyway.
posted by voidcontext at 12:42 PM on March 22, 2006
Oh. And with respect to hardware RAID in your case, I think it's a bad idea for a couple of reasons. The cards are expensive and you don't need the performance benefit.
More importantly, using a hardware accelerator card may tie you to that card. If the card fails, you might have to hunt around to find a compatible card so you can recover your data.
posted by joegester at 12:45 PM on March 22, 2006
"Oh. And with respect to hardware RAID in your case, I think it's a bad idea for a couple of reasons. The cards are expensive and you don't need the performance benefit."
What? Just about every new motherboard made today comes with a built-in hardware RAID on the third and fourth IDE slots. Even if you don't buy a new motherboard, a brand new RAID controller won't cost you more than 30 bucks. I could buy one on pricewatch.com for 14 bucks, right now. I don't know why you think they're expensive.
Why anyone would want to use software RAID when hardware RAID is so cheap, I have no idea.
posted by SweetJesus at 1:02 PM on March 22, 2006
There's a massive difference between motherboard-supported RAID 0 or 1 and a robust RAID 5 board.
The poster is describing a RAID 5, not a striped or mirrored array.
posted by bshort at 1:05 PM on March 22, 2006
"There's a massive difference between motherboard-supported RAID 0 or 1 and a robust RAID 5 board."
I've got a Promise RAID 5 compatible chip on the ASUS motherboard I bought 6 months ago, so I don't think you're correct. Besides, if you're that concerned about RAID performance, spend the extra 40 bucks and buy a Promise FASTTrack card.
I'm of the opinion that software-based RAID sucks. The XP version of software RAID is a clusterfuck, and I don't think Linux's implementation is that much better, either.
posted by SweetJesus at 1:15 PM on March 22, 2006
My media is on a server that has a 3ware SATA RAID controller.
It's easy, reliable, and not terribly expensive. It's rebuilt failed drives quite painlessly. I'm a satisfied customer.
posted by I Love Tacos at 1:30 PM on March 22, 2006
RAID10 (and RAID0+1, not recommended) also allows for disk failure with less overhead than RAID5, at the loss of space. They are more feasible to do in software.
posted by kcm at 1:32 PM on March 22, 2006
SweetJesus: "I'm of the opinion that software-based RAID sucks. The XP version of software RAID is a clusterfuck, and I don't think Linux's implementation is that much better, either."
Not true at all except for perhaps the XP part. Software RAID is just fine in Linux and other operating systems and is more portable to boot than some hardware solutions. Do you have data to back up your random claims?
posted by kcm at 1:33 PM on March 22, 2006
The poster is not describing a RAID 5. What the poster wants is to mirror the system disk (RAID 0) and additional storage which may or may not be raided. A simple hardware RAID for the system disks is cheap and much more desirable than a software RAID. He'll have a much harder time getting a software raid working correctly and reliably on a boot disk, and there is really no benefit. If he does want to RAID 5 the storage, that could be done in software, but definitely not on the system disks.
My advice is either buy a motherboard or cheap card to raid 0 two 100G SATA drives (this is an automatic mirroring raid, so each disk is identical and bootable), and deal with the storage as a completely separate entity. A couple 500 G drives in the box that you regularly back up to a NAS or firewire should be fine. Don't get into tapes. It's just too much headache, and disk is cheap nowadays.
posted by team lowkey at 1:34 PM on March 22, 2006
I've been using Linux software RAID 1 and 5 for a couple years. It works exactly as it should, which is to say completely transparently.
posted by joegester at 1:35 PM on March 22, 2006
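For what it's worth, replacing a failed member of a Linux md array is a short exercise; a sketch, with the array and device names purely illustrative:

    # Mark the bad disk as failed and pull it from the array
    mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc

    # Add the replacement; the array rebuilds in the background and stays usable meanwhile
    mdadm /dev/md0 --add /dev/sdf

    # Watch the rebuild progress
    watch cat /proc/mdstat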
Shit, I meant RAID 1. I always mess that up.
posted by team lowkey at 1:36 PM on March 22, 2006
team lowkey: "The poster is not describing a RAID 5. What the poster wants is to mirror the system disk (RAID 0) and additional storage which may or may not be raided. A simple hardware RAID for the system disks is cheap and much more desirable than a software RAID. He'll have a much harder time getting a software raid working correctly and reliably on a boot disk, and there is really no benefit. If he does want to RAID 5 the storage, that could be done in software, but definitely not on the system disks.
My advice is either buy a motherboard or cheap card to raid 0 two 100G SATA drives (this is an automatic mirroring raid, so each disk is identical and bootable), and deal with the storage as a completely separate entity. A couple 500 G drives in the box that you regularly back up to a NAS or firewire should be fine. Don't get into tapes. It's just too much headache, and disk is cheap nowadays."
RAID0 is striping, not mirroring (RAID1). There are benefits to using software RAID, like portability, cost, and simplicity. 500GB disks are not cost effective for a "Toyota" solution. Now you're recommending to stripe a disk set and then back up to NAS or external? That's way overthinking this and killing the cost and simplicity aspects.
Just buy a 1.6TB ReadyNAS, run it in RAID5 (or RAID10 if an option) and be done with it. This thread is terribly inaccurate and not really all that helpful so far, in my opinion.
posted by kcm at 1:39 PM on March 22, 2006
Response by poster: [original poster replies]
Actually I don't want to RAID the system disk, because my current system of mirroring overnight protects me from my own stupidity if I do something brainless and hose my system (which is not unknown).
I have a BIG Antec enclosure and motherboard which could certainly be pressed into service.
posted by unSane at 1:39 PM on March 22, 2006
"I've got a Promise RAID 5 compatible chip on the ASUS motherboard I bought 6 months ago, so I don't think you're correct."
You may be able to get RAID 5 built in on the motherboard, but it's still really unusual, and what you were originally describing ("Just about every new motherboard made today comes with a built-in hardware RAID on the third and fourth IDE slots.") is nearly always a RAID 0 or RAID 1 solution.
In fact, a cursory search on NewEgg shows that there's only a dozen AMD-compatible motherboards that offer RAID 5 as an option (out of 300), and a similar ratio in the Intel-compatibles.
posted by bshort at 1:41 PM on March 22, 2006
The "RAID" controller on just about 99% of motherboards is simply a shim to get the BIOS to recognize the RAID pair that is then more or less processed in software. That's not actually a bad thing for RAID0 / RAID1, since it's essentially just directing the reads/writes and not dealing with parity overhead of RAID5. It's better than nothing but it's not a 3Ware card.
posted by kcm at 1:44 PM on March 22, 2006
Yeah, I already called myself out on my inaccuracy. My new advice is listen to kcm, while I crawl off with my tail between my legs.
posted by team lowkey at 1:50 PM on March 22, 2006
I seriously want a ReadyNAS, because it's the best. But I don't think I actually need SATA disks, hot swapping, etc. Those are for small business, i.e. dozens of users. The cheapest/simplest 1TB storage is probably the Terastation (640GB -> 1TB -> 1.6TB). But they use ATA (so what, few users, right?) and can do RAID etc. I personally want a NAS to stream wirelessly to my Xbox, and presently I'm getting good enough speeds, with no issues, from an external USB drive on my Linux box.
I've also considered building my own out of an Xbox; you might be able to find a broken one for $40, then put two hard drives in it with Linux. But that's a big mod job and the max is two drives, so that kind of sucks.
With the tax return these days I'm looking towards stuff like the terastation though!
posted by Napierzaza at 1:59 PM on March 22, 2006
Just out of curiosity, if you get a 3ware card would you still need a driver to get it to work in windows/linux? Would you be able to easily transport the array to another system?
posted by Paris Hilton at 2:18 PM on March 22, 2006
Whichever solution you go with, make sure it has per-disk temperature sensors. I've run a couple of hand-built Linux RAID systems, external firewire enclosures, etc. as big storage systems, and they all have had premature drive failure, probably due to heat.
Thermal design is horrible on most PC cases or external enclosures and when you stick a few disks in them and run them 24/7, you start cooking the disks and they die. If you start thinking towards a TeraStation or other commercial NAS, search the forums for heat-related issues.
Napierzaza, if you don't want hot-swap, Infrant makes the ReadyNAS X6, which is almost $100 cheaper, slightly larger and harder to swap disks and slightly inferior thermal design, but uses the same software/hardware as the ReadyNAS NV.
And to anyone that thinks software RAID-5 sucks, just note that all non-enterprise (and some enterprise) NAS servers out there run Linux and software RAID.
posted by todbot at 2:50 PM on March 22, 2006
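If you do roll your own box, per-disk temperature and SMART health can be watched with smartmontools; a rough sketch, assuming the package is installed and the drives report SMART data (the email address is a placeholder):

    # One-off health check and temperature readout for a drive
    smartctl -H /dev/sda
    smartctl -A /dev/sda | grep -i temperature

    # Or let the smartd daemon watch the attributes and mail you when a drive starts to go
    echo '/dev/sda -a -m you@example.com' >> /etc/smartd.conf
    /etc/init.d/smartd restart   # init script name varies by distro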
todbot, is that NAS you describe quiet?
lots of confusion about RAID levels in here. it doesn't help that the market kind of ran away with the term and didn't stick to the berkeley definitions...
posted by joeblough at 2:54 PM on March 22, 2006
Best answer: FreeNAS looks promising to me, but I haven't tried it. It's a FreeBSD/M0n0wall based distribution that can be installed on a small 16MB compact flash card or usb drive.
The idea is that you can throw it in a big tower with a bunch of cheap drives and set up software RAID (0,1 or 5), user permissions, etc. via a web config interface and you're all set.
posted by Pryde at 3:06 PM on March 22, 2006
I disagree with the folks just saying "use software RAID." I've seen too many people go that route and get absolutely fucked when something actually breaks.
Here's an idea... go to www.lacie.com and get a 1TB disk. You can get a refurbished 1TB disk for $599. Plug it in and go. No screwing around with Linux crap. Buy two of them if you want. ;)
Or, check out some of their other offerings. They make some nice hardware RAID stuff (hardware RAID is better. I think anyone claiming otherwise is an idiot, but that's my experience.) and Lacie is a decent company.
"And to anyone that thinks software RAID-5 sucks, just note that all non-enterprise (and some enterprise) NAS servers out there run Linux and software RAID."
This is total misinformation. 100% bullshit. A lot of the NAS servers out there actually run Windows XP Embedded. Yes, some run Linux, but many of them do not.
posted by drstein at 3:40 PM on March 22, 2006
Software and hardware RAID both have advantages and disadvantages. If you're not familiar with both, your path of least resistance solution is a ready-made NAS box. You will pay a small premium for the convenience; figure $1k/1TB currently for an Infrant/Buffalo/etc. box.
Be sure to select a redundant scheme (i.e. not RAID0, which is what I think the LaCie drives use) unless you want to run frequent backups to another storage system of similar capacity. Backups are still useful since RAID doesn't protect you from data deletion.
In my professional career, I've found it appropriate to use each as well as weird combinations of both. I've kept many petabytes of data safe and continue to draw a paycheck to do so, ergo I feel safe stating my opinions. :)
I *will* say that software RAID5 is pretty bad, since you're doing a lot of parity calculation overhead on your main CPU, so that should be avoided.
posted by kcm at 4:40 PM on March 22, 2006
Oh man. So I had my heart set on Streamload as an offsite backup solution, but that ReadyNAS looks sexy, and I could just rsync over Gigabit-E ... but same-site backup is sketchy, right?
posted by lbergstr at 5:09 PM on March 22, 2006
Off and on I've thought about building a large amount of external storage. I don't know much about Mac, but I'll assume that there's some kind of software RAID available.
The approach I considered was to buy 2 to 4 ATA hard drives in the 200-300GB range (wherever the sweet spot in terms of $/GB is today), some ATA-USB adapters, and a USB hub. Put it all in an old SCSI enclosure, and run one USB cable back to the server machine. For the next 4 spindles, get another enclosure and run another cable back to the server.
USB inherently supports hot-plugging. To remove a disk, first pull the USB connection, then the power; to add a disk, do it in reverse.
USB 2.0 high speed is 480Mbps, while sequential reads on a single 300GB SATA disk (the easiest benchmark for me to gather) go at about 60 MBps. 60 MBps * 8 bits/byte = 480Mbps. So for sequential striped reads, this setup would be limited by the performance of USB. I don't really know how well USB Storage fares in the case of multiple disks on the same bus, so for all I know it's awful.
SCSI External case 4 bay with 300W Power Supply $85
USB 2.0 to IDE Cable $19.99 * 4
USB self-powered hub $7.99
Maxtor 300GB drive $110 * 4
Grand total: $613 + shipping for 1200GB (nominal) storage.
Note: I haven't done this myself, and don't have specific experience with any of the hardware.
While on newegg.com, I noticed this 1TB "NAS" device, $1069.99. It has gigabit ethernet, so in theory its top speed beats USB 2.0; in practice, two reviewers complain that the speed is low, while another says it's fast. If you only have 100 megabit networking (most of us do), this is probably going to be the case. It seems they sell it empty if you want to pick your own drives.
posted by jepler at 5:17 PM on March 22, 2006
Response by poster: [original poster]
Thanks for all the replies.
I am wary of the build-it-yourself PC solution precisely because I have a self-built PC sitting on my workbench with a fried motherboard at the moment.
The NAS boxes are nice, but again you seem to be reliant on the hardware RAID, and if it fails you have an expensive lump of metal and a bunch of unreadable drives.
How about this cheap and cheerful solution? Bear in mind I'm thinking of using this as seldom-accessed backing storage, and can keep my working files on a separate local drive.
I buy 2N firewire cases and fill them with 2N cheap commodity hard drives. I attach these to my old dual G4 and mirror N of them nightly to the other N.
I guess I could RAID 0 or JBOD each N of them separately to make two contiguous blocks of storage.
I basically run the G4 as a fileserver (I have OS X Server which I can install).
If any of the drives fail, I just swap it for another and mirror it back.
I am the only user and I do not need burning streaming performance.
Comments?
posted by unSane at 6:13 PM on March 22, 2006
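The nightly mirror step in that scheme is essentially one cron line on the G4; a sketch, with the volume names purely illustrative (Tiger's bundled rsync also takes -E to carry HFS+ metadata, if these files need it):

    # Mirror the primary set to the backup set at 2am, deleting anything removed
    # from the primary so the two stay identical
    0 2 * * * rsync -a --delete /Volumes/ArchiveA/ /Volumes/ArchiveB/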
I've never gotten more than 15 megabytes/sec transfer rate over USB2. I got much better transfer speeds over Firewire (even FW400) as it has less protocol overhead than USB.
Then again, right now my external storage is a bare 250G Hitachi SATA drive sitting on my desk attached with one of these USB2-to-SATA cable kits with power supply. I really should put it in some kind of enclosure..
posted by mrbill at 6:40 PM on March 22, 2006
drstein, I misspoke and you're kind of missing my point, no need for vulgarity. I meant that virtually all of these NAS boxes run a real OS that is responsible for the RAID calculations and that they're not 'hardware RAID' (i.e. ASIC-based like 3ware cards). Whether or not it's Linux, FreeBSD or Windows Embedded is beside the point I was trying to make. Perhaps my sample set was too small. The first two home-NAS boxes I heard of, Buffalo TeraStation and Infrant ReadyNAS are both Linux-based and these "media storage servers" put out by the likes of D-Link, Netgear, and Linksys seem to be derived from their routers, which all run Linux. (and yes, Linksys has recently been moving to VxWorks)
joeblough, the ReadyNAS NV is quiet enough for me to have it in my living room. It's quieter than the two-drive small aluminum firewire enclosure I was using. But the NV has the added benefit of having well-designed cooling and temperature sensors to let me know if a drive is failing. The firewire enclosure killed a drive with no warning.
I used to be a believer in the JBOD approach but no longer for the cooling reasons and because for some reason I couldn't maintain sustained ~1Mbps transfers (e.g. divx playback) without huge jitter. This was across multiple computers and multiple OSs, leading me to believe it was the enclosure's problem. Copying files and other normal use was fine however.
So NAS fileserver for me from now on. I'd rather spend the extra few hundred bucks and not have to worry about that crap anymore.
posted by todbot at 7:29 PM on March 22, 2006
mrbill may be right. The only drive I have handy to test with is a 5400rpm drive removed from a laptop. I get 21MB/s for sequential reads, reported by Linux's "hdparm -tT". Or is this just the low performance of laptop hard disks? For the drive installed in the laptop (different manufacturer, also 5400rpm), I get only 34MB/s compared to over 60MB/s on a desktop machine with a 7200rpm drive.
So go with firewire. The major cost is the drives themselves, so it would only change the "bottom line" slightly.
posted by jepler at 7:44 PM on March 22, 2006
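(For reference, the numbers above come from hdparm's built-in read test, which anyone can reproduce on a Linux box; the device name is illustrative:)

    # -T times cached reads, -t times buffered reads off the platters;
    # run it two or three times and average
    hdparm -tT /dev/sda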
USB is, frankly, designed to eat CPU. Intel helped design it that way. Firewire, on the other hand, was designed partially with small consumer devices in mind and is extremely low overhead in comparison.
Remember, though, that you're dealing not just with theoretical bandwidth for these devices; you also have latency and a host of other issues (including CPU usage). Latency and bandwidth are highly co-dependent. I wouldn't recommend external drives since you have so many other options - including putting four drives inside a case you already have.
posted by kcm at 7:49 PM on March 22, 2006
Best answer: I have quite a bit of experience with doing this stuff relatively cheaply, so I can definitely give you some pointers.
First, Linux software RAID is very, very good. It generally takes a very expensive dedicated card to beat it on performance, and it's completely portable to just about any server architecture. Hardware RAID is desirable from some standpoints, but tends to be very expensive.
3Ware cards, despite being pricey, SUCK at RAID5, even the 'RAID5 optimized' ones. They have Linux support, and they're reliable, but they're incredibly slow in RAID5. And they suck the life out of the system they're in while writing... it will get terribly sluggish and painful to use.
I have a 3Ware 8500-series card in my server, and an Athlon 1900+ on 333MHz RAM, bottlenecked by PCI, was able to outrun it by a factor of at least four. And the system remained very responsive under load.
3Ware cards are good, though, as JBOD (Just A Bunch of Disks) controllers. They're PCI-X compatible, meaning that the normal PCI bottleneck is bypassed, if they're in a system with PCI-X slots. Normal PCI RAID cards will bottleneck performance. PCI just isn't capable of moving data fast enough to keep an array saturated, along with all the other things the computer has to do. It DOES work, and it can be acceptably fast, but it will go _a lot_ faster if you can move past regular PCI.
If you want to do this on the cheap, however, then standard PCI is the way to go. Things you would need:
Case with room for at least six drives (you can do fewer drives, but it's less efficient). Room for seven drives is even better.
Intel-chipset board (they tend to be more reliable under Linux)
Pentium-something. Make sure to get one that's on an 800MHz bus, with RAM that's fast enough to match. (DDR-400, generally, on the cheaper boards.)
A 4-port JBOD controller. You can get these very cheap. Note that some motherboards will have enough connectors to work on their own, although you may have to mix SATA and PATA drives. You may need as many as seven connections, depending on how many drives you want to use.
Up to 5 equal-sized drives for your RAID array. 300GB is about the sweet spot in terms of bytes per dollar. You may need to buy them with different connector styles, depending on how many ports of each type you have in your system.
1 small/cheap drive for your boot/system partition. (It's hard to boot from RAID5.) If you want extra protection, use a second small/cheap drive and mirror it.
A good-quality power supply... at least 400w. PC Power and Cooling makes the best power supplies in the business, but they're expensive.
A spare room to put the beast in... with seven drives, it'll be LOUD.
I wrote up partial instructions on how to get the actual RAID array built and running, but it was getting REALLY long, so I'll just skip past that part. If you end up going this way, post back with a new question and I'll help you through getting things running.
If you want your RAID server to really run _fast_, and your desktop has built-in gigabit ethernet, you might think about going up to a gigabit-capable server. Don't bother with this unless you have native gigabit or are about to get a machine that has it. Adding gigabit via PCI will bottleneck on the slot, and you'll likely get no more than twice the performance that you would from regular Fast Ethernet.
For a super fast server, only if your existing desktop has built-in gigabit Ethernet:
A PCI-X compatible motherboard: This one, for instance, has four PCI-X slots and one PCIe slot. This board has four SATA ports, but only one PATA port, which means you'll need to add more ports if you want a 5-drive RAID and a mirrored boot partition. This motherboard saves about $140, and drops SCSI (which you aren't using anyway), faster DDR2 support, and two PCI-X slots. If you weren't worried about expanding, this would be just fine, and quite a bit cheaper.
A Socket 775 Pentium... This one would be just fine. Slower would be okay too, but I don't see anything slower with an 800MHz FSB. If you want to get ritzy, you can go with this Pentium D multicore CPU. These are very fast and relatively cheap. They run very cool (and don't need as much fan racket to cool them) because they're some of the first 65nm chips. Intel multicore isn't as fast as AMD, but the bang per buck is great. Multicore is NOT NECESSARY, though.
Some fast RAM...that link is to a 1gb pair, you can put two pairs in this particular board. If you're going with the lower-end board, this memory would save you $15/pair.
A 3Ware controller of whatever type you need... this is kind of expensive. If you can find a JBOD card that does PCI-X or PCIe, and nothing else, that would work fine too. This Highpoint RocketRaid card looks very promising, at $160. (4 ports, PCI-X... since you have only one PCIe port on this board, you don't want to waste it.)
A gigabit switch... you can get these quite cheaply. Dell's 2708 is $90 right now. The 2716 will support Jumbo Frames, but that's quite complex and probably nothing you want to fool with now.
You can probably keep your existing Cat5 network cables.. they'll probably work just fine. You can buy Cat6 if you want. Newegg is not a good source for cables... their upfront price is cheap, but they nail the hell out of you on shipping.
If you can afford the extra cost for the better hardware, the resulting RAID will absolutely SCREAM in comparison to the baseline system. I recently got this set up here, and I can't believe how fast it is... it's almost like having the drive directly attached to the computer. In some cases, I can write data to the server RAID faster than I can write it to the local hard drive....and I have very fast local drives.
Overall, the biggest cost will be storage... the drive cost. If you go for route #1 and you have spare hardware with enough controllers lying around, the drives and a good power supply may be the only real cost. Option #2, since you'll have to buy everything new, is likely to be quite spendy, but the performance is _a lot_ better.
Keep in mind that an Option #2 machine is _really_ beefy, and would easily function as a central server for 25+ people, so it's overkill by most standards. Its biggest drawback is that IDE drives have slow seek time, which would mean it wouldn't perform well in heavy-duty multiuser situations. But, overall, it's still DAMN fast, and incredibly cheap by historical standards. I remember when setting up a machine like that would have run $30,000... you could probably do it now, including a terabyte of storage, for under $2K.
posted by Malor at 9:47 PM on March 22, 2006 [1 favorite]
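One piece worth adding to a build like this, to cover the "self-monitor for corruption" requirement: md can both alert on failures and scrub the array for latent bad blocks. A sketch, with the email address a placeholder and kernel support for the check action assumed:

    # Run the md monitor as a daemon and mail on degraded/failed events
    mdadm --monitor --scan --mail=you@example.com --daemonise

    # Kick off a background consistency check (reads every block and verifies parity);
    # worth running from a weekly cron job
    echo check > /sys/block/md0/md/sync_action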
Yeah, but jumbo frames are useful if you're making a point to spend like $3k+ on a completely overblown single-user fileserver and Gigabit Ethernet to boot, so why not get the SMC 8505T for $50-70ish and have JF support? Makes a big difference for writes.
posted by kcm at 10:16 PM on March 22, 2006
I wasn't talking about $3K at all... the cheaper 'expensive' components I listed were about $710, including the switch. Add case, power supply, CD, and whatever storage you want... so it'd be the cost of drives plus probably $900 if you were starting completely from scratch. Add $550 for five 300 gig drives (giving about 1.2 terabytes of usable storage), fifty bucks for a boot drive, thirty bucks for some case fans, and you're in business... roughly $1550. And that machine would scream.
No, it's not necessary... it's entirely possible he could retrofit an existing PC for the cost of drives, plus possibly a controller, an upgraded power supply, and/or some more case fans... between $550 and $800 or so. I'm not sure what his budget max is, so that's why I mentioned both solutions.
I'm not yet using JF here, and I'm still getting 800mbit throughput after some kernel parameter tweaking. I plan to start experimenting with that in a few days... not sure my firewall or my wireless link will handle talking to internal machines running JFs.
Given the possible interoperability issues, I didn't really want to recommend using them.... but your suggestion of getting a JF-capable switch is probably a good one, though, if 5 ports are enough.
posted by Malor at 1:18 AM on March 23, 2006
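(Turning jumbo frames on at the Linux end is a one-liner, assuming the NIC, the switch, and the Mac all support 9000-byte frames; the interface names and address are illustrative:)

    # Raise the MTU on the server; the Mac end would be something like 'ifconfig en0 mtu 9000'
    ifconfig eth0 mtu 9000

    # Sanity check: a large, unfragmentable ping should still get through
    ping -M do -s 8972 192.168.1.10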
Best answer: "You absolutely want RAID 5"
For my local mass storage I like to stick to multiple RAID-1 arrays. Simplicity is a boon when dealing with failures and things don't go quite to plan; although I have a hardware RAID card, I can still read my disks on any bog-standard SATA controller, and if I end up with a broken array from, say, multiple transient disk failures, recovery is more likely.
"why not get the SMC 8505T for $50-70ish and have JF support?"
SMC do an 8 port version too -- 8508T. I have one of each, they're nice, and you can hook them together if you need more ports.
Personally for my >1TB fileserver I'm running a very nice Supermicro SC742T EATX case with 7x hot-swap SATA and plenty of ventilation, a Tyan K8WE with plenty of memory, an 8 port PCI-X LSI MegaRAID (mainly because of good driver support under FreeBSD) and a 4-way Intel 1000/Pro NIC (a good choice even if you're only going for a £30 single port).
I highly recommend Supermicro -- they're a bit expensive, but no more so than the equivalent Lian-Li, and they're *so* much better designed, especially when it comes to keeping lots of disks cool. The rest is probably a bit overkill just for a fileserver.
posted by Freaky at 3:07 AM on March 23, 2006
I built a 1TB hardware RAID-5 server in 2002 out of an old Supermicro 2xP3 board, a Promise SXxxxx Raid card, and a bunch of Western Digital drives.
It's still humming along just fine.
posted by meehawl at 5:13 AM on March 23, 2006
One thing... the embedded Linux OS in some of the NAS devices such as the Terastation can only format and read drives in FAT32. You will run into backup problems when you discover that your long file names on the client Mac won't copy until you switch to shorter names, unless you create a disk image or archive and then back that up.
Also, LaCie drives, in my experience, have an awful failure rate. I've never been able to figure out why, since they use standard drive mechanisms, but I have had terrible luck with them, and a perusal of AskMe for "lacie" will reveal other tales of woe.
posted by fourcheesemac at 7:08 AM on March 23, 2006
from what i've read on macintouch, i think the lacie drives have really crappy power supplies. when one of these blows up, it can easily kill a hard disk. this happened to me with a fanless FW800 enclosure. i knew it was trouble from the start, but one day i forgot to turn on the external fan i rigged up for it, and an inductor blew in the 5V power supply, taking one of my 250GB disks with it...
posted by joeblough at 8:02 AM on March 23, 2006
Best answer: How To: A Complete Terabyte File Server For About $500
posted by caddis at 6:29 PM on March 27, 2006 [1 favorite]
This thread is closed to new comments.