Help me design a home server.
December 27, 2011 10:11 AM
Help me design a home server for Photoshop work. Reliable, easy to expand, and cheap are the priorities. Lots of sub-questions inside.
I want to build a home server for heavy Photoshop use (1 GB+ file size).
The goals:
- A system with a few TB of storage that is easy to expand in the future
- Reliability is #1 concern - must tolerate a disk failure
- The user will re-save files at all major stopping points, so write performance must be decent
- Power consumption and noise are not important
- The server must run a CrashPlan client to sync all data to the cloud
- I would like for all the storage to appear as one large volume, even after adding new disks
- I will be the tech support, so I need remote access via the internet
- Money is short, $500-800 budget
My tentative plan... Does this sound good?
- Headless Linux box
- One logical volume of concatenated RAID 1 mirror pairs
- 8+ drive bays, preferably 12, add new mirror pairs as needed
- mdadm software RAID - I've read it's as fast as Intel motherboard RAID
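Concretely, this is roughly what I'm picturing (device names hypothetical, untested; just a sketch of the idea):

```shell
# Two RAID 1 mirror pairs with mdadm (hypothetical devices sdb..sde)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde

# Concatenate the pairs into one big LVM volume
pvcreate /dev/md0 /dev/md1
vgcreate storage /dev/md0 /dev/md1
lvcreate -l 100%FREE -n photos storage
mkfs.ext4 /dev/storage/photos

# Expansion later: create a new mirror pair (md2), then
# vgextend storage /dev/md2, lvextend, and grow the filesystem
```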
Many questions:
- How do I get lots of drives? Most motherboards seem to have about 6 SATA connectors.
- Should I go with a rack or tower case?
- Is there a way to cache network writes to the server memory, for increased apparent speed to the user?
- Are there any pitfalls to constantly running an SSH server for remote access? How do I deal with the dynamic IP?
- How do I make the server send an alert email when it detects disk failures?
- What must I do to make the server restart everything automatically if it loses power? Is a shell script enough?
- Is there a way to make the Photoshop machine's local disk look like a cache for the server, so writes go to the local disk immediately and are then moved to the server with something like rsync?
I know this is a bulky question... thanks for answers to any part!
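One more detail on the alert-email question: I've read that mdadm has a built-in monitor mode. Is something like this (untested, address hypothetical) the right idea?

```shell
# Run mdadm's monitor as a daemon; it emails on Fail / DegradedArray events
mdadm --monitor --scan --daemonise --mail admin@example.com

# Fire a test event for each array to confirm mail delivery works
mdadm --monitor --scan --test --oneshot
```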
How about one of the ReadyNAS Pro systems from Netgear? It does all of these things, is reasonably priced, and should be very reliable.
posted by iamscott at 10:44 AM on December 27, 2011
Um, yeah, you don't want a server for Photoshop files > 1GB in size, not on a home network. You want those on local disks. Throw a couple of 2 TB drives in your current case (or in an external SATA enclosure) and get a SSD for your Photoshop scratch disk.
posted by kindall at 10:44 AM on December 27, 2011
I can't answer all of your questions, but I just built something similar a few months ago, and let me tell you: you're not getting 8-12 hard drives for $500-800.
I built a 5 TB server with two drives mirrored for backup, one single drive for transfers, and the last two mirrored for media.
Most important thing to remember is that for 1GB+ photoshop files, you do not want to be trying to save them over the network. You won't get nearly the same network throughput as if you were working with the file locally.
What you're talking about as far as storage goes is a file server (I built mine with FreeNAS), but for the Photoshop work itself I would strongly suggest a directly attached hard drive, then backup to the server, CrashPlan, etc....
Also keep power consumption in mind. 8-12 hard drives spinning on a regular basis adds up to a lot of kWh, and depending on your electricity rate, your electric bill could go up substantially.
Ultimately, if you really want to do this right, you'd need server-grade hardware. I know you can DIY this with off-the-shelf components, but if you're going to be working with 1GB+ files a lot, presumably you're getting paid for the work and you want to be secure in knowing that your data is safe.
Like I said, I built a similar setup and after using it for almost a year, just today I ordered a NAS from QNAP (TS-219PII-US).
Their more professional products seem to have good ratings and that may be a direction to investigate.
Let me know if you have any specific questions about my setup and I'll be happy to give you details.
posted by eatcake at 10:44 AM on December 27, 2011
To clarify, the user won't be doing every load/save from the server. Incremental saves will go on the local machine, with a save to the server every hour or two during coffee breaks.
posted by scose at 11:03 AM on December 27, 2011
I like the general suggestion from a Slashdot thread about home media servers to NOT use RAID, but rather keep redundant copies on separate systems. You also don't want a lot of disks spinning all the time, both for electricity costs and to reduce long-term failures.
posted by sammyo at 11:20 AM on December 27, 2011
Incremental saves will go on the local machine, with a save to the server every hour or two during coffee breaks.
FWIW, the age-old community wisdom concerning Adobe art files (Pshop, Illy, etc.) and any network storage is to never save directly over the network. Rather, one should save from the app locally and then move the files off to the server only when done. Never save to a networked volume directly from the app. I actually have seen files end up corrupt for some unknown reason when saving directly to a network drive. Thus is the magic of Adobe.
posted by Thorzdad at 11:23 AM on December 27, 2011
1/ mdadm is better than most motherboard pseudo-RAID devices by a large margin; one huge advantage is that if the motherboard dies you can plug the discs into any other motherboard and get your data back immediately, rather than having to look for another hardware pseudo-RAID controller that supports the same on-disk format.
2/ Don't use RAID 1 and LVM to glue the bits together. Use the RAID10 driver. It will give you much, much better performance, albeit at the cost of less space. If money were no object, I'd be looking for 10K+ RPM drives, although Seagate's SSD/platter hybrids are also interesting.
3/ Unfortunately the MD cache driver isn't in the mainline kernels yet, so it's probably not reliable enough to give you software tiering in the storage layer, but keep an eye on it.
4/ LVM is still good for volume management on top of a RAID10 set.
5/ Keep a hot spare in the array to allow instant rebuilds.
6/ You can get motherboards with 10 SATA ports, all Intel chipset as far as I'm aware, and in fact the (very pricey) Asus Z8NA-D6 has 14.
7/ You haven't discussed network hardware as much as you should. In a perfect world you'd have a 10 Gb NIC plugged into a switch with a couple of 10 Gb ports and the workstations on 1 Gb, allowing a number of users to save and load at full speed at once. I'm guessing you don't have that luxury, but you could still look into (for example) a couple of 1 Gb cards in the server with bonding to give you > 1 Gbit/s performance.
Also, network performance is really tied up with NIC quality. A lot of the 1 Gb NICs out there are junk; they can't support jumbo frames properly (e.g. are limited to 4 or 6K). Steer clear of crap like RealTek based adapters, whether on the motherboard or not. Carefully check out the performance of the kernels in your preferred distro with various NICs.
8/ If you're using Samba to do the file sharing, make sure you have a recent version and a good handle on the various performance options. A well-configured Samba will run at near-wire-speed. A badly configured one will make you wonder why you upgraded from 10 Mbit ethernet.
9/ Look at your filesystem options. ext3 is still hands-down the most robust Linux filesystem. ext4 is gaining confidence. btrfs is still vapour. XFS may actually be a good choice for the sort of work you're doing, but most people suggest you really, really need a properly set up UPS for XFS. That's actually the route I'd go for the server you're describing - good UPS, tested auto-shutdown rules, and XFS for the photoshop volumes.
LVM+XFS also makes it easy to take regular, consistent snapshots of the state of the filesystems, so you could take advantage of that to make e.g. daily pre-upload snapshots, giving you an easy option to rescue people from Terrible Mistakes they've made.
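To make points 2/, 4/, 5/, and 9/ concrete, a rough sketch (device names and sizes hypothetical; adjust to your hardware):

```shell
# RAID10 across four discs plus a hot spare (points 2/ and 5/)
mdadm --create /dev/md0 --level=10 --raid-devices=4 --spare-devices=1 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# LVM on top for volume management (point 4/)
pvcreate /dev/md0
vgcreate vg_photo /dev/md0
lvcreate -L 2T -n lv_photo vg_photo   # leave free space in the VG for snapshots

# XFS for the photoshop volume (point 9/)
mkfs.xfs /dev/vg_photo/lv_photo

# Daily pre-upload snapshot before CrashPlan runs; LVM freezes XFS
# for you so the snapshot is consistent
lvcreate -s -L 50G -n photo_snap /dev/vg_photo/lv_photo
```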
I like the general suggestion in a slashdot about home media servers to NOT use raid, but rather redundant copies on separate systems. You also don't want a lot of disks spinning all the time both for electricity costs and to reduce long term failures.
This is terrible advice.
posted by rodgerd at 12:24 PM on December 27, 2011
Use one of these.
If you have a team of people, get a networked "server" kind.
If you are a single person, use one attached to the computer. There is no reason to have a "server" other than "I want it to go really slow and have another point of failure."
posted by Threeway Handshake at 12:47 PM on December 27, 2011
Thorzdad has it right. We even spell it out in a tech note. Saving and opening files from network drives is likely to cause problems and is unsupported.
Get a big local drive. If you have to share the files, get a big local drive AND a server, and sync the files by copying them to and from the server, rather than trying to save directly over the network.
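The copy step can be as simple as a scheduled rsync (paths here are hypothetical; substitute your own):

```shell
# Push finished files from the local work directory to the server share.
# -a preserves timestamps/permissions; --partial lets interrupted
# 1GB+ transfers resume instead of restarting from zero.
rsync -av --partial /Users/artist/Work/ server:/srv/photos/artist/
```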
ObDisclaimer: I work on Photoshop.
posted by DaveP at 1:03 PM on December 27, 2011
Thanks a lot for the answers, everyone! I'll try to avoid a forum-thread type response, but...
- I meant to say that the files would be copied to the server during coffee breaks. I am aware of the perils of saving directly over the network.
- Regarding Drobo: USB 2.0 is not an option because the USB bus is already clogged with printer, scanner, art tablet, iPod, etc. The eSATA one costs $799 without drives - out of the price range.
- Regarding dedicated NAS systems: it seems like the cheap ones have mixed reviews at best, and the good ones cost as much as a computer. What's the advantage vs. a server?
- Will the Ethernet ports on a home Wi-Fi router provide enough bandwidth? (the workstation is NOT connected over Wi-Fi, but other computers on the network are.)
posted by scose at 1:33 PM on December 27, 2011
Will the Ethernet ports on a home Wi-Fi router provide enough bandwidth?
Yes. Unless you are getting something VERY fast, a hard drive is slower than even a 100 Mbps Ethernet network.
- Regarding dedicated NAS systems: it seems like the cheap ones have mixed reviews at best, and the good ones cost as much as a computer. What's the advantage vs. a server?
The advantage of an appliance is that you don't have to worry about running a "server" with it. It is just a thing that gets plugged in and is basically as hands-off as a refrigerator. (But think of the cheap NAS things as uh... one of those telephones shaped like a shoe you got when you ordered Sports Illustrated in 1993.)
posted by Threeway Handshake at 4:55 PM on December 27, 2011
What's the advantage vs. a server?
Speaking as an IT guy who's got his eye on this very product segment, I will say that the dedicated devices provide higher levels of performance/support/quality/whatever at a lower price than you can probably manage without a lot of sweat & labor & study or a lot of money. *shrug* They did R&D and know what they are doing so you don't have to. :7)
If you are skilled with operating systems and hardware and software you may be able to meet or beat the reliability and performance of commercial gear at a lower price... but then, you probably also realize that when you do IT all day, it's the last thing you want to do at home!
posted by wenestvedt at 1:03 PM on January 4, 2012
This thread is closed to new comments.