how realistic is it to use one home server for multiple applications?
June 3, 2016 6:58 PM

I want to set up a home server that I can envision having multiple roles. Is that a realistic thing to attempt for an above-average, technically informed user?

I've run a fair number of servers, mostly based around OS X but also Windows and Linux. I'm not a high-end geek, more of a technically enabled semi-pro or something along those lines. I can see the need on my home network for a lot of cool things: VPN into the LAN plus an active firewall, with a wireless and an Ethernet NIC serving bandwidth from my DSL modem; a media server; a home automation server; a backup and file server; and an ownCloud server. Maybe other things as well.
Practically speaking, I wonder whether I'm skilled enough to build such a beast without creating compromises or security flaws that I wouldn't be able to catch. It also has to have the horsepower to run 24/7 under this load without being over-utilized.
I use Windows on my desktop because it's fairly easy to run at the user level, but I intend to migrate to open source. I see these cool fanless PCs, yet they get pretty pricey fairly quickly and I'm a budget-minded kind of droid. So I'm leaning toward Linux for my server system.
I'm not asking people to solve this for me, since it can be solved in lots of creative ways. I'm just curious whether you have set up something similar; perhaps you migrated to a separate Tomato firewall router plus a home server to keep those platforms separate. Thanks for any tips.
posted by diode to Technology (11 answers total) 2 users marked this as a favorite
 
Well, there's something to be said for separating public-facing services from critical infrastructure like backups and your file server; in production that would always be the case. For home use you can combine them if you want, but it's probably better to move the firewall/VPN role to a reflashed home router and keep the file/media server separate.

None of the services you mentioned will be a significant load on any recent x86 machine, either separately or in aggregate. A Raspberry Pi would not be enough because it has crap for I/O, so you wouldn't want to use it as a file server, but any real computer will be totally fine.
posted by ryanrs at 7:26 PM on June 3, 2016


I probably wouldn't use the server as a router, but sure, this is realistic and quite possible.

I don't really know how to quantify 'your above-average technically informed user', though.
posted by destructive cactus at 7:28 PM on June 3, 2016


Best answer: There are a lot of free virtualization options out there, too. You could run VMware ESXi, for example, and partition a single physical machine into a collection of virtual servers. This is handy for trying out new OSes and other things that would be disruptive to install on your physical infrastructure.

If you're going to do a decent amount of virtualization though, get plenty of RAM, since the virtual servers will still be using RAM even if they're mostly idle.
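As a back-of-the-envelope illustration of that RAM point, you can sketch a budget before buying hardware. The per-VM figures below are illustrative guesses, not measurements from any real setup:

```shell
#!/bin/sh
# Rough RAM budget for a host running a few mostly-idle VMs.
# All figures are made-up placeholders, in MB.
host=2048                     # hypervisor / host OS overhead
guests="1024 1024 512 512"    # e.g. firewall, owncloud, media, backup VMs
total=$host
for g in $guests; do
  total=$((total + g))
done
echo "plan for at least ${total} MB of RAM"
```

Even generous guesses like these land well within a cheap 8 GB machine, which is the point: for home loads, RAM headroom is easy to buy up front.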
posted by ryanrs at 7:30 PM on June 3, 2016 [1 favorite]


Best answer: There are upsides and downsides to this. It's a central point of failure, and on any machine that is both a gateway and an application server, you must take care not to open any ports to the wider internet that aren't hardened against attack. Virtualizing as many applications as possible into smaller VMs is one way to manage this.
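To make "not opening ports" concrete, here's a minimal sketch of a default-deny ruleset in iptables-restore format. The interface names and the VPN port are assumptions, and a real gateway would need more than this:

```
# /etc/iptables/rules.v4 -- minimal sketch, not a complete firewall
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# let replies to outbound traffic back in
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
# LAN side (eth1 here is an assumption) gets full access
-A INPUT -i eth1 -j ACCEPT
# the only hole punched from the WAN: the VPN (assumed OpenVPN on 1194/udp)
-A INPUT -i eth0 -p udp --dport 1194 -j ACCEPT
# forward LAN traffic out, and replies back
-A FORWARD -i eth1 -o eth0 -j ACCEPT
-A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
COMMIT
```

The idea is that every service you expose is a deliberate, one-line exception to a drop-everything default, which makes accidental exposure much harder.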

I have a single machine acting as gateway, access point, time capsule, photo / file storage, and backup creator. This also syncs to a google drive for geo-redundant storage.

I chose this because I had the hardware lying around, and I like having something I can connect a monitor to, watch the boot screen, and log into as root, rather than dealing with some tiny odd firmware, while at the same time fitting a small SSD as the boot device and a pair of 3.5" drives for file storage.
posted by nickggully at 9:44 PM on June 3, 2016


Best answer: I host multiple websites and my own email, running MySQL, Postfix, and Apache on one machine, and have for a long time. I used to run OpenWRT for wifi but now just use a simple Ethernet AP.

Owncloud (now Nextcloud; also check out Syncthing) and Plex or whatever media server you like can fit on any Linux server. Put a pfSense firewall on your own hardware in front of it and you get VPN, a caching DNS forwarder, and other fun stuff. If you can use Stack Overflow, you can set up all of these things.
posted by rhizome at 2:28 AM on June 4, 2016


i do something like this. most of the applications are not very cpu intensive so you likely don't need to worry about over-loading the machine (years ago i had issues trying to do something similar with a low power processor, but these days with a standard desktop cpu it's not an issue). if you want to run virtual machines you do need a fair amount of memory, though.
posted by andrewcooke at 8:15 AM on June 4, 2016


Best answer: As others have said, this is completely doable, and horsepower is unlikely to be a problem, though you do want to think a little about what you need out of it. (For example, if you're expecting to serve video from it, how much bandwidth does that take, and are your disks and network capable of that? The answer is probably "yes" unless, as ryanrs says, you're experimenting with the raspberry pi end of things.)

Keep good backups, and make sure you have a plan for when it fails.
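A minimal sketch of the "keep good backups" part, assuming an rsync mirror to a second disk; the paths and schedule here are made up, and an off-site copy should go on top of this:

```
# /etc/cron.d/home-server-backup -- hypothetical nightly mirror
# of the file share to a second local disk
30 3 * * * root rsync -a --delete /srv/files/ /mnt/backup/files/
```

Note that `--delete` makes the mirror track deletions too, so it protects against disk failure but not against accidentally deleting a file; snapshots or a versioned backup tool cover that case.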

Keep in mind that these kinds of things are often fun and educational to set up, but less fun to maintain. Some day the hardware or the OS will need a major update, and getting everything up and running again after that may require relearning a bunch of stuff that you haven't thought about in a few years. Also, someday something may break, and if you or somebody else depends on some service you're running, then you'll suddenly have to drop everything to figure out what happened and fix it.

So, maybe just be careful what you commit to, and think about how you'll transition back off it if you lose interest in the future.

But don't let that discourage you; it certainly can be fun if it's what you're into.
posted by bfields at 8:24 AM on June 4, 2016 [1 favorite]


just to follow up on bfields's observation - it helps if you document what you've done. obviously, no one wants to "write documentation", but if you have a blog or similar, making notes there of particular problems you solved not only helps others via google, but is also invaluable when you need to repeat things years down the line.

(i have hit problems, googled for answers, and found help in my own forgotten blog entries via google.... i'm sure i'm not the only one)
posted by andrewcooke at 10:42 AM on June 4, 2016 [1 favorite]


Lots of us do this, it's totally reasonable. But it only makes sense if you like the idea of learning the technologies. If your goal is just to have working file sharing, just use Dropbox. But if you want to play with file sharing technologies and customize it yourself, go for it!

Absolutely use Linux for security. I recommend Ubuntu 16.04 right now. You can start this right now, today, by running a virtual machine on your Windows machine. Any 4-year-old PC someone's got lying around will work as dedicated hardware, although if you spend ~$500+ you'll get something much nicer. Or go the other route and see how far you can get just with a Raspberry Pi.
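One low-commitment way to do the "virtual machine on your Windows machine" step is Vagrant on top of VirtualBox. A minimal sketch (the box name is the official Ubuntu 16.04 image; the memory figure is arbitrary):

```ruby
# Vagrantfile -- throwaway Ubuntu 16.04 server VM for experimenting
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024   # plenty for poking at a few services
  end
end
```

Then `vagrant up` followed by `vagrant ssh` drops you into a Linux shell, and `vagrant destroy` throws the whole thing away when you've broken it beyond repair.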

Keep the router separate; it acts as a firewall blocking 95% of the security risks you could engender. The standard consumer NAT firewall is remarkably good at keeping stuff behind it safe, although once you start poking holes through it for services you need to be more careful. (Speaking of which, you can serve the VPN right on the router with the right firmware.)
posted by Nelson at 10:43 AM on June 4, 2016


It's pretty easy to do a lot of things with one server. At one point I actually used my tomato router for screen+irssi for example. You probably don't even need huge amounts of RAM -- my linksys router from like 2004 and D-Link NAS both run Linux on like 64MB of RAM. Many of the features you speak of exist as packages for my NAS, for example. You'd probably need to monitor RAM as you add features, and well, updates are maybe less regular than one might prefer. Personally, I wouldn't bother making one big thing, because I already have the separate things, and I want low power, quiet machines, not one loud thing making lots of noise while I sleep or watch TV.

But the reality is that a lot of this stuff isn't that resource intensive for a user base of 1. OpenVPN for 1 is basically a rounding error, and a media server without anyone watching its media is basically no load. A former roommate used his previous PC for many of the things you want to do 10 years ago. For partitioning, you have 3 options:

1. UNIX user partitioning: an openVPN user, Apache user, owncloud user, etc.
2. Docker: uses Linux containers (it started on LXC, now its own runtime) to isolate things, and lets you put quotas on things.
3. VMs: More RAM hungry, but you can probably slice them thinner than DO or Linode or Amazon would.

The top is light on RAM but light on isolation; the bottom is RAM-heavy with strong isolation. Trade off as you desire.
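To make option 2 concrete, here's a hedged docker-compose sketch. The image names are real public images, but the tags, ports, volume paths, and memory quotas are assumptions you'd adjust for your own setup:

```yaml
# docker-compose.yml -- one container per service, each with a memory quota
services:
  owncloud:
    image: owncloud/server
    ports: ["8080:8080"]
    volumes: ["/srv/owncloud:/mnt/data"]
    mem_limit: 512m
  plex:
    image: linuxserver/plex
    network_mode: host          # DLNA/discovery wants the host network
    volumes: ["/srv/media:/media"]
    mem_limit: 1g
```

The quota lines are the point: each service gets isolation and a resource cap without paying the per-VM RAM overhead of option 3.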
posted by pwnguin at 10:36 PM on June 4, 2016


Response by poster: Thanks for all the great tips, that's some good info to consider.
posted by diode at 2:48 PM on June 5, 2016


This thread is closed to new comments.