System administration for the totally unprepared.
March 20, 2019 9:20 PM

I've suddenly become the system administrator for a computational server. What do I need to do?

I'm a PhD student who mostly does environmental engineering, but I have been dipping my toes into bioinformatics in the last year. A little while ago, my research group acquired a decent Linux workstation (Ubuntu 16.04) that we set up for use as a bioinformatics server. Up until recently I was the only one using it; so, by default, I have the best Unix skills and have been designated as the system administrator. Which... I have no idea how to do.

Our server is only minimally accessible: we can connect to it directly through SSH if we're hardwired in the same lab room, or from anywhere if we use the university's VPN. That said, do I need to do anything else to make the machine more secure? We won't be storing vital information or anything sensitive, but I assume I need to protect it against hacking?

One caveat is that all users really need sudo abilities; it's required to install/run a lot of the software we use.

Note: I saw this question but I think my scenario is much more limited.

Thanks for any help!
posted by Paper rabies to Technology (10 answers total) 6 users marked this as a favorite
 
If your group is comfortable with using ssh keys, disabling password logins makes sshd more secure in case the firewall rules ever get dropped. I would also look at netstat output, make sure that every listening service actually needs to be running, and firewall any running services appropriately. Blocking outbound traffic, with exceptions for needed services and software installs, could also help.
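A minimal sketch of the sshd side (assuming Ubuntu's stock OpenSSH server; make sure everyone's keys actually work before turning passwords off, and double-check the option names against your own sshd_config):

# /etc/ssh/sshd_config -- key-only logins
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password

# then reload sshd and see what else is listening
sudo systemctl reload ssh
sudo ss -tulpn    # or: sudo netstat -tulpn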

You might consider a host-based IDS like OSSEC for monitoring, but it may be overkill.
posted by Radiophonic Oddity at 10:59 PM on March 20, 2019


Best answer: You probably don’t need to do much with a server that’s only accessible via SSH by trusted users. The required sudo access does add uncertainty: what if one of your users installs software that opens up greater access, or creates an account for someone you don’t know? How do you know nobody’s putting vital information there in the future?

You should set up some form of regular rolling backup no matter what and assume that people will eventually clobber each other’s files and programs even if they don’t mean to.
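A very rough sketch of what "regular rolling" could mean (assuming a second disk or network mount at /backup, which is a made-up path here; a purpose-built tool like rsnapshot or borg is worth a look before settling on anything, and whatever you pick, test restores):

# /etc/cron.d/nightly-backup -- illustrative only; adjust paths and retention
# keeps seven rotating daily copies of /home and /etc under /backup
30 2 * * * root rsync -a --delete /home /etc /backup/$(date +\%u)/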

A tool I like for minimally-administered servers is etckeeper to continuously save configuration changes in /etc with a VCS like Git.
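Getting it going is roughly this (package names as in Ubuntu's repos; on recent releases the install step initialises the repository for you, so the init line may be redundant):

sudo apt install etckeeper git
sudo etckeeper init                      # put /etc under version control
sudo etckeeper commit "initial import"
cd /etc && sudo git log --stat           # review the history later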
posted by migurski at 11:02 PM on March 20, 2019 [3 favorites]


Best answer: You can configure sudo to allow groups of people to run only certain commands, if the things they need sudo for can be limited.

Are you sure that only your lab can SSH? If you can go through your uni's VPN, then either it's set up to restrict your users to your lab range, or it's configured to let any uni person in.

Your simple protection is `fail2ban`, which will block any IP that fails login more than X times.
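A minimal sketch (fail2ban's Ubuntu defaults already watch sshd, but an explicit /etc/fail2ban/jail.local makes the limits visible; the numbers below are only illustrative):

sudo apt install fail2ban

# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 3600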

There is also `logwatch`, which will periodically scan the logfiles of the machine and email you odd things.
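Roughly (the override path is Ubuntu's packaging convention; substitute a real address for the placeholder below):

sudo apt install logwatch

# /etc/logwatch/conf/logwatch.conf -- overrides for the daily cron run
MailTo = you@university.example
Detail = Med
Range  = yesterday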

I have tons, you should just memail me.

I was your uni's network admin and my group's Linux admin, and I've worked with tons of departmental sciencey single-machine types.
posted by zengargoyle at 11:05 PM on March 20, 2019 [3 favorites]


Best answer: How to Secure a Linux Server is a very recent resource that links to a bunch of other guides. I'll take your word about everyone needing root, but I think that makes the ability to rebuild the machine from backups pretty important (awesome sysadmin has some suggested tools; I'd guess you'll want point in time recovery rather than a simple mirror). Beyond that and the Ubuntu docs, I'd suggest setting up subscriptions to a few things that'll teach you little by little, e.g. Apticron (email alerts about package updates; when evaluating whether to upgrade something, some keywords to look out for are CVE and much more importantly RCE), LWN.net (a regular source of free security news and deep dives worth paying for), and the Nixers newsletter (a free weekly roundup of news items, etc.; subscribe here).
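For the Apticron piece specifically, the setup is roughly this (a sketch; the config path is Debian/Ubuntu's, and the address is a placeholder):

sudo apt install apticron

# /etc/apticron/apticron.conf (copy the shipped example file if this doesn't exist)
EMAIL="you@university.example"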
posted by Wobbuffet at 11:09 PM on March 20, 2019 [2 favorites]


Best answer: The single most important issue that you need to get on top of before altering anything else is making sure that you have a backup process in place that (a) backs up everything that needs to be backed up, as often as it needs to be backed up, and (b) is regularly demonstrated to be capable of restoring whatever needs to be restored in a timely fashion.

This guide and template makes thinking this stuff through properly a fair bit less overwhelming.
posted by flabdablet at 1:00 AM on March 21, 2019 [4 favorites]


Best answer: I'll take your word about everyone needing root

I won't. Only the sysadmin needs root. Everybody else can have the specific things they need delegated to them by setting up sudo properly, rather than just leaving it as the way Ubuntu spells "please".
posted by flabdablet at 1:46 AM on March 21, 2019 [4 favorites]


The bad news: I worked at a large university as a sysadmin for over ten years and did some work with HPC clusters, and ad-hoc servers like yours run by PhD students were a constant cause of problems for us and often a risk to the university. If your server is hacked it can cause significant problems for the university even if it contains no important data.

If your server isn't a secret "shadow IT" server then the best thing you can do is talk to your institution's IT staff and ask them to take a look at it. Hopefully they can give you advice on backups, updates, local firewalls, user management etc, even if they don't have the resources to manage it for you. They might be able to use a network-based firewall to lock down network access to the server. They might offer you a partly-managed VM to replace it.

A well-configured server needs little maintenance: you'll just need to install updates, manage user accounts, and set up some monitoring to detect trouble. If you can, send the log files to another server, or back them up elsewhere, so you can still investigate after a problem. However, if you've got a lot of users running sudo there's a greatly increased risk that one of them will cause problems, so try to lock down what they can run with sudo by limiting the commands they can use.
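For the updates and off-box logging parts of that, a rough sketch (assuming Ubuntu's unattended-upgrades and rsyslog packages; loghost.example is a made-up name for whatever machine you can actually send logs to):

# automatic security updates
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# /etc/rsyslog.d/90-remote.conf -- forward a copy of syslog elsewhere
*.*  @loghost.example:514    # single @ is UDP, @@ is TCP
# then: sudo systemctl restart rsyslog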

Only old-school sysadmins manage systems as individual servers now - the modern approach is to automate as much as possible, partly because even for a skilled sysadmin it's possible to make mistakes and managing more than a few individual servers is a faff.

The good news is that this is a great opportunity to learn fun, useful skills and even a potential second career path - it's a lot like how I became a sysadmin. I'd guess that about half my colleagues in the IT department were originally PhD students in your position.
posted by BinaryApe at 1:50 AM on March 21, 2019 [4 favorites]


Response by poster: Thanks everyone for the great advice! Looks like I have a lot to learn, but at least I have some clear starting points!

Just to clarify: we set up this server with help from our university's IT staff. They just didn't give me much specific input about how to protect the system.

I also decided for now to lock down root access. Unfortunately, this will be a real pain if my labmates want to install anything outside of conda. This may be key if they want to wade as deeply into the bioinformatics world as I have... but I'm not convinced they want to yet (and at any rate, they don't have the skills yet to avoid doing some serious damage). I may reach out to some bioinformatics circles to see how they manage their sudo access on shared servers.
posted by Paper rabies at 8:44 PM on March 21, 2019


Best answer: If you create a software group, and configure sudo to give members of the software group the ability to run apt and/or aptitude and/or apt-get as root, then any user you add to that group will have the right to install or remove any package they like but they still won't have full root access; so they won't be able to do permanent damage to your backup system, your firewalls and whatnot.

They won't have sufficient rights to set global configuration for the installed software, which will force every user to set their own local configuration if they need something other than the package maintainer's defaults. This should go some way toward preventing accidental messings-up of each other's results.

They will be able to e.g. uninstall your backup and firewall packages entirely, but given that what you're trying to protect against is too-busy-to-think temporary incompetence rather than actual malice, that's a fairly unlikely threat scenario for you.

Sudoers entries to achieve that would look like this (warning, not checked, please do your own due diligence before trusting):
Cmnd_Alias APT = /usr/bin/apt, /usr/bin/apt-get, /usr/bin/apt-cache, /usr/bin/aptitude
 
%software ALL = APT
The ALL in that last line means that no check is done on the hostname before allowing the sudo. Hostname checking is really only useful when a common set of sudoers files gets made available to multiple hosts to set policy for an entire site; for a single server like yours, always using ALL for the hostname should be fine.

Personally I'd put those two lines into their own file as /etc/sudoers.d/software rather than adding them to the main /etc/sudoers file. I've found that fairly consistent adoption of a one-file-per-incremental-change system maintenance policy makes backing out from mistakes a lot less fraught.
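The mechanical steps around that file look roughly like this (the group and file names just follow the example above, "alice" is a placeholder, and visudo -cf is the usual syntax check; verify on your own system before relying on it):

sudo groupadd software
sudo usermod -aG software alice               # repeat for each labmate
sudo visudo -f /etc/sudoers.d/software        # paste in the two lines above and save
sudo visudo -cf /etc/sudoers.d/software       # syntax check the new file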

It's worth getting well across sudo, because it can do a surprising amount to make a sysadmin's life easier. The man page is notoriously dense and intimidating, and is probably best approached by skipping straight to the Examples section first, but there's excellent online documentation. Most of those who write passionate screeds denouncing sudo don't appear to have read much of that.
posted by flabdablet at 10:46 PM on March 21, 2019 [3 favorites]


I have to +1 BinaryApe's comment. This is an asset attached to the university network, and as such it presents a significant security risk. It is 100% the responsibility of the Uni's IT department to manage that asset even if they don't want to.

I recommend going that route. Push back at whoever is telling you that you’re “suddenly” a Linux sysadmin and instead ask for information on the request process to onboard this box to the Uni’s IT department properly.
posted by jay2dadub at 7:39 PM on May 3, 2019


This thread is closed to new comments.