
How to restrict shell access?
October 19, 2006 8:42 AM   Subscribe

SSHFilter: How to restrict commands available to users who connect to a Linux box for file transfer, presumably via SSH?

I am in the process of moving our minor (less than 100 sites) hosting from a Windows box to a Linux (Red Hat Enterprise Linux 4) box. I'm decent with Windows and had the system fairly well locked-down, with each user sandboxed to their own little portion of the disk. I'm sure the limited number of commands available to FTP users had something to do with the lack of cracking I experienced.

However, as we move from ftp, with its insecure cleartext password transmission, to ssh, I have a few concerns about my less-than-trustworthy users. They are less than conscientious and are certainly not security-minded. I've seen professional web development companies ask me for world-writable, world-executable directories without batting an eye; these users act as if it's their own personal Linux box and if they have to hose it and start over every so often, who cares? No, they cannot be turned away. Yes, everyone will sign a security policy, but then will completely ignore it.

Problem Constraints:

1) I want users, as they connect, to land in their portion of the directory tree (that is, their website), and to be locked down into their little directories, without a way to cd out of their portion of the directory tree, or even look at files elsewhere.

2) I know that the graphical SSH clients are basically drag-and-drop for files (although at least one I know of has chmod built in), but they are still executing the equivalent of a handful of commands. How do I lock users down to a handful of commands, even if they ssh in to a shell? I cannot really trust them with chmod, chown, or chgrp. I'd rather decide which commands to allow than hunt for a list of things to deny. I wouldn't mind them using passwd, but outside of that and a small group of other commands ...

3) I have to structure this around groups - there can be up to four people working on a single site, although a typical user is solo.

4) To make matters even worse, some users I will probably have to elevate to positions of greater trust, allowing them access to a broader range of commands, to mysql, and so forth.

5) Password transmission must be encrypted.

rssh looks interesting, but also experimental, so I am not sure I can "sell" this idea. Or should I use some kind of secure FTP server like vsftpd and only grant ssh to a handful of trusted users?

Has anyone else done this in what's a hostile environment? What are the pros and cons of your approach? Note: I'm a complete Linux n00b. While I can find a lot of different things in Google, I seem to be getting a great deal of opinions without the reasoning behind them, which makes it difficult to understand why people choose one thing over another.
posted by adipocere to Computers & Internet (13 answers total) 1 user marked this as a favorite
 
What a brain fart I just had hehe. It's been a while.

As far as #1/2, it looks like chrooting them to a particular home with their own bin subdirectory, containing all the ssh and regular binaries they need for the job, should do it - with their home defined in /etc/profile and their own bin directory in their PATH, of course.
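To make the per-user bin idea concrete, here's a minimal sketch; the command list is illustrative, and a temp directory stands in for the user's chroot home so it can run unprivileged:

```shell
# Stand-in for the user's chroot home (in practice something like /home/alice)
HOME_DIR=$(mktemp -d)
mkdir -p "$HOME_DIR/bin"

# Copy in only the commands this user is allowed to run (illustrative list)
for cmd in ls cp mv rm; do
    cp "/bin/$cmd" "$HOME_DIR/bin/"
done

ls "$HOME_DIR/bin"
```

Note that dynamically linked binaries also need their shared libraries copied into the chroot before they will actually run inside it.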

#4 kind of sounds like you're trying to go beyond user/group permissions, but maybe you can just place more or fewer binaries in these users' bin directories.

#5 ssh takes care of.

my $.02
posted by prodevel at 8:59 AM on October 19, 2006


I've used chrsh for this, but with a very limited number of users that I'm sure I could've trusted with a normal shell.
posted by duckstab at 8:59 AM on October 19, 2006


I don't think that rssh is experimental. In fact, I'm pretty sure that the IT people where I work used it (they wanted a way for people to upload files without running computational simulations on the web server).

As for keeping people out of others' directories, that's what chmod is for: remove read, write, and execute privileges for everyone but the user you want to have them (without execute privileges on a directory, users can't even enter it). If only user bob has access to /home/bob, then he's safe.
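A quick way to see that in action on a scratch directory (mode 700 = rwx for the owner, nothing for group or other; the path and user name are just for illustration):

```shell
mkdir -p /tmp/permdemo/bob
chmod 700 /tmp/permdemo/bob      # owner: rwx, group: ---, other: ---
stat -c '%a' /tmp/permdemo/bob   # prints 700
```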
posted by Humanzee at 9:02 AM on October 19, 2006


What you want is a chroot jail. Basically, the user's shell 'thinks' that its / (root directory) is whatever you set it to, say /home/user. Short of possible security flaws, they can't get out of it. (The downside is that you have to copy over whatever commands you want them to be able to use... This does, however, let you control exactly what they can use.)

It doesn't sound like you want them having shell access at all. I use WinSCP for file uploads from Windows to my Linux server. In theory, you could set their login shell to something like /bin/false, and they won't be able to get a shell. (However, I find that chmod is a pretty useful tool!)
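For reference, /bin/false works as a "no login" shell because it simply exits with a nonzero status; assigning it would be done as root with something like `usermod -s /bin/false alice` (username hypothetical). A quick check:

```shell
# /bin/false does nothing and exits nonzero, so a login session that
# execs it ends before the user ever gets a prompt
/bin/false || echo "exit status: $?"   # prints "exit status: 1"
```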

Any scp/ssh/sftp stuff encrypts password transmission.
posted by fogster at 9:03 AM on October 19, 2006




Also, note that if you're a "complete Linux n00b," I'm concerned that you're maintaining a Linux server with 100 users on it. It does take a bit of getting used to.

I found that cPanel (non-free) is a pretty easy-to-use 'control panel' for a web-hosting setup, and if you set people's shells to 'jailshell,' it basically sets up the chroot jail for them.

I seem to be getting a great deal of opinions without the reasoning behind it

Welcome to the world of Linux.
posted by fogster at 9:06 AM on October 19, 2006


Oh, and if you want to allow SSH for some users (the ones you want to trust to run mysql, etc.) but not others, evidently it's easy.
posted by Humanzee at 9:07 AM on October 19, 2006


I do this using chrooted SSH. There are pre-patched openssh-chroot tarballs here - you will probably need to upgrade openssh and zlib as well. Then, you can chroot each user into their own jail, or make one jail for each group that has only the binaries they should be able to use. Then you can move the directories they should be able to access into their jail and symlink them from their "real" locations (you can't do it the other way around!).

Building the jail isn't easy though - you have to statically build all the binaries, or strace all the library calls. I have step-by-step instructions and a pre-built jail listing for RedHat - e-mail me if you want to try this and I can save you a lot of time.
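As a sketch of the library-chasing step, `ldd` can list the shared objects a binary links against so they can be copied into the jail at matching paths (a temp directory stands in for the jail here; a real setup needs root and more care, e.g. device nodes):

```shell
JAIL=$(mktemp -d)   # stand-in for the real jail path
mkdir -p "$JAIL/bin"
cp /bin/sh "$JAIL/bin/"

# Copy every shared library /bin/sh needs, preserving its directory layout
for lib in $(ldd /bin/sh | grep -o '/[^ ]*'); do
    mkdir -p "$JAIL$(dirname "$lib")"
    cp "$lib" "$JAIL$(dirname "$lib")/"
done

# With root you could now verify it works:
#   chroot "$JAIL" /bin/sh -c 'echo jailed'
```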
posted by nicwolff at 9:27 AM on October 19, 2006


Check out scponly: http://www.sublimation.org/scponly/
posted by effugas at 9:28 AM on October 19, 2006


I'm afraid that you may have a few misconceptions about what is really possible. It is exceedingly hard to TRULY isolate users on a shared hosting setup. Let me explain.

You are attacking this from the angle of the shell that the user is allowed to use. This works to keep beginners and average users from doing bad things, but it won't do squat against a determined attacker. For example, if you allow your users to run PHP scripts then they basically have full access to the entire filesystem through PHP. In fact there are "php shell" scripts that essentially give you a full shell in the context of the Apache user (e.g. www-data, httpd, nobody) entirely through a web interface, so the shell setting you have for them in /etc/passwd is completely irrelevant. There is safe mode for PHP, but the PHP documentation is very clear that it is not a security catch-all and should not be relied upon to secure a machine, as there are always ways to get around it. (And some scripts fail with safe mode enabled, so in a hosting environment you can't always enable it.)

But even if you don't allow your users to have PHP scripts, there are still tons of ways to get around the shell restriction. For example, with cPanel the default was to assign users a restricted shell, but you could just create a crontab entry to run chsh and change your shell back to bash. So much for that idea. And if you allow your users to run CGI scripts like perl, then right there they have complete access to the entire system (in the security context of the apache user) regardless of what shell you may have given them.
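For illustration, the whole bypass described above fits in one crontab line; cron runs the job outside the restricted shell, and /bin/bash is almost always listed in /etc/shells, so chsh accepts it:

```
# crontab entry: at the next minute boundary, restore bash as the login shell
* * * * * chsh -s /bin/bash
```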

The realization that I am trying to get you to make here is that unless you allow your users 100% static pages and no scripting, they will have complete access to the system in the context of whatever user Apache runs as -- access files, run commands, etc. This is the definition of scripting, after all. So farting around with restricted shells doesn't change this fact at all. Hell even without PHP and with only CGI a determined user could compile their own shell with a reverse-connect-back socket, upload it to their cgi-bin directory, hit that URL, and bam, they have a full shell in the context of the Apache user.

The only true way to rope them off is to ensure that all scripts run in the context of the user, and not the apache daemon. This can be done, I believe, with a combination of suPHP and FastCGI. But there could be a major performance hit to doing it this way, and there still might be holes. I recommend that instead of trying to pretend that the security model is something it's not, you embrace it and live with it.
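For what it's worth, the mod_suphp side of that looks roughly like the snippet below. The directive names come from mod_suphp, but the module path, directory, user, and group are made-up placeholders, and any FastCGI wrapping would be configured separately:

```apache
# Hypothetical per-site snippet: run bob's PHP as user/group bob via mod_suphp
LoadModule suphp_module modules/mod_suphp.so

<Directory /home/bob/public_html>
    suPHP_Engine on
    suPHP_UserGroup bob bob
    AddHandler x-httpd-php .php
    suPHP_AddHandler x-httpd-php
</Directory>
```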
posted by Rhomboid at 10:02 AM on October 19, 2006


Oh and I didn't really make it clear... the goal with suPHP and FastCGI is that this access would then be in the security context of their user account and not the "nobody" (or www-data, httpd, whatever it's called) account, and so you could set file permissions accordingly. But there will always be world-readable files and they will thus always have access to them, and they will always be able to run any command they want in the context of that user account. suPHP+FastCGI only allows you to switch it from being nobody (which must necessarily have read access to all customers' web files) to the individual user account (which could conceivably have read access to only their files).
posted by Rhomboid at 10:07 AM on October 19, 2006


Thank you, everyone. This has given me a lot to work with.

Yeah, it seems like having as many httpds spawned as sites I have would be ... problematic. I was volunteered for this role, despite my total Linux inexperience, so sorry if this is a dumb question.

Someone should write a book on web hosting in a hostile environment. :)
posted by adipocere at 4:16 PM on October 19, 2006


Oh, do what grouse said. Restrict your users to sftp: all you need to do is run sudo chsh -s /usr/lib/sftp-server username (and make sure that path is listed in /etc/shells, or chsh will refuse it).
posted by cytherea at 8:46 PM on October 19, 2006


This thread is closed to new comments.