How do I kill terminal without also killing the script?
November 8, 2010 11:01 AM

How do I allow a remote process to continue after I close my terminal window?

I run simulations on Amazon's EC2 cloud for my work. I usually launch an instance, then connect to it from my PC and run a script. This can take anywhere from 1 to 9 hours, but that's OK because I don't really use my work PC for much else and it stays on all the time.

However, that desktop is now out of commission and I need to do this from my Mac. The problem is that I cannot leave my laptop on for 9 continuous hours, since I move between my office, a coffee shop, and home. So is there a way for me to ssh to my server, launch a script, and close that window without killing the process? I don't need to check on the process while it runs, because the output is written to disk and I can retrieve it later.
posted by special-k to Computers & Internet (10 answers total) 6 users marked this as a favorite
GNU Screen.
posted by kdar at 11:03 AM on November 8, 2010


Can you run screen or nohup?
posted by geoff. at 11:06 AM on November 8, 2010


To make it survive the shell dying, use nohup when you start the process.
If you've already started the job, you can Ctrl-Z to suspend it, bg it to resume it in the background, then disown it (see help disown; it's a bash builtin).
Screen and tmux are other options that keep a server terminal session alive that you can reconnect to.
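For example, a minimal sketch of both approaches, assuming bash and a hypothetical script named simulate.sh:

$ nohup ./simulate.sh > output.log 2>&1 &    # start it immune to hangups, output to a logfile

# Or, if you've already started it in the foreground:
$ ./simulate.sh
^Z                 # Ctrl-Z suspends the job
$ bg               # resume it in the background
$ disown %1        # drop it from the shell's job table so it won't get SIGHUP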
posted by mkb at 11:15 AM on November 8, 2010


Screen seems to work (I followed this tutorial) on my Snow Leopard machine, and Ctrl-A d does the detaching. Then when I close Terminal, launch a new window, and enter screen -r, I get back to where I was. Great. But I still don't see how I would do this for a remote server.

Please tell me if this is the right order.
ssh to server (it's an Ubuntu box, by the way)
screen
launch script
detach

Then close the window. Hours later I open terminal again, connect to server, and then do screen -r to get right back to where I was before?

The other way doesn't sound right. If I launch Terminal first, then screen, then ssh to the server, launch the script, detach, and close the window, doesn't my server think that I have now terminated the connection?
posted by special-k at 11:17 AM on November 8, 2010


If you do anything at all in a unix terminal ever, you need to learn about Screen. It is a life-changing piece of software.
posted by mhoye at 11:18 AM on November 8, 2010


For some reason I rarely have this problem; I think it depends on some setting somewhere that I'm ashamed I don't know off-hand. To test, I just created a little shell script that waits 60 seconds and then echoes to a file. I ran it in the background (on an EC2 instance running Ubuntu 10.10), logged out, logged back in, and it was still running (sleeping).
zen       3773  0.0  0.1   4544  1072 ?        S    20:05   0:00 /bin/bash ./test.sh
It finished normally and wrote its output. I also routinely start up things like this on Solaris without using nohup, with no problems. (I do understand and use screen/tmux/vnc when applicable.) The only issue I ever run into is ssh wanting to hang because of open file descriptors, which is fixed with a bit of redirection. I remember using nohup back in the 80s; I somehow suspect that there's some shell variable set somewhere that controls whether or not to worry about SIGHUP signals. You might be able to get away with simple daemonization if your script doesn't require input during its lifetime, e.g.:
$ ./script </dev/null >output.txt 2>error.log &
Who knows why I haven't had problems not doing the whole nohup/session disconnect thing?
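If you want to chase it down, the setting is probably bash's huponexit option, which is off by default; when it's off, an exiting login shell doesn't send SIGHUP to its background jobs. You can check or flip it with shopt:

$ shopt huponexit      # shows whether the option is on or off
$ shopt -u huponexit   # make sure it's off, so background jobs survive logout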
posted by zengargoyle at 12:21 PM on November 8, 2010


Screen might be overkill. nohup is a pretty simple command to do exactly what you are describing.

(Be careful though, because processes can run away very easily this way without you knowing...)
posted by schmod at 12:30 PM on November 8, 2010


Please tell me if this is the right order.

Yes, that's the right order. You want to run screen on the remote machine that the process runs on. You can also run the script in the background (without screen) and have it write to a logfile that you read however you like, with `scriptname >/home/special-k/ec2.log 2>&1 &` (the redirections have to come before the trailing &).
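Concretely, the whole round trip looks something like this (a sketch; the hostname and script name are placeholders):

$ ssh ubuntu@your-ec2-host       # from your mac
remote$ screen                   # start a screen session on the server
remote$ ./run_simulation.sh      # launch the script inside screen
# press Ctrl-A then d to detach, close your laptop, go to the coffee shop...

$ ssh ubuntu@your-ec2-host       # hours later, from anywhere
remote$ screen -r                # reattach to the still-running session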
posted by rhizome at 12:36 PM on November 8, 2010


I'm assuming you're talking about Linux, not Windows, on the server. In that case, screen is certainly a great tool, but it's overkill. Just do this:

$ nohup whatevercommand > myoutput &

And log out. Your process will be running and dumping the output into myoutput. You can use "ps -ef" to get a list of all running processes, and filter that through grep to see if your command is still running. This takes a little practice to get right.
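For example (with whatevercommand standing in for your actual script name):

$ ps -ef | grep [w]hatevercommand    # bracketing the first letter keeps grep itself out of the results
$ pgrep -f whatevercommand           # or use pgrep, which does the filtering for you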
posted by chairface at 3:21 PM on November 8, 2010


In this instance, you're definitely looking for nohup, and not screen - you don't need to check on the process, you just need to launch it and keep it running after you log out.

That being said, you definitely want to learn to use screen... except that you might not. If you're starting from scratch in the world of terminal multiplexers (of which screen is the best known), you might want to look into tmux. Written as a direct response to the lack of support and development on the screen codebase, it's lighter-weight while retaining all the useful features of screen, and you can split buffers vertically as well as horizontally. It also has easier-to-understand configuration options than screen.
(to compare: my .screenrc, with comments, is twice the size of my .tmux.conf to accomplish the same things)
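A minimal tmux version of the workflow above, with "sim" and the script name just as placeholders:

$ tmux new -s sim            # start a named session on the server
$ ./run_simulation.sh        # launch the job inside it
# press Ctrl-b then d to detach; log out and come back whenever
$ tmux attach -t sim         # reattach later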
posted by namewithoutwords at 5:41 AM on November 9, 2010

