Quickly delete a directory containing many files
April 16, 2007 2:07 PM
In a Unix-type system, is there a way to quickly remove a directory that contains lots and lots of files?
So I ran a job overnight and was kind of careless when I programmed it. When I went to the directory where I was writing my log files and typed 'ls', the system basically sat there. Then I realized that I had written a separate log file for each trial, and I ran a lot of trials... I'm not sure how many files are in that folder but it could easily be a million. oops!
It's been spinning for an hour so far. It isn't accepting new connections - it stalls at the password check of ssh - so I can't kill the job. Once it finally finishes 'ls'ing, I want to remove that folder so I don't forget and 'ls' it again. If I 'rm -rf' it, I expect it will have to page through the files, and that could take just as long. What if I just change the name of the folder - will that be quick? Any other options?
You should kill the ls job, though. One quicker way to see the directory's contents is "find . -ls".
posted by about_time at 2:14 PM on April 16, 2007
Actually, the -d option to rm will apparently unlink the directory regardless of whether it's empty or not. Then, in theory, an fsck would deal with all the unreferenced files that are left on the disk.
This feels iffy to me, though, and I don't know enough detail to say whether this is a good idea or not. Just throwing it out there.
posted by chrismear at 2:18 PM on April 16, 2007
mv yourdirectory somethingelse && rm -rf somethingelse &
posted by MarcieAlana at 2:22 PM on April 16, 2007
Response by poster: I would love to kill the ls job, but I can't. The system is non-responsive and is not accepting new connections.
posted by PercussivePaul at 2:23 PM on April 16, 2007
Thanks for the -d flag (didn't know about that). The problem remains, though: next time your system requires an fsck, you're gonna end up with lots of crap in lost+found which you'll have to remove anyway. I suggest you rm -rf the directory, background the process, renice it, and just leave it running.
posted by aeighty at 2:24 PM on April 16, 2007
I don't think most modern filesystems still allow unlinking directories, and even if yours does, I suspect the fsck will be just as slow. Just move it off and rm -r it later.
It shouldn't be blocking you from logging in, though, unless the system has per-user process limits set up or something. It might just be slow, so try giving it some time.
posted by fvw at 2:29 PM on April 16, 2007
It may not wake up. Now is the time to fess up to your system administrator.
posted by grouse at 2:37 PM on April 16, 2007
Best answer: Yeah, there aren't many reasons why ls'ing a directory, even a huge one, should cause the system to outright hang unless it's thrashing (low enough on memory that all of the CPU time is being used trying to rearrange processes to fit). Unless everything else is blocking on disk IO, I guess, but that'd be an awfully slow disk.
One way to get away with "rm -rf" once you're back in, without crippling the system, is to use "nice" to set the priority of the "rm -rf":
nice -n 19 rm -rf directory
will run "rm -rf directory" at the lowest possible priority.
posted by mendel at 2:51 PM on April 16, 2007
Response by poster: Yeah, the admin sits beside me and he can't get into the system either. It's not actually refusing connections - it just hangs after you type the password. He thinks the disks are thrashing or something like that. And we were going to hard reboot it but we just moved it to another server room and the guy with the key isn't here... fortunately the server is one used only by our research group so I didn't screw up anything important. :) Thanks guys.
posted by PercussivePaul at 3:02 PM on April 16, 2007
Just FYI [and this may only apply to Linux systems]: if the issue were simply too many files being present, you would probably get a "too many files" error when issuing a command on them. It could be the size of the files, though.
posted by melt away at 3:09 PM on April 16, 2007
Lots of core utilities can be stopped with ctrl-Z and then killed with kill %1, even if they're not responding to ctrl-C. So you might want to fire a ctrl-Z down your existing connection and see if you can persuade ls to see reason (it's probably the sorting of the directory listing that's taking up all the CPU time and possibly causing thrashing). If that works, then nice -n 19 rm -rf as mendel said.
posted by flabdablet at 3:39 PM on April 16, 2007
Response by poster: The guy with the key showed up. We reset the server and tried to delete the offending directory, but there are some residual filesystem glitches. It will be dealt with appropriately. Thanks.
posted by PercussivePaul at 4:01 PM on April 16, 2007
Glad you found a fix. One last thing for the search engines: many Unix file systems are designed so that operations on files in a directory are O(n), where n is the number of files in that directory. That makes deleting a directory with 100,000 files a very unpleasant O(n^2) mess. And since most of the work is going on in the kernel, a badly designed Unix can get into a bad state simply by trying to remove a bunch of logfiles.
File systems built since around 2000 tend to have fixed this problem. For instance, my memory is that Linux ext3 doesn't have the performance problem, whereas ext2 did.
On a crappy Unix, once you get a directory with too many files there aren't a lot of good solutions for cleaning it out. Booting single-user and doing an rm -r is one way. Or try to remove a few files at a time and then sleep. One time I had this problem, it was faster to copy all the other files off the filesystem and then reformat.
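In shell terms, that "a few at a time, then sleep" approach might look something like this sketch (assuming GNU find/xargs, log file names without whitespace, and a made-up path and batch size):
while files=$(find /path/to/logdir -maxdepth 1 -type f | head -n 500); [ -n "$files" ]; do
    echo "$files" | xargs rm -f   # remove one batch of files
    sleep 5                       # pause so the disk can serve everyone else
done
rmdir /path/to/logdir             # finally remove the (now empty) directory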
posted by Nelson at 1:59 AM on April 17, 2007
Fwiw, "rm" isn't the lowest level you have access to. If you need more control, you can write your own "rm":
Open the directory,
read an entry,
loop while the entry is valid (not at the end of the list):
    sleep for a few milliseconds,
    if it's a directory, recurse back at the top,
    remove it,
    read an entry.
That will take a while, but won't thrash your machine. Even better, add a "--be-very-polite" flag to GNU rm and submit a patch.
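A minimal shell sketch of that loop, in case it helps (the suggestion above is really about a small C program built on opendir/readdir, but the shape is the same; the fractional sleep needs GNU coreutils or similar, and the path is just a placeholder):
polite_rm() {
    for entry in "$1"/* "$1"/.[!.]*; do
        [ -e "$entry" ] || continue     # skip glob patterns that matched nothing
        sleep 0.01                      # be very polite between operations
        if [ -d "$entry" ] && [ ! -L "$entry" ]; then
            polite_rm "$entry"          # a subdirectory: recurse into it
        else
            rm -f -- "$entry"           # a plain file: remove it
        fi
    done
    rmdir -- "$1"                       # remove the now-empty directory
}
polite_rm /path/to/logdir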
posted by cmiller at 5:43 AM on April 17, 2007
Glad it's fixed... but moving the directory won't halt the ls. The ls has already opened the directory and is iterating through its entries; changing its name in the parent directory will do nothing.
On any unix I've come across, an open file handle (including a directory opened with opendir(3) or whatever) counts as a reference to the file, so you can delete the file and anything that has it open (ls, in your case) can still see it until it closes its descriptor.
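A quick way to see that behaviour for yourself (the file name is just for illustration):
echo "still here" > demo.txt
exec 3< demo.txt    # open the file for reading on descriptor 3
rm demo.txt         # unlink the name; the inode survives while fd 3 is open
cat <&3             # still prints "still here"
exec 3<&-           # close fd 3; only now is the disk space actually freed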
Hanging after you type the password makes it sound like you've run out of processes. Why, I do not know.
posted by polyglot at 7:02 AM on April 17, 2007
find /the/directory -exec rm -f {} \;
rm -rf /the/directory
posted by imagesafari at 4:50 PM on April 17, 2007
I had to deal with this problem, and the guy from Dreamhost support suggested:
find . -type f -exec rm -v {} \;
...which is the same basic idea that others had, but also shows you the progress of the command.
posted by smackfu at 1:19 PM on December 13, 2007
This thread is closed to new comments.