Something is spewing files on our box
October 17, 2007 6:14 PM   Subscribe

On our Linux web server, for some reason /var/lib/php5 has filled up with files to the point that we can't even get a directory listing. We are running out of inode space for the filesystem. How can we get a directory listing to find out which files need to be removed, and remove them?
posted by Dag Maggot to Computers & Internet (7 answers total) 2 users marked this as a favorite
Best answer: Does

find /var/lib/php5

not give you a listing? If so, what's the error you get?
posted by eschatfische at 6:21 PM on October 17, 2007

Response by poster: Ah yes, that did it - thanks heaps! Turns out they are php session files.
posted by Dag Maggot at 6:32 PM on October 17, 2007
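
For anyone who lands here later with the same problem, a minimal sketch of pruning the session files with find, demonstrated on a scratch directory. The sess_* pattern is PHP's default session-file naming, but check your own session.save_path before running anything against the real directory:

```shell
# Sketch: delete stale PHP session files without ever producing a
# full listing. find streams the directory, so this works even when
# ls cannot. In real use, add something like -mtime +1 so only
# sessions older than your session lifetime are removed.
dir=$(mktemp -d)
touch "$dir/sess_abc" "$dir/sess_def" "$dir/notasession"

# Delete only files matching the session-file pattern.
find "$dir" -type f -name 'sess_*' -delete

ls "$dir"        # only notasession is left
rm -r "$dir"
```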

Stop the webserver.

Try: 'find /var/lib/php5 -ls > /tmp/bad_dog_bad'
Look through the output in /tmp/bad_dog_bad.

Then 'find /var/lib/php5 -inum XXX -delete'.

You may need to read the 'find' man page for the details. Everyday tools like 'ls' try to read the entire directory and sort it before printing anything; 'find' just walks the directory structure in its "native" order and prints entries as it goes.
$ ls . | head
02 Track 2.wma

$ find . | head
Good luck!
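
The inode-based delete can be tried end to end on a throwaway directory; GNU find spells the test -inum, and it wants a starting path:

```shell
# Sketch: delete a file by inode number, the trick suggested above.
# Handy when a filename contains characters the shell mangles.
dir=$(mktemp -d)
touch "$dir/victim"

# 'ls -i' prints the inode number before the name; awk grabs it.
inode=$(ls -i "$dir/victim" | awk '{print $1}')

# -inum matches by inode; -delete removes the match in one pass.
find "$dir" -inum "$inode" -delete

ls "$dir"        # directory is empty now
rm -r "$dir"
```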

(on preview, how do you make <pre> tags not double space?)
posted by zengargoyle at 6:36 PM on October 17, 2007

I've always been partial to

for i in *; do ls "$i"; done

That way it doesn't try to buffer everything into memory at once; it runs one 'ls' per file. Pretty lightweight.
I use it to comb through directories of 250,000+ files.

Find can be kind of CPU intensive.

posted by fnord at 7:05 PM on October 17, 2007 [1 favorite]

I wonder if ls -U would work as well as find? (I don't have a large directory to test this on.) The -U is "unsorted, directory order".
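
A quick way to try it on any directory (using a scratch one here): -U emits entries in raw directory order, -1 forces one name per line, and -i prepends inode numbers, which is exactly what you'd want before an inode-based delete:

```shell
# Sketch: unsorted listings with GNU ls. Output order is whatever
# order the directory stores its entries in, so it may vary.
dir=$(mktemp -d)
touch "$dir/b" "$dir/a"

ls -U1 "$dir"    # one name per line, no sorting
ls -Ui "$dir"    # same entries, inode numbers included

rm -r "$dir"
```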

With preview, it looks like the <code> tags don't double space things.
posted by philomathoholic at 7:16 PM on October 17, 2007

It is buffering it all into memory; it's just the shell doing the work instead of 'ls', and it skips sorting the results. 'for i in *' will read the entire directory before it starts passing individual filenames to the 'ls' command.

'for i in *; do ls $i; done' will break most shells on a directory this size. The shell expands the '*' into the full list of filenames before the loop runs (and without the sorting that plain 'ls' does, 'echo *' would accomplish the same thing). If a plain 'ls' wouldn't work, or would take forever with tons of swapping, 'for i in *' won't work either: same problem, since the whole directory has to be read before processing starts. It's probably the sorting and the memory swapping that make things grind to a halt.

The beauty of 'find' is that it works at a lower level. It reads the directory inode, then it starts with the first file inode and processes, then on to the next file inode (or recurses into another directory inode). 'find' might take a long time, but it will always be faster than any other method, and use almost no memory. 'find' is your friend.
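
The streaming behavior is easy to see: because find prints each name as it reads it, you can pipe the output through another tool and the pipeline's memory use stays flat no matter how many files there are. A small sketch, counting entries without ever holding the full list in shell memory:

```shell
# Sketch: stream filenames from find instead of buffering a glob.
dir=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$dir/f$i"; done

# Each name flows through the pipe as find emits it; wc just counts.
find "$dir" -type f | wc -l    # counts the 5 files

rm -r "$dir"
```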
posted by zengargoyle at 7:19 PM on October 17, 2007

WOW, seems 'ls -U' is supreme. (boring stuff follows).

$ mkdir blah
$ cd blah
$ i=0; while true; do touch $i; i=$((i + 1)); done
( wait a while and Ctrl-c)

$ ls | sort -n | tail -1
(the highest numbered filename, i.e. roughly how many files are in this directory)

$ time ls -Ui > /tmp/junk
real 0m0.364s

$ time find . > /tmp/junk
real 0m0.399s

$ time ls > /tmp/junk
real 0m0.834s

$ time find . -ls > /tmp/junk
real 0m2.143s

$ time for i in *; do ls $i; done > /tmp/junk
real 4m24.637s

$ time echo * > /tmp/junk
real 0m1.375s

'ls -Ui' is now added to my cool fast things... Never thought that 'ls -U' would be faster than 'find .'.

(sorry, I spend too much time optimizing scripts.)
posted by zengargoyle at 7:58 PM on October 17, 2007 [2 favorites]
