Linux server monitoring
July 24, 2023 8:41 PM   Subscribe

I need to detect filesystem changes across several dozen machines. Difficulty level: no root access. More inside.

Because reasons, I am partly responsible for detecting changes across several dozen Linux machines, without having root access per se. (We do have a shared admin account with certain privileges.) I'd like to set up some kind of automation to alert me whenever someone makes a change to any file on any of the systems in question--or, rather, to catch it every time it happens and then send me a summary email-or-whatever once a day, so that I'm not being constantly pinged.

Checking file-modification times is/was a good start. However, I also need to know if files were deleted outright--in which case they would not show up in [ls -lh], [find], or really any solution I know of.

I was thinking about [ls -lh]ing every machine once a day, saving the results off somewhere else, and then finding the differences between e.g. today's results and yesterday's, to detect any files that were added/modified/deleted. But given the size of some of these systems, it's taking forever--not just to code, but also to run. Long story short, I'm hoping someone has already designed an app for this, which would be faster--in terms of installation and actual runtime--than what I'm building.

Requirements:
-Must run on Unix
-Must print out results in some emailable form (so that I can read the output once a day); preferably as .csv or .txt
-low cpu/mem footprint preferred

Bonus: many of these systems have daily auto-updates that don't particularly interest me. What I'm trying to catch is individual users making one-time changes. (No, I cannot lock them out.) If an app for this exists, I'm hoping it would also let me filter out some run-of-the-mill changes (which I would specify by hand as not-important), while escalating only the truly uncommon ones to my attention.
posted by queen anne's remorse to Technology (16 answers total) 2 users marked this as a favorite
 
Best answer: What flavor of Linux? If you have a Red Hat distro you should already have the auditd service, which will allow you to track file deletion. (If not, and you have access, you should be able to get it with "yum install auditd".) Configure it to only track file deletes in the directories / paths you want, and query daily using the ausearch command. You might have to adjust the search over time to hide deletes that occur normally which you don't care about.

Beyond that, depending on the extensiveness / size of your directory tree, using find to locate newly changed files would be the simplest thing I can think of. Just run once a day and 'find' files modified within 1 day. Both 'find' and 'ausearch' just create some standard text output that you can pipe to a file and e-mail to yourself. You should also be able to easily parse these outputs (grep + awk + sed?) to make something a little more 'report' like if you wish.
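A rough shell sketch of that daily query, with a placeholder directory and a placeholder audit key ("app-watch"); note the auditctl watch rule itself has to be loaded by someone with root:

```shell
# DIRS and the audit key "app-watch" are placeholders.
# An admin with root would first load a watch rule, e.g.:
#   auditctl -w /srv/app -p wa -k app-watch
DIRS="/srv/app"

# Files modified within the last day:
find $DIRS -type f -mtime -1 -printf '%TY-%Tm-%Td %TH:%TM  %p\n' 2>/dev/null

# Deletions recorded by auditd since yesterday (PATH records show nametype=DELETE):
ausearch -k app-watch --start yesterday -i 2>/dev/null | grep -i delete
```

Pipe either command into a dated file and mail it to yourself from cron; grep/awk/sed can then trim it into something more report-like, as suggested above.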
posted by SquidLips at 9:04 PM on July 24, 2023 [3 favorites]


Best answer: Check out inotifywatch from the inotify-tools package.
posted by mbrubeck at 9:07 PM on July 24, 2023 [5 favorites]


Best answer: To keep watchers from exhausting kernel resources, inotify imposes default per-user limits (8192 watches, if I remember right) that only root can raise. If you're aiming to track every file on the system, that may be an issue. Unless the scope of what you need to track/watch can be constrained, this is probably something you will need to talk to someone with root privileges about, unfortunately.
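You can check the ceiling on a given box without any privileges. For recursive watching, inotify consumes one watch per directory, so count directories rather than files (/srv/app here is a placeholder tree):

```shell
# Per-user watch limit on this machine (often 8192 on older kernels):
cat /proc/sys/fs/inotify/max_user_watches

# Watches are consumed per directory when recursing, so compare the limit
# against the directory count of the tree you care about:
find /srv/app -type d 2>/dev/null | wc -l
```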
posted by They sucked his brains out! at 9:12 PM on July 24, 2023


Best answer: Is there any sort of security logging service on these machines, which sees filesystem events? I am only familiar with Unix Systems Services which has a *nix interface on z/OS machines, but on those systems there's a System Management Facility (SMF) which most pieces of software use for logging events, accounting information, and other things including security events. A system-level logging facility would let you read the security files (with appropriate access rules) at the end of each day and create your summary of the events.
posted by TimHare at 9:37 PM on July 24, 2023


Best answer: In the olden days we used tripwire for this sort of thing.
posted by wierdo at 10:08 PM on July 24, 2023 [3 favorites]


Best answer: From memory, AIDE was a free command-line equivalent to Tripwire. It looks like the man page hosting is messed up, so here's a mirror of the manual.
posted by Pronoiac at 2:51 AM on July 25, 2023 [2 favorites]


Best answer: Fully acknowledging this answer is somewhat tangential to the question as asked, but the Bonus paragraph at the end has me wondering a bit.... any chance the subset of files is small enough or specific enough to convince management that a code repo like Github is in order? At least for some of the files? Even if there's no sort of build system, can you let people continue to make their changes locally but also ask them to change the repo copy too? Or could the local copy be a checked-out copy of the repo and you commit the changes?

The thought of tracking changes across a dozen servers made by randomish people at randomish times without a repository sounds like a nightmare. Hopefully one of the more direct solutions mentioned above works out, but if a repo hasn't been considered (or hasn't been considered lately given possible changes in the overall context) I offer it as an idea.
posted by Press Butt.on to Check at 5:02 AM on July 25, 2023


Best answer: I was going to suggest auditd as well.
posted by number9dream at 6:04 AM on July 25, 2023


Best answer: What I'm trying to catch is individual users making one-time changes. (No, I cannot lock them out.) If an app for this exists, I'm hoping it would also let me filter out some run-of-the-mill changes (which I would specify by hand as not-important), while escalating only the truly uncommon ones to my attention.

I don't think I understand your risk/threat model in this, and can't see a solution that is going to be effective without elevated permissions.

That said, this kind of thing is why remote & centralized logging exists, and when it shines.

You can use auditd as mentioned above or inotifywait to monitor directories for changes in folders to which the user has access permissions. A combination of this and a centralized logging server like syslog-ng will give you an easy way to see filesystem-level changes as they happen and automatically, rather than your doing a bunch of weird custom work every day. It also feeds into existing tools that will enable you to do larger trend analysis over time with a higher-level view of the entire set of machines.
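A minimal sketch of that pipeline, assuming inotify-tools is installed; the watched path and the "filemon" syslog tag are placeholders:

```shell
# Stream filesystem events into syslog, where syslog-ng/rsyslog can
# forward them to a central server. /srv/app and "filemon" are placeholders.
inotifywait -m -r -e create,modify,delete,move \
    --timefmt '%F %T' --format '%T %w%f %e' /srv/app |
while read -r line; do
    logger -t filemon "$line"
done
```

With central collection in place, the daily email becomes a query against the log server instead of per-machine scripting.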

I second Press Butt.on to Check's point that some clarity about what you're trying to accomplish here (do you want to convince management of something? Do you want to figure out who is downloading something they shouldn't or sabotaging other people's work?) would help us figure out what you need. But you need to approach this like you want to use existing, well-established resources to do the work for you, not like you want to roll your own weird one-off thing.
posted by mhoye at 6:24 AM on July 25, 2023 [1 favorite]


Best answer: Isn't this the sort of thing that SE-Linux and FLASK are for?

See, for instance, Enhancing Linux security with Advanced Intrusion Detection Environment (AIDE).

Advanced Intrusion Detection Environment (AIDE) is a powerful open source intrusion detection tool that uses predefined rules to check the integrity of files and directories in the Linux operating system. AIDE has its own database to check the integrity of files and directories.

AIDE helps monitor those files that are recently changed or modified. You can keep track of files or directories when someone tries to modify or change them.
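For the "filter out run-of-the-mill changes" part of the question, AIDE's config lets you exclude expected churn with negated selection lines. A minimal aide.conf sketch, where every path, rule name, and database location is illustrative:

```
# Where AIDE reads and writes its integrity database:
database_in=file:/var/lib/aide/aide.db
database_out=file:/var/lib/aide/aide.db.new

# Track permissions, inode, owner, group, size, mtime, and a sha256 hash:
Watched = p+i+u+g+s+m+sha256

/srv/app Watched
# Negated lines exclude expected churn (logs, caches, auto-update targets):
!/srv/app/logs
!/srv/app/cache
```

The usual cycle is `aide --init` once to build the baseline, then `aide --check` daily, mailing yourself the report it prints.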


The buzzword you want to search for is 'File Integrity Monitoring.'

several dozen Linux machines....must run on Unix

What actual Linuxes does it need to run on?
posted by snuffleupagus at 6:40 AM on July 25, 2023 [1 favorite]


However, I also need to know if files were deleted outright--in which case they would not show up in [ls -lh], [find], or really any solution I know of.

I think one of the direct solutions above is likely going to be a lot better if it can work, but it's worth pointing out that simply saving a file index with modification times and then diffing it against yesterday's would detect changes and deletions with no special permissions at all, though it's far from instant or efficient. I believe rsync has some logging modes that might already be geared to what you want, but find can generate this for sure.
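A no-privileges sketch of that save-and-diff idea; SNAPDIR and WATCHED are placeholders, and each snapshot line records path, mtime, and size:

```shell
# Snapshot the tree once a day, then diff against the previous snapshot.
SNAPDIR=$HOME/fs-snapshots
WATCHED=/srv/app
mkdir -p "$SNAPDIR"

today=$SNAPDIR/$(date +%F).txt
find "$WATCHED" -type f -printf '%p\t%T@\t%s\n' 2>/dev/null | sort > "$today"

yesterday=$SNAPDIR/$(date -d yesterday +%F).txt
if [ -f "$yesterday" ]; then
    # "<" lines existed yesterday but not today (deleted or changed);
    # ">" lines are new or changed today.
    diff "$yesterday" "$today"
fi
```

Deleted files show up as "<" lines present only in yesterday's snapshot, and the diff output is already plain text you can mail.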

Agreed that knowing the actual goals would help answerers here…
posted by advil at 6:41 AM on July 25, 2023


Best answer: If this were me, I would use groups + sudo and force anyone who wants to do changes to go through sudo, and then centralize the sudo logs with something like fluentd. But setting up something like that would require root on the boxes so you'd have to coordinate with your app engineering team or whoever manages the infrastructure.
posted by cmm at 6:56 AM on July 25, 2023


Best answer: Aside from AIDE on SE Linux, Samhain is a currently maintained open-source project that should work across Linux and other Unix environments, with a consulting arm for paid support. Per its FAQ it's built on top of the inotify facility mentioned above.

OSSEC appears to be another option; though it's a fuller IDS, not just for file integrity.
posted by snuffleupagus at 7:07 AM on July 25, 2023


I’m also interested in what you are trying to solve. One billion years ago, I was asked for a similar solution for a lab that I administered, but it turned out that it was far more effective to use a tool that restored the computers to a known good state on restart. Then we didn’t have to care what our users did, it was all blown away regularly.
posted by rockindata at 7:53 AM on July 25, 2023 [2 favorites]


Best answer: Yeah, a use case would be helpful. The solution might be restricting anything touchable by users to revertable and/or copy-on-write filesystems you can easily audit and roll back. Rather than getting up your own ass doing Stasi sysadmin across Too Many Hosts.
posted by snuffleupagus at 6:29 PM on July 25, 2023


Best answer: A recent Reddit discussion with some cruder solutions that might be adjusted for your needs: https://www.reddit.com/r/linuxadmin/comments/159z0ug/track_new_files_newfilestxt/
posted by wenestvedt at 5:40 AM on July 26, 2023

