Help me perform forensics on a Linux botnet member.
May 25, 2011 7:06 PM   Subscribe

Please share your resources and tips for analyzing a compromised Linux system for information about the attack.

A Linux system running in VMware was compromised via the internet and loaded with IRC bot software. I've shut the system off and made a read-only copy of the disk for analysis.

I've attached it to a fresh install of Ubuntu in a separate VM and have been poking around the filesystem to gain information about how the attack worked. So far, I have a list of commands in /root/.bash_history that show a few commands to gather system information, a few ssh connections to presumably pull files, and I have the actual IRC bot software tarball and its associated configs.
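
(For reference, the copy is attached as a second virtual disk and mounted read-only, something like this; the device name is whatever your analysis VM assigns:)

sudo mkdir -p /mnt/vendor
sudo mount -o ro,noexec,nodev /dev/sdb1 /mnt/vendor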

What I can't figure out is the steps that might have been taken to gain access to the system to start with.

What resources are there online to help me compare the system's binaries and services against known vulnerabilities to help pinpoint which ones may have been used to gain initial access?

I have the capability to boot the infected system in a quarantine environment if necessary.

I'm not ruling out something simple like password brute-forcing, since this is a vendor-supplied special purpose Linux system with really silly default passwords and, apparently, no firewall, but any help along the lines of log files to examine to establish timestamps for the attack would be welcome.
posted by odinsdream to Computers & Internet (20 answers total) 12 users marked this as a favorite
 
Oh, also, are there any community projects that would be interested in having the IRC bot code itself?
posted by odinsdream at 7:08 PM on May 25, 2011


Take a look at Tripwire.
posted by Blazecock Pileon at 7:22 PM on May 25, 2011


Was this thing running a typical web server? If so, the first thing you should be looking at is scripts, e.g. guestbooks, blogs, stat counters, photo albums. Those are notorious for having vulnerabilities that let the attacker onto the system to download and install bots. The Apache error logs are also a good place to look.
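
If it helps, a rough first pass (paths are the Debian/Ubuntu defaults; adjust for wherever you've mounted the disk):

awk '{print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20
grep -iE 'wget|curl|\.\./|%00' /var/log/apache2/error.log

The first tallies requested URLs, since exploit scanners hammer the same vulnerable script over and over; the second looks for telltale download commands and traversal attempts in the errors.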
posted by Rhomboid at 7:47 PM on May 25, 2011


Apache error and access logs. Plus, look for suspicious things in /tmp. For example, executables and perl/sh scripts shouldn't be there.
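
Something like this, pointed at wherever you've mounted the compromised disk, will list anything executable or script-ish in there:

find /mnt/vendor/tmp -type f \( -perm /111 -o -name '*.pl' -o -name '*.sh' \) -ls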
posted by sbutler at 8:43 PM on May 25, 2011


Back when I did a little more SA stuff, I made a habit of installing Rootkit Hunter on most of my boxes. I see it still appears to be actively updated. Nothing fancy - just a shell script that looks for indicators of common malware - but it might well find something.
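
Roughly:

sudo rkhunter --update
sudo rkhunter --check --skip-keypress

Bear in mind it's designed to run against the live system, so it's most useful in a quarantine boot rather than against the mounted copy.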
posted by brennen at 11:13 PM on May 25, 2011


Tripwire is only useful if it was installed before the hack, right? I would look in /var/log/apache2 as sbutler suggests. Try to salvage your apps and data onto a secure machine, then junk that one. Run netstat and top to see the IPs and processes currently active. Check file modification dates: 'ls -lt | more' in a folder shows you the newest files first. /var/log/messages or /var/log/syslog will probably have a record of remote logins. Note that older data is rotated, i.e. messages, messages.1, messages.2...
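
For example (netstat and top only tell you anything on the running system, so save those for a quarantine boot; the ls works fine on a mounted copy):

netstat -tupan | grep ESTABLISHED
ls -lt /etc | head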

That's every admin's nightmare.
posted by nogero at 11:17 PM on May 25, 2011


There's a lot of password brute-forcing about these days. I know several people whose weak Gmail passwords have let in black hats.

If it was a brute-force attack, you're likely to see a lot of occurrences of the word "failure" in /var/log/auth.log and its archived companions.
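
A quick tally of where the attempts came from, assuming the stock sshd log format:

grep 'Failed password' /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head

(The attacking IP always sits four fields from the end of those lines, hence the awk.)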

Debian-based systems, including Ubuntu, wipe /tmp on restart, so don't boot it again until you have a copy.
posted by flabdablet at 2:22 AM on May 26, 2011


What services were you running on the box? If you had logging enabled, did you ensure that the logs were sent over a socket to a remote box? If you didn't, the attacker could have cleaned up and/or manipulated the logs, and you can't really trust them.
  • If you had a database, e.g. MySQL or PostgreSQL, did you permit remote access and bind the database to your public internet IP address? If you did, did you also configure logging of failed authorisation attempts to the database? What versions of the databases were you running?
  • Were you running an HTTP server? What version? Have you checked the access logs for HTTP activity that was unusual in the run-up to the breach? Stuff like unusual POSTs, or weird GETs with "DROP ;" inside the queries? Did you even have verbose access logging enabled?
  • Were you running an FTP server? etc.
  • To what extent were you keeping your Ubuntu distro up to date? What version were you running? Were you on an Ubuntu Long Term Support (LTS) branch? When did you last apply patches? Of the patches you hadn't applied, did any affect your system?
  • Did you allow password logins via SSH, or did you lock it down to permit only public-key logins? Were SSH logins permitted from any IP address, or only from IP addresses you controlled? Did you install fail2ban or denyhosts? If you did, did you check their output?
Basically, run "ps -eaf" and, for every process in the list, think "Hmm. What is it? Is it publicly accessible? What version is it? Is it up to date? Where does it log? How does it log?"
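
A minimal version of that audit, assuming a Debian-ish userland (the package names in the dpkg line are just examples):

ps -eaf
netstat -tlnp
dpkg -l openssh-server apache2 mysql-server

ps shows what's running, netstat -tlnp shows which of those are actually listening on the network, and dpkg -l gives you installed versions to check against advisories.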
posted by asymptotic at 2:32 AM on May 26, 2011


I'm not ruling out something simple like password brute-forcing, since this is a vendor-supplied special purpose Linux system
Who's the vendor? Have you ruled out an attack by a disgruntled employee from the vendor? Does the vendor have some contractual ability to remotely access your machine? Nothing like insider knowledge when making effective attacks.
posted by asymptotic at 2:40 AM on May 26, 2011


asymptotic; just to clarify: This is not a system I set up or configured. It's a turnkey Linux-based system provided by a vendor. Everything I'm learning about the configuration and setup is a result of exploration, but logs were definitely not being shipped anywhere. So far it looks like the attacker wasn't cleaning up any tracks at all.

So, again, my question isn't about the Ubuntu system. That's the environment I set up just yesterday to mount the disk from the infected system.

The SSH logs have timestamps in a format I'm not familiar with, so it would be helpful to figure out what times these things represent:
drwxr-s---.  2 root  400 4.0K 2011-05-01 21:34 .
drwxr-xr-x. 47 root root 4.0K 2011-05-25 12:00 ..
-rw-r--r--   1 root  400  23K 2010-12-25 03:39 @400000004d16c44623b19c84.u
-rw-r--r--   1 root  400  53K 2010-12-26 03:27 @400000004d17199e080352ac.u
-rw-r--r--   1 root  400 485K 2011-01-04 01:27 @400000004d236fb02eee4e1c.u
-rwxr--r--   1 root  400 4.8M 2011-02-02 22:32 @400000004d4a21c421bd2754.s
-rwxr--r--   1 root  400 4.8M 2011-02-20 09:26 @400000004d6124a11958a7b4.s
-rwxr--r--   1 root  400 4.8M 2011-03-07 06:54 @400000004d74c7680f808f2c.s
-rwxr--r--   1 root  400 4.8M 2011-04-05 03:26 @400000004d9ac44733486524.s
-rwxr--r--   1 root  400 4.8M 2011-05-01 21:34 @400000004dbe0a3031408b1c.s
-rw-r--r--   1 root  400 3.2M 2011-05-25 05:20 current
-rw-------   1 root  400    0 2010-12-21 04:28 lock
-rw-r--r--   1 root  400    0 2011-01-04 14:06 state

posted by odinsdream at 5:32 AM on May 26, 2011


Which directory is that?
posted by flabdablet at 6:21 AM on May 26, 2011


/var/log/sshd

Each line in the files uses a similar timestamp format:
@400000004d14126605b29094 Failed password for root from 59.106.187.118 port 60202 ssh2
@400000004d141266115347a4 Received disconnect from 59.106.187.118: 11: Bye Bye
@400000004d1412691d8daa8c Failed password for root from 59.106.187.118 port 60619 ssh2
@400000004d141269296c02f4 Received disconnect from 59.106.187.118: 11: Bye Bye
@400000004d14126d1cf80dc4 Failed password for root from 59.106.187.118 port 60788 ssh2
@400000004d14126d291f90a4 Received disconnect from 59.106.187.118: 11: Bye Bye
The other logs, like secure and messages, use a YYYYMMDDHHMMSS format.
posted by odinsdream at 6:41 AM on May 26, 2011


Those timestamps look very much like the ones created by daemontools (a collection of tools for managing services, written by D. J. Bernstein). The format is tai64n...see here for some details. My guess is that sshd is under management by daemontools or something similar. The idea is that the management widget will monitor and restart daemons automatically should they fail.
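
If you just want to eyeball one by hand: after the leading @40000000, the next 8 hex digits are close enough to Unix seconds, so something like

date -u -d @$((16#4d141266))

gets you in the ballpark (TAI runs a handful of seconds ahead of UTC, so don't trust it to the second). The proper tool is tai64nlocal, which ships with daemontools.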
posted by jquinby at 7:14 AM on May 26, 2011


Do you have an earlier snapshot of the VM, or its original install image, or anything like that? Mounting its disk alongside the compromised system's and doing a recursive compare might be the simplest way to zero in on any modified files.
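
Assuming both disks end up mounted side by side, even a plain

diff -qr /mnt/clean /mnt/vendor > changed-files.txt

would give you a worklist. Expect noise from logs and /var, but modified binaries and configs will show up in it too. (The /mnt/clean mount point is hypothetical; use your own.)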
posted by hattifattener at 8:12 AM on May 26, 2011


jquinby: Thanks for the timestamp tip! I'm installing daemontools now so I can convert them into a human-readable format.

hattifattener: I will have that soon. The vendor is scheduling an on-site visit to re-install the system, at which point I'll have a clean image to compare against.

Lovely hearing that according to them the system is "totally secure" and this kind of thing has "never happened to any other system."

Also that it had a firewall permitting access only from their IP addresses, which the logs clearly disprove.
posted by odinsdream at 8:43 AM on May 26, 2011


Yay! Here's the human-readable directory listing from earlier.
root@ubuntu:/mnt/vendor/var/log/sshd# ls | tai64nlocal
2010-12-25 23:27:40.598842500.u
2010-12-26 05:31:48.134435500.u
2011-01-04 14:06:14.787369500.u
2011-02-02 22:32:10.566044500.s
2011-02-20 09:26:31.425240500.s
2011-03-07 06:54:06.260083500.s
2011-04-05 03:26:53.860382500.s
2011-05-01 21:34:30.826313500.s
current
lock
state
And there we have the successful logins in a non-cryptic format:
2011-04-05 03:26:53.860382500.s:@400000004d89561f257f9a0c Accepted password for root from 121.14.5.111 port 49988 ssh2
2011-04-05 03:26:53.860382500.s:@400000004d89b83f16994d1c Accepted password for root from 92.81.48.85 port 10962 ssh2
2011-04-05 03:26:53.860382500.s:@400000004d89b9440f7f81a4 Accepted password for root from 92.81.48.85 port 10979 ssh2
China and Romania, respectively.
posted by odinsdream at 8:54 AM on May 26, 2011


The vendor is scheduling to come on-site to re-install the system, at which time I'll have a clean image to compare to.

You might want to install Tripwire at that time. If your system gets compromised again, you can quickly find out which files were changed.
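
Very roughly, once the initial key and policy setup is done:

sudo tripwire --init
sudo tripwire --check

--init snapshots the known-good system into Tripwire's database; --check later reports anything that no longer matches it.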
posted by Blazecock Pileon at 2:03 PM on May 26, 2011


You might also want to kick the vendor brutally and repeatedly with your steel-capped hobnailed boots until they agree to disable password-based ssh logins. Especially root logins. Jesus.
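
The relevant knobs in /etc/ssh/sshd_config, if the vendor will let you near it:

PermitRootLogin no
PasswordAuthentication no

(then restart sshd).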

That business about "never happened to any other system" probably translates as "no other customer has complained to us about this" which, depending on sales volume and levels of Clue available at their other customer sites, I might be prepared to believe. I've acquired a troublemaker label with one of our own vendors simply by consistently pointing out glaringly obvious flaws; apparently most of their other customers (all of them are schools) are even more technically clueless than they are.
posted by flabdablet at 3:21 PM on May 26, 2011


I haven't heard back from them after sending them our security incident report. It's been... interesting... how they've reacted to a detailed examination of the basic flaws in the system security and configuration.

Thanks to all for the helpful tips and information to get this problem fully diagnosed.

Does anyone know of any communities that would be interested in the IRC bot code for research purposes?
posted by odinsdream at 6:32 AM on May 27, 2011


You might send it along to the folks at SANS...their Contact Us page has a place to attach malware for analysis. I took a deep packet inspection course from them awhile back, and they seem to be Good Folks.
posted by jquinby at 6:42 AM on May 27, 2011

