Responding to a Brute Force SSH Attack 

Dec 03, 2008 02:00 AM

by Jamie Riden

It was a bad start to a Monday morning: I arrived at work to find the intrusion detection system so bogged down in alerts that it was barely responsive.

Something bad had happened over the weekend. The IDS — in this case, a couple of Snort sensors logging to a PostgreSQL database — had been extremely busy logging alerts over pretty much the whole weekend. To review the alerts, I used the BASE front-end, and it was BASE that was taking so long to tell me anything, since it was querying a database around ten times larger than I had originally envisaged using in production.

A few minutes of digging in the BASE console suggested that most of the 200,000 alerts had been generated by the "Potential SSH Scan" rules from Bleeding Threats. Since the usual daily load was nearer 20,000 alerts, it was a fair guess that a lot of malicious activity had been going on over the weekend. The Snort rules that were firing were mainly the second of the two shown below, although which one fires will depend on the location of your Snort sensor and how you have $HOME_NET and $EXTERNAL_NET defined.

  #Submitted by Matt Jonkman
  alert tcp $EXTERNAL_NET any -> $HOME_NET 22 \
    (msg: "BLEEDING-EDGE Potential SSH Scan"; \
    flags: S; flowbits: set,ssh.brute.attempt; \
    threshold: type threshold, track by_src, count 5, seconds 120; \
    classtype: attempted-recon; \
    reference:url,en.wikipedia.org/wiki/Brute_force_attack; \
    sid: 2001219; rev:14;)

  alert tcp $HOME_NET any -> $EXTERNAL_NET 22 \
    (msg: "BLEEDING-EDGE Potential SSH Scan OUTBOUND"; \
    flags: S; flowbits: set,ssh.brute.attempt; \
    threshold: type threshold, track by_src, count 5, seconds 120; \
    classtype: attempted-recon; \
    reference:url,en.wikipedia.org/wiki/Brute_force_attack; \
    sid: 2003068; rev:2;)

For those of you who don't speak Snort, these rules fire an alert when the rate of connection attempts to port 22/tcp from a particular source goes over a certain threshold (here, five SYN packets within 120 seconds). In other words, one of the hosts on site was port-scanning other machines all over the Internet.

This particular site's network had a collapsed backbone topology, which makes traffic monitoring a fairly straightforward affair, as long as you can remember a few IOS commands to set up your SPAN port and have a server capable of dealing with a couple of hundred megabits per second of IP packets. I had also taken the precaution of loading the entire Bleeding Threats Snort ruleset onto the IDS and had gradually tuned it over the previous few months, so I was fairly certain that I had a real problem at this stage.
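For reference, setting up such a SPAN port on a Cisco Catalyst switch usually amounts to something like the following; this is a sketch only, and the exact syntax and interface names depend on your platform and IOS version:

  conf t
  monitor session 1 source interface gigabitEthernet 0/1 both
  monitor session 1 destination interface gigabitEthernet 0/24
  end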


Incident Response

One thing that helped immensely at this point was my database of machines, consisting of IP and MAC addresses, host names and the people responsible for each resource. Since a TCP handshake cannot, in general, complete if the source address is forged, I could simply look up the owner of the offending IP address in the alerts and then find that person's phone number in the internal telephone directory.
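To give an idea of how simple that lookup can be, here is a sketch assuming a home-grown PostgreSQL table along the lines of hosts(ip, mac, hostname, owner, phone); the database, table and column names are purely illustrative:

  # Who owns the machine generating the alerts? (illustrative schema)
  psql -d assets -c "SELECT hostname, owner, phone FROM hosts WHERE ip = '192.0.2.15';"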

The owner was surprised to hear of the activity, so I immediately disconnected the machine from the network. Here, again, it was great to have a database mapping the MAC address of the machine to the particular Ethernet switch it was connected to. This information was updated nightly by a script which walked all our Cisco switches and noted which MAC addresses had been seen on which network interfaces. (If you would like something similar, netdisco is a more polished solution than our home-brewed scripts.)
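A minimal sketch of the same idea, assuming SNMPv2c read access and the standard BRIDGE-MIB (netdisco does this far more thoroughly):

  # For each switch, dump the MAC-address-to-bridge-port forwarding table.
  # On Cisco switches you may need to index the community per VLAN (e.g. public@10).
  for sw in switch1 switch2 switch3; do
      echo "== $sw =="
      snmpwalk -v2c -c public "$sw" BRIDGE-MIB::dot1dTpFdbPort
  done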

I then logged into the upstream switch and typed in my favorite, and most used, IOS command:

  conf t
  interface fastEthernet 0/15
  shutdown

I can only recommend you try to build similar databases so you can look up equipment details quickly and easily during an incident. That way, you can find computers which are not where they're meant to be, are spoofing their IP address, or have decided to become an authoritative DHCP server for your whole network.

For example, p0f is an excellent tool to run on a SPAN port so you can see exactly what hosts are active at what times — the only problem is that the size of the output is huge. An example of dealing with p0f output on a SPAN port is described in Taming p0f by chunk processing STDIN. This data can then be imported into your favorite database.
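A typical capture invocation, assuming p0f v2 syntax and a monitor interface named eth1 (check your version's man page for the exact flags):

  # Passively fingerprint everything seen on the SPAN port and log it to a
  # dated file for later import into the database.
  p0f -i eth1 -l -o /var/log/p0f/$(date +%F).log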

At this point I booted the computer using a Knoppix-based distribution so I could take a look at the filesystem. If you think there's any chance you're going to end up in court, you need to be a lot more careful than I was. In this case, I correctly guessed that the damage was minimal and the chance of prosecution was nil. These days I would use the BackTrack bootable CD, which is designed specifically for this sort of thing.

Evidence Gathering

Of course, by powering off the computer, I lost any information that was stored only in RAM, but I was after a quick analysis in case I had other computers to look into — at this stage it was not clear how widespread the problem could be. One of my favorite forensics texts is Forensic Discovery by Dan Farmer and Wietse Venema, which goes into a lot more detail about recovering information from disk and memory. You should at least keep a log book of what you're doing at each stage, and keep an audit trail of each piece of evidence you collect.

Initially, the contents of the /var directory were captured to a laptop via a network cable and the invaluable netcat utility.

  server% nc -l -p 31337 > evil.tar
  victim% tar cvf /tmp/evil.tar /var ; \
          cat /tmp/evil.tar | nc server.ip.addr 31337
  server% tar xvf evil.tar
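A small variation on the same idea avoids writing the archive to the victim's own disk first, which reduces the chance of overwriting recently deleted data; the port number and paths are as arbitrary as before:

  server% nc -l -p 31337 > var.tar
  victim% tar cf - /var | nc server.ip.addr 31337
  server% tar xvf var.tar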

Examination of the /var/log/auth.log revealed password guessing attempts against the openssh daemon. Since the victim machine was attempting to scan other networks for SSH servers, it was reasonable to suppose it had been compromised by a guessed password, and in turn was probing for weak passwords. This topic had also been covered on the invaluable ISC diary not too long before the incident occurred.

The sshd log entries looked like this:

  Jun 26 22:31:04 victim sshd[15384]: Failed password for root from ::ffff:w.x.y.z port 30937 ssh2
  Jun 26 22:31:06 victim sshd[15386]: Illegal user network from ::ffff:w.x.y.z
  Jun 26 22:31:06 victim sshd[15386]: error: Could not get shadow information for NOUSER
  Jun 26 22:31:06 victim sshd[15386]: Failed password for illegal user network from ::ffff:w.x.y.z port 30951 ssh2
  Jun 26 22:31:08 victim sshd[15388]: Illegal user word from ::ffff:w.x.y.z
  Jun 26 22:31:08 victim sshd[15388]: error: Could not get shadow information for NOUSER
  Jun 26 22:31:08 victim sshd[15388]: Failed password for illegal user word from ::ffff:w.x.y.z port 30963 ssh2
  Jun 26 22:31:10 victim sshd[15390]: Failed password for root from ::ffff:w.x.y.z port 30980 ssh2
  Jun 26 22:31:11 victim sshd[15392]: Failed password for root from ::ffff:w.x.y.z port 30992 ssh2
  Jun 26 22:31:13 victim sshd[15394]: Failed password for root from ::ffff:w.x.y.z port 31007 ssh2
  Jun 26 22:31:15 victim sshd[15396]: Failed password for root from ::ffff:w.x.y.z port 31021 ssh2
  Jun 26 22:31:17 victim sshd[15398]: Failed password for root from ::ffff:w.x.y.z port 31031 ssh2
  Jun 26 22:31:19 victim sshd[15400]: Failed password for root from ::ffff:w.x.y.z port 31049 ssh2
  Jun 26 22:31:20 victim sshd[15403]: Failed password for root from ::ffff:w.x.y.z port 31062 ssh2
  Jun 26 22:31:22 victim sshd[15405]: Failed password for root from ::ffff:w.x.y.z port 31073 ssh2

We can see that the attacker — IP address w.x.y.z — was looking for user accounts with weak passwords, and making obvious guesses at passwords to the root account. In the end, some time over the weekend, they found the account called upload with a password of 'upload' and got a shell on the server. The last command revealed the origin of the successful login:

  upload   pts/0   Mon Jun 27 07:39 - 07:49  (00:09)   evil.example.com.ro
  upload   pts/1   Sun Jun 26 23:10 - 23:10  (00:00)   evil.example.com.ro
  upload   pts/1   Sun Jun 26 23:01 - 23:09  (00:08)   w.x.y.z

Oops.
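For completeness, the successful logins can also be pulled straight out of the authentication log; a quick check, assuming the Debian default log location:

  # Which SSH logins actually succeeded, and from where?
  grep 'Accepted password' /var/log/auth.log
  last -a upload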

The attacker had not really attempted to cover his or her tracks at all, which made the analysis surprisingly quick and easy. For example, the .bash_history file was still intact and had a complete list of commands that had been executed since the attacker logged on. It could have been a cleverly planted fake, or had portions deleted, but the general skill level displayed suggested not.

Some of these commands were aimed at downloading various archives from the Internet, unpacking them and executing programs within them. They were semi-hidden by being stored in a directory called /tmp/. (that's a dot followed by a space as the subdirectory name). One of these files, local.tar.gz, was a good collection of privilege escalation exploits — that is, programs that, when run as a normal user, would give root access if the machine was vulnerable to that particular exploit. For example, one program was called do_brk and attempted to gain root using the Linux kernel do_brk() vulnerability. Fortunately the machine was patched up to date, and the attacker had to make do with their unprivileged user account. After a while, they obviously gave up and turned to other things.
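Directories named with a leading dot (and, here, a trailing space) are easy to miss with a casual ls; something along these lines makes them stand out when run against the mounted evidence copy (the mount point is illustrative):

  # List hidden entries directly under /tmp on the evidence copy; printing the
  # names inside quotes makes a trailing space obvious.
  find /mnt/evidence/tmp -maxdepth 1 -name '.*' -printf '"%f"\n'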

One of the kits that the attacker downloaded from the Net was obviously designed for sending phishing email to eBay users. Fortunately, the machine had not been set up to exchange email with our central mail server — we had taken the precaution of banning port 25/tcp outbound at the firewall for everything that wasn't a corporate mail server. Because the mail couldn't be delivered, this is the bounce that ended up in the user's mailbox:

  From MAILER-DAEMON Mon Jun 27 07:54:24 2005
  Return-path: <>
  Envelope-to: upload@victim
  Received: from mail by victim.fqdn.example.com with local (Exim 3.36 #1 (Debian))
          id 1DmdCy-0005h9-00
          for <upload@victim>; Mon, 27 Jun 2005 07:54:24 +1200
  X-Failed-Recipients: entdbiz@yahoo.com
  From: Mail Delivery System <Mailer-Daemon@victim>
  To: upload@victim
  Subject: Mail delivery failed: returning message to sender
  Message-Id: <E1DmdCy-0005h9-00@victim.fqdn.example.com>
  Date: Mon, 27 Jun 2005 07:54:24 +1200

  This message was created automatically by mail delivery software (Exim).

  A message that you sent could not be delivered to one or more of its
  recipients. This is a permanent error. The following address(es) failed:

    entdbiz@yahoo.com
      unrouteable mail domain "yahoo.com"

  ------ This is a copy of the message, including all the headers. ------

  Return-path: <upload@victim>
  Received: from upload by victim.fqdn.example.com with local (Exim 3.36 #1 (Debian))
          id 1DmdCy-0005h5-00
          for <entdbiz@yahoo.com>; Mon, 27 Jun 2005 07:54:24 +1200
  From: ***Urgent Safeharbor Department Notice*** <service@eBay.com>
  To: entdbiz@yahoo.com
  Subject: eBay Fraud Mediation Request
  Content-Type: text/html
  Message-Id: <E1DmdCy-0005h5-00@victim.fqdn.example.com>
  Sender: Upload acct <upload@victim>
  Date: Mon, 27 Jun 2005 07:54:24 +1200
  Status: Final

This is the local mailer daemon complaining that it couldn't deliver the message quoted in the latter part of the text: an email purporting to be from service@eBay.com, asking people to enter their account details on a bogus website.
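The outbound SMTP block that contained the damage here can be expressed in many ways; a minimal sketch in iptables terms, assuming a Linux firewall and with MAILSERVER standing in for the corporate mail server's address:

  # Only the corporate mail server may speak SMTP to the outside world;
  # everything else is rejected at the firewall.
  MAILSERVER=192.0.2.25    # placeholder address
  iptables -A FORWARD -p tcp --dport 25 -s "$MAILSERVER" -j ACCEPT
  iptables -A FORWARD -p tcp --dport 25 -j REJECT --reject-with tcp-reset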

Another kit downloaded was clearly intended to scan other computers, in the same manner that the compromised computer had been scanned. One of the utilities within the kit builds a list of Internet hosts which have port 22/tcp open. Another component takes the list of hosts and runs it through a password dictionary, both against the root account with many different passwords — such as admin, 123, 123123, toor and password — and against user accounts, typically with blank passwords or with the account's username as the password. It was the use of this latter utility that was tripping the IDS alerts.

Recovery


To recover the machine, we made a backup of some of the data on it — but no executable content — and the system was re-built from CDs. There was no evidence that the attacker had managed to install a rootkit, but they may simply have been very good at hiding their tracks, and we didn't want to take any unnecessary risks. Passwords were audited on the newly installed machine, and the SSH daemon was moved to an alternate port as part of a defense-in-depth policy.

If your workplace has implemented ITIL, you might think that your first responsibility is to get the customer back up and running as soon as possible. However, you need to balance haste with getting enough information to contain and eradicate the threat, rather than just immediately wiping the machine and re-installing. Fortunately, in this case, it was only one machine that was affected, but there were several others on site which could have been compromised in the same manner.

Prevention

After you've sorted out your immediate problems, don't forget that a vital part of incident response is trying to prevent such things occurring in the future.

This compromise could have been prevented in a few ways. First, I prefer to run SSH daemons on an alternate port when they are accessible to the outside world, just in case someone does decide to set a very poor password. It also cuts down the amount of rubbish to wade through in your log files. If you have users besides yourself on a machine, you can install pam_cracklib to enforce sensible password choices, or use John the Ripper to regularly audit the password strength of user accounts.
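As an illustration of both suggestions (file locations and wordlist paths vary by distribution, so treat this as a sketch):

  # /etc/ssh/sshd_config: move the daemon off 22/tcp, then restart sshd.
  #   Port 2222
  #
  # Offline password audit with John the Ripper; run it only on systems you
  # are authorized to audit.
  unshadow /etc/passwd /etc/shadow > passwd.audit
  john --wordlist=/usr/share/john/password.lst --rules passwd.audit
  john --show passwd.audit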

I also called the other users who were administering their own SSH servers which were accessible from outside the firewall and suggested they take similar precautions.

The IDS had demonstrated that it worked as designed, but at this point you might wish to get it to page you if it ever sees outgoing SSH brute force attempts, so you can respond more quickly to any future incidents of this sort.
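One low-tech way to do that, assuming the classic Snort database schema (event and signature tables) and a local mail command, is a small cron job along these lines; the database name, table names and pager address are all assumptions to adjust for your own setup:

  # Page if any outbound SSH scan alerts were logged in the last five minutes.
  COUNT=$(psql -d snort -At -c "
      SELECT count(*)
      FROM event e JOIN signature s ON e.signature = s.sig_id
      WHERE s.sig_name LIKE '%Potential SSH Scan OUTBOUND%'
        AND e.timestamp > now() - interval '5 minutes';")
  if [ "$COUNT" -gt 0 ]; then
      echo "$COUNT outbound SSH scan alerts in the last 5 minutes" \
        | mail -s 'IDS: possible SSH brute force from inside' pager@example.com
  fi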

Conclusion

We had a false sense of security about the Linux machines on our site. None has ever been compromised due to a security vulnerability in the software, but several have been as a result of misconfiguration by their administrators. We were also too busy watching for Windows worms such as Slammer, Blaster and Welchia to look out for more general types of port-scanning. The IDS was configured to page me in the case of a large amount of port-scanning on ports important to Windows, but not for SSH at that stage.

People also have some misconceptions about SSH, as they do about VPNs. Yes, it is a secure way of transmitting data, but that just means the attacker has a secure tunnel into your internal network if you misconfigure it! Using encryption also makes your job of running the IDS harder, as it can't see the contents of the attacker's shell session.

Having a database of all your equipment and your network topology is invaluable when responding to incidents. Together with the IDS alerts and a regular p0f dump of which machines were actually active on the network each day, it gives you a superb view of what is actually happening during an incident. You may want to add other data sources, such as logging each machine's OS and patch level on a regular basis, or recording how many outbound packets your firewall is dropping for each machine. After you've dealt with an incident, try to think about what data could have helped you and whether you could collect it as a matter of routine.

This article originally appeared on SecurityFocus.com -- reproduction in whole or in part is not allowed without expressed written consent.
