Assessing Internet Security Risk, Part Three: an Internet Assessment Methodology Continued
by Charl van der Walt
last updated July 30, 2002
This article is the third in a series that is designed to help readers to assess the risk that their Internet-connected systems are exposed to. In the first installment, we established the reasons for doing a technical risk assessment. In the second part, we started to discuss the methodology that we follow in performing this kind of assessment. In this installment, we will continue to discuss methodology, particularly visibility and vulnerability scanning.
At the start of this phase we're armed with a list of IP addresses that serve as targets to be attacked or, in our case, analyzed. Our approach to analyzing (or attacking) a machine will depend on what network services that machine is offering. For example, a machine could be a mail server, a Web server, a DNS server or all three. In each case, our approach for the next phases will vary somewhat. Moreover, our strategy will vary according to the operating system of the machine in question and the precise version of the services it offers. During this phase, our precise objectives can be described as follows:
- Identify the services that are active on each machine.
- Identify the type and version of the applications listening behind those services.
- Identify the operating system of each machine.
This information will allow us to plan and execute the remainder of the assessment.
It should be easy to see how we determine the active services on a machine. Once again we haul out our trusty port scanner. This time, we configure the scanner to test for all the ports and aim it at the machines, one by one. The port scanner will return a list of all the ports that are open on the machine. As services and applications generally listen on well-known port numbers, we are able to derive a list of the active services on the host and thus an indication of the host's function.
Let's examine briefly how a port scanner actually works. To do this, we first have to revisit the TCP connection. A TCP connection is established by means of a three-step handshake, which can be depicted as follows:
- The client sends a SYN packet to the port it wants to reach on the server.
- If the port is open, the server replies with a SYN-ACK packet.
- The client completes the connection by responding with a final ACK.
This is exactly the process the port scanner needs to go through to establish whether a given port is open or not. The handshake is executed for each of the ports specified in the range. Whenever the handshake is successfully completed, the port is recorded as open. Notice that no data needs to be transferred and the client doesn't have to interact with the server at the application level. Once again, Nmap is probably the tool of choice.
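The connect-style scan described above can be sketched in a few lines of Python. This is an illustrative toy, not a replacement for Nmap: it simply asks the operating system to attempt the full TCP handshake against each port in a range and records the ones that answer. The host and port range you pass are placeholders; only scan machines you are authorized to test.

```python
import socket

def connect_scan(host, ports, timeout=1.0):
    """Return the subset of ports that complete the TCP handshake."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        # connect_ex() returns 0 only when the three-step handshake
        # completes, i.e. the port is open; no application data is sent.
        if s.connect_ex((host, port)) == 0:
            open_ports.append(port)
        s.close()
    return open_ports
```

A call like `connect_scan("192.168.1.10", range(1, 1025))` would return the open ports below 1025 on that (hypothetical) host.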
You're probably tiring of bullet lists by now, but once again there are a few facts that need to be noted.
Apart from the TCP handshake, most networked applications have their own handshaking protocol that commences only after the TCP connection has already been established. Somewhere within the handshake there's usually a way to request the service's name and version. Indeed, some services display their version details right at the start, before the handshake commences.
Thus, we use a simple utility called a banner grabber. The grabber is programmed to know how to extract the version information from a number of common network services. We aim it at a list of addresses and it will pull the banner information for any services it recognizes. In this way we can also better understand whether the application we expect really is listening behind a given port. There are a number of programs that will do banner grabbing for you. A number of port scanners do this (e.g. SuperScan), as do several vulnerability scanners (like Nessus). You can also do it using a simple Telnet session, if you know the protocols well enough.
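To make the idea concrete, here is a minimal banner grabber in Python. It is a sketch under simple assumptions: it handles services that announce themselves as soon as the connection opens (SMTP, FTP, SSH, Telnet), and it lets you pass an optional probe for services, like HTTP, that wait for the client to speak first. Any hosts and probes you feed it are your own; the examples in the comments are illustrative only.

```python
import socket

def grab_banner(host, port, timeout=3.0, probe=b""):
    """Connect to host:port, optionally send a probe, return the first reply."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        if probe:  # e.g. b"HEAD / HTTP/1.0\r\n\r\n" for a web server
            s.sendall(probe)
        try:
            return s.recv(1024).decode(errors="replace")
        except socket.timeout:
            return ""  # the service was waiting silently for a client request
```

Pointed at a mail server, `grab_banner(host, 25)` would typically return a "220" greeting that names the MTA and often its version.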
So, now we know what ports are open on a given host. We also know the type of application and the version that is listening behind each of these ports. In many cases this is all the information we need to identify the operating system. For example, the Telnet port (23) is open on a given IP. We use Telnet to connect to it:
# telnet 196.3x.2x.7x
Trying 196.3x.2x.7x...
Connected to xxx.xx.co.za.
Escape character is '^]'.

HP-UX u46b00 B.10.20 A 9000/831 (ttyp1)

login:
Pretty obvious, huh? There are other, only slightly less obvious, clues also. For example, the IIS Web server only runs on Microsoft machines, Sendmail is probably a Unix mail server, and ports 264 and 265 are a dead giveaway for a Check Point Firewall-1.
In addition to the simple tricks described above, we can also use a sophisticated technique called "OS fingerprinting" to identify the operating system. For an excellent paper on the issue, consult www.insecure.org. (This is the home of Fyodor, author of the acclaimed Nmap port scanner.) Basically, OS fingerprinting is a bit like identifying someone's nationality from their accent. Many operating systems communicate on the Internet with some unique characteristics that make them distinguishable from other operating systems. Programs like Nmap and Queso and many of the commercial security scanners have databases of OS signatures against which the "accent" of a particular operating system can be compared. Some of the characteristics that are examined include unique combinations of open ports (as in our Telnet example earlier), TCP initial sequence numbers, and responses to unexpected requests.
While considering the use of OS fingerprinting, the following caveat should be noted. This technology is guesswork at best: many OSs share the same signature and are therefore indistinguishable over the network. Moreover, servers are often housed behind firewalls and other security devices that may mask the real OS. A fair deal of common sense needs to be applied when interpreting fingerprinting results.
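To illustrate just one of the signals mentioned above, the combination of open ports, here is a toy lookup in Python. The signature table is invented for this example; real fingerprinting tools such as Nmap rely on far richer TCP/IP-level signatures, and, as the caveat above says, the result is a guess rather than proof.

```python
# Illustrative port-combination signatures only; not a real fingerprint database.
SIGNATURES = {
    frozenset({135, 139, 445}): "Windows (NetBIOS/SMB port combination)",
    frozenset({22, 111}): "Unix-like (SSH plus portmapper)",
    frozenset({264, 265}): "Check Point Firewall-1 management ports",
}

def guess_os(open_ports):
    """Return OS guesses whose signature ports are all open; a hint, not proof."""
    open_ports = set(open_ports)
    return [name for sig, name in SIGNATURES.items() if sig <= open_ports]
```

Feeding it the open-port list produced by a port scan yields zero or more candidate identities, which still need to be weighed against banners and common sense.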
All the technical inadequacies notwithstanding, at the end of this phase we're armed with as much information about each target machine as we can possibly extract. With this information in hand, we can now plan and execute the rest of our analysis.
Before we progress with the rest of the methodology, let's take a moment to revisit where we are in terms of the bigger picture. You'll remember the question that started the whole process: what are the threats that my Internet-connected systems face and what are the chances of those threats being realized?
We started our quest with a reconnaissance exercise, through which we attempted to identify all the Internet-facing infrastructure that is relevant to the target organization. We did this in a number of steps. First, we used various resources to build a list of all the DNS domains that might be relevant to the target. We then used a set of DNS tools to map those domains to single IP addresses, which we in turn expanded to complete subnets on the (slightly brash) assumption that the IP addresses of a single organization are most often grouped together in the same address space on the Internet.
We then explored the fact that the existence of a DNS name-IP mapping does not necessarily prove that that IP is actually alive and active on the Internet. In order to narrow our vague list of subnets to a more accurate list of single IP addresses that are actually "active" on the Internet we ran a set of vitality tests - designed to determine as accurately as possible whether a given IP address can somehow be reached on the Internet. The vitality tests included core router queries, ping scans and TCP host scans.
Having built a list of IP addresses that are alive and well on the Internet, we set out to fingerprint those addresses - essentially to identify the operating system, the active services and the versions of those services for each address in our list. This information is the key input for our next phase - vulnerability discovery.
Before we can discover vulnerabilities on our targets, I guess we need to understand what a vulnerability actually is. My Oxford dictionary defines a vulnerability as an "exposure to danger or attack." Why don't we simply think of a vulnerability as a weak point on a host or a system that may allow some form of security compromise. Let's explore a few of the different ways in which a system can be compromised.
Amazingly, a huge number of these kinds of vulnerabilities have already been identified, publicly raised and discussed in some forum. There are a number of sources for new vulnerability information. The first is the vendors themselves. When a software vendor becomes aware of a security vulnerability in one of their products, they create a patch or design a work-around, and then publish a notification informing their clients of the problem and how it can be fixed.
Many security product vendors also have researchers dedicated to discovering and documenting possible security vulnerabilities. The way this works is similar to the way virus scanner vendors discover and publish new virus signatures. Typically, the manufacturers of intrusion detection systems and vulnerability scanners (which we'll get to a bit later) will do this kind of vulnerability research in order to differentiate their products.
Finally, a multitude of security researchers occasionally discover vulnerabilities and notify the vendors or publish them in public forums. There is growing controversy regarding exactly how new vulnerabilities are discovered, how these discoveries are motivated, how they are published and who benefits the most in the long run. But that's a topic for a paper all on its own. Suffice it to realize that, one way or another, new vulnerabilities are discovered and one way or another find their way into the public eye.
Now, the good and the bad guys have equal access to information about how systems could be hacked. What we want in the vulnerability discovery phase is to identify any of these known vulnerabilities so that those problems can be rectified before they are discovered by anyone else and perhaps used to compromise our systems.
How can we determine if the systems we are analyzing are subject to any of these known vulnerabilities? Well, recall the information that we've derived so far: for every visible machine we have the operating system, a list of the active services, and the type and version of those services. That's all we need to query a database of known vulnerabilities. Where does one find such a database? Why, the Internet of course!
There are a number of sites that openly publish databases of known vulnerabilities. Two that I like very much are SecurityFocus and SecuriTeam. My company has a similar database at http://www.hackrack.com. In fact, the Internet itself is a massive repository of information on known vulnerabilities. Simply use your favorite search engine (mine is Google) and search for <the operating system name>, <the service name>, <the relevant version number> and the word "exploit" and you're bound to stumble on what you're looking for. Try it - you'll be amazed.
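The lookup described here can be sketched as a simple mapping from the (service, version) pairs gathered during fingerprinting to lists of known issues. The database entries below are illustrative placeholders; in practice the data would come from sources such as SecurityFocus, SecuriTeam or vendor advisories.

```python
# Placeholder vulnerability database keyed by (service, version).
VULN_DB = {
    ("wu-ftpd", "2.6.0"): ["remote format-string vulnerability (example entry)"],
    ("bind", "8.2.2"): ["NXT record remote overflow (example entry)"],
}

def known_issues(services):
    """Map each discovered (service, version) pair to its known issues."""
    report = {}
    for name, version in services:
        issues = VULN_DB.get((name.lower(), version))
        if issues:
            report[(name, version)] = issues
    return report
```

Given the fingerprinting output for a host, the function returns only the entries with a known problem, which is essentially what an automated vulnerability scanner does at much larger scale.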
A well performed discovery exercise and a good search may be all you need to perform a vulnerability discovery analysis, but it's a slow and difficult process that requires a high level of technical competence. On a large and complex network this approach is probably completely infeasible. Fortunately, help is at hand in the form of automated vulnerability scanners. Such scanning software can be found in three general forms:
Network-based scanners: These scanners have a knowledge base of tests that they then perform against the target systems over the network. Although these tests are run without any special privilege on the target system and are therefore less comprehensive, they do have a number of advantages: They do not have to be installed on each system being tested, and they more accurately reflect the possibilities open to an external attacker. In a sense, they identify the issues germane to system penetration from the outside, rather than internal configuration issues, and may better reflect the priorities of the security administrator.
There are a number of commercial products available in each of these categories. I'm loath to mention names, but amongst the leading vendors in this space one must include ISS, Axent (now Symantec), eEye, Next Generation Software, BindView and Foundstone. There are many other commercial products, but the list would never be complete without at least one freeware product - Nessus from Renaud Deraison. The open-source Nessus scanner can hold its ground against the best of the commercial scanners on just about every front, with the possible exception of support and reporting. Most freeware scanners concentrate on detecting only a specific type or category of vulnerabilities (like Web server vulnerabilities) and the list of names is almost endless.
All of these scanners have access to a database of known vulnerabilities and use a scanning engine to connect to the target machine, execute a set of tests and report a list of vulnerabilities for each server tested. Scanners, however, often don't quite live up to the promise. There are two main issues that plague all vulnerability scanners, namely:
- False positives: the scanner reports vulnerabilities that, on closer inspection, turn out not to exist.
- False negatives: the scanner fails to detect vulnerabilities that really are present on the system.
A new generation of scanning software uses somewhat more intelligent scanning techniques, and may help to reduce the dependence on knowledge of the attack being used. Retina is an example of a scanner claiming to belong to this group.
While intelligent scanning is an exciting development, it's my belief that the failings of automated scanners are symptomatic of fundamental flaws in the concept of vulnerability scanning. From this we should learn an important lesson: you can't unplug your brain when you plug in your scanner. Always remember that a scanner will never discover all possible vulnerabilities on a system and that a scanner will often report problems that do not exist. Moreover, a scanner will never fully understand the interdependencies between systems, the context computer systems exist in, or the role that humans play in the operation of computer systems. Although vulnerability scanners are a valuable tool during this phase of our assessment, every report must be carefully evaluated.
Next Up - Analyzing Web Application Security
Above and beyond the technical failings of security scanners, there's a large part of the potential vulnerability space that they hardly even begin to evaluate - custom Web applications. HTTP is possibly the most pervasive protocol on the Internet today and there's hardly an application that hasn't yet been written for the Web. Yet Web application security is a relatively new field and programmers often don't fully understand the security implications of the code that they write. Security problems with custom Web applications have become such a regular problem that our company has included a special phase in our assessment, just to look at this.
Charl van der Walt works for a South African company called SensePost that specializes in providing information security services internationally, including assessments of the kind discussed in this article. His background is in Computer Science, he is a qualified BS7799 Lead Auditor and he has been doing this kind of work for about five years now. He has a dog called Fish.
This article originally appeared on SecurityFocus.com -- reproduction in whole or in part is not allowed without expressed written consent.