
Assessing Internet Security Risk, Part Three: an Internet Assessment Methodology Continued
by Charl van der Walt
last updated July 30, 2002

This article is the third in a series designed to help readers assess the risk to which their Internet-connected systems are exposed. In the first installment, we established the reasons for doing a technical risk assessment. In the second, we began to discuss the methodology we follow in performing this kind of assessment. In this installment, we will continue the discussion of methodology, looking in particular at visibility and vulnerability scanning.

Visibility

At the start of this phase we're armed with a list of IP addresses that serve as targets to be attacked or, in our case, analyzed. Our approach to analyzing (or attacking) a machine will depend on what network services that machine is offering. For example, a machine could be a mail server, a Web server, a DNS server or all three. In each case, our approach for the next phases will vary somewhat. Moreover, our strategy will vary according to the operating system of the machine in question and the precise version of the services it offers. During this phase, our precise objectives can be described as follows:

  1. Determine what services are active on the target machine (i.e. what ports are open).
  2. Determine the type and version of the active services.
  3. Determine the operating system on the target machine.

This information will allow us to plan and execute the remainder of the assessment.

It should be easy to see how we determine the active services on a machine. Once again we haul out our trusty port scanner. This time, we configure the scanner to test the full range of ports and aim it at the machines, one by one. The port scanner returns a list of all the ports that are open on each machine. As services and applications generally use well-known port numbers, we can derive a list of the active services on the host and thus an indication of the host's function.

Let's examine briefly how a port scanner actually works. To do this, we first have to revisit the TCP connection. A TCP connection is established by means of a three-step handshake, which can be depicted as follows:

  1. Client -> Server: SYN. The client sends a connection request packet (called a SYN) to indicate that it wishes to make a connection to a given port.
  2. Server -> Client: SYN/ACK. If the server is able to handle the connection, it replies with a SYN packet of its own, accompanied by an acknowledgement packet (called an ACK).
  3. Client -> Server: ACK. On receiving the response from the server, the client responds with an acknowledgement, and the connection is established.

This is exactly the process the port scanner goes through to establish whether a given port is open or not. The handshake is executed for each of the ports specified in the range. Whenever the handshake is successfully completed, the port is recorded as open. Notice that no data needs to be transferred and the client doesn't have to interact with the server at the application level. Once again, Nmap is probably the tool of choice.
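
To make this idea concrete, here is a minimal sketch of a TCP connect scanner in Python. The target address is a placeholder, and this is an illustration of the handshake logic rather than a replacement for a real scanner like Nmap:

    import socket

    def tcp_connect_scan(host, ports, timeout=1.0):
        """Attempt the full TCP handshake against each port in turn."""
        open_ports = []
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            try:
                # connect() performs SYN -> SYN/ACK -> ACK; success
                # means the handshake completed, so the port is open.
                s.connect((host, port))
                open_ports.append(port)
            except OSError:
                pass  # closed, filtered or unreachable
            finally:
                s.close()
        return open_ports

    # "192.0.2.10" is a placeholder address for illustration only.
    print(tcp_connect_scan("192.0.2.10", [21, 22, 23, 25, 53, 80, 443]))

Note that no application data is exchanged; the connection is opened and immediately closed again, exactly as described above.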

You're probably tiring of bullet lists by now, but once again there are a few facts that need to be noted:

  1. The port scanner will only give you an indication of what ports are visible to you. There may be other ports that are actually open on the host, but are filtered by a router, a firewall or some other security device.
  2. Even on a limited number of hosts, a full port scan can take a very long time. Most services listen on well-known standard ports, so to save time the port scanner can be configured to scan only those commonly used ports. This makes the scan far more efficient, but at the risk that services on unusual ports might be missed.
  3. As the port scanner only requires a server's SYN/ACK response to mark a port as open, you can never be 100% certain that there is a service actually functional at that port. There are situations where a server may return a SYN/ACK even though you aren't able to actually communicate with the service in question. To fully determine whether there really is a service active that can be communicated with, one probably needs to exercise some human intuition. But there is one more automated technique that can help us improve the odds - banner grabbing. Banner grabbing involves querying the service on a given port to request its type and version, and it is the next test we perform in this phase.

Apart from the TCP handshake, most networked applications have their own handshaking protocol that commences only after the TCP connection has already been established. Somewhere within the handshake there's usually a way to request the service's name and version. Indeed, some services display their version details right at the start, before the handshake commences.

Thus, we use a simple utility called a banner grabber. The grabber is programmed to know how to extract the version information from a number of common network services. We aim it at a list of addresses and it pulls the banner information for any services it recognizes. In this way we can also better understand whether the application we expect really is listening behind a given port. There are a number of programs that will do banner grabbing for you. A number of port scanners do this (e.g. Super Scanner), as do several vulnerability scanners (like Nessus). You can also do it using a simple Telnet session, if you know the protocols well enough.
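
In its simplest form, a banner grabber just connects and reads whatever the service volunteers. Here is a minimal sketch in Python, again with a placeholder address; services like FTP and SMTP announce themselves immediately, while others stay silent until spoken to:

    import socket

    def grab_banner(host, port, timeout=2.0):
        """Read a service's opening banner, if it sends one."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return s.recv(1024).decode(errors="replace").strip()
        except OSError:
            return None  # no banner before the timeout, or port closed
        finally:
            s.close()

    # "192.0.2.10" is a placeholder address for illustration only.
    for port in (21, 22, 23, 25):
        banner = grab_banner("192.0.2.10", port)
        if banner:
            print(port, banner)

A real banner grabber also knows each protocol's handshake, so it can query services that don't volunteer a banner (HTTP, for example, only reveals its Server header in response to a request).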

So, now we know what ports are open on a given host. We also know the type and version of the application that is listening behind each of these ports. In many cases this is all the information we need to identify the operating system. For example, suppose the Telnet port (23) is open on a given IP. We use Telnet to connect to it:

 # telnet 196.3x.2x.7x
 Trying 196.3x.2x.7x...
 Connected to xxx.xx.co.za.
 Escape character is '^]'.

 HP-UX u46b00 B.10.20 A 9000/831 (ttyp1)

 login:

Pretty obvious, huh? There are other, only slightly less obvious, clues as well. For example, the IIS Web server only runs on Microsoft machines, Sendmail probably indicates a Unix mail server, and open ports 264 and 265 are a dead giveaway for a CheckPoint Firewall-1.
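
These inference rules are simple enough to encode. The toy sketch below contains only the three clues just mentioned; a real fingerprinting tool works from a far larger database:

    # Toy OS/product inference from ports and banners - illustration only.
    PORT_CLUES = {
        frozenset({264, 265}): "CheckPoint Firewall-1",
    }
    BANNER_CLUES = {
        "iis": "Microsoft Windows (IIS only runs on Windows)",
        "sendmail": "probably a Unix mail server",
    }

    def guesses(open_ports, banners):
        hits = []
        for ports, guess in PORT_CLUES.items():
            if ports <= set(open_ports):
                hits.append(guess)
        for banner in banners:
            for clue, guess in BANNER_CLUES.items():
                if clue in banner.lower():
                    hits.append(guess)
        return hits

    print(guesses([80, 264, 265], ["Microsoft-IIS/5.0"]))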

In addition to the simple tricks described above, we can also use a more sophisticated technique called "OS fingerprinting" to identify the operating system. For an excellent paper on the issue, consult www.insecure.org. (This is the home of Fyodor - author of the acclaimed Nmap port scanner.) Basically, OS fingerprinting is a bit like identifying someone's nationality from their accent. Many operating systems communicate on the Internet with some unique characteristics that make them distinguishable from other operating systems. Programs like Nmap and Queso and many of the commercial security scanners have databases of OS signatures against which the "accent" of a particular operating system can be compared. Some of the characteristics that are examined include unique combinations of open ports (as in our example above), TCP initial sequence numbers, and responses to unexpected requests.
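
As a rough illustration of the "accent" idea, the sketch below uses the Scapy packet library (which needs raw-socket privileges) to send a single SYN and inspect two of the simplest signature fields, the IP TTL and the TCP window size. The thresholds are common rules of thumb, not a real signature database, and the address is a placeholder:

    from scapy.all import IP, TCP, sr1  # requires root privileges

    def crude_fingerprint(host, port=80):
        """Guess the OS family from the TTL and window size of a
        SYN/ACK. Real tools like Nmap compare many more fields."""
        reply = sr1(IP(dst=host) / TCP(dport=port, flags="S"),
                    timeout=2, verbose=0)
        if reply is None or not reply.haslayer(TCP):
            return "no response"
        ttl, window = reply[IP].ttl, reply[TCP].window
        # Rule of thumb: Windows stacks start the TTL near 128,
        # most Unix-like stacks near 64. Heuristics only.
        family = "Windows-like" if ttl > 64 else "Unix-like"
        return f"{family} (ttl={ttl}, window={window})"

    # "192.0.2.10" is a placeholder address for illustration only.
    print(crude_fingerprint("192.0.2.10"))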

While considering the use of OS fingerprinting, the following caveat should be noted: this technology is guesswork at best, and many OSs share the same signature and are therefore indistinguishable over the network. Moreover, servers are often housed behind firewalls and other security devices that may mask the real OS. A fair deal of common sense needs to be applied when interpreting fingerprinting results.

All the technical inadequacies notwithstanding, at the end of this phase we're armed with as much information about each target machine as we can possibly extract. With this information in hand, we can now plan and execute the rest of our analysis.

Vulnerability Scanning

Before we progress with the rest of the methodology, let's take a moment to revisit where we are in terms of the bigger picture. You'll remember the question that started the whole process: what are the threats that my Internet-connected systems face and what are the chances of those threats being realized?

We started our quest with a reconnaissance exercise, through which we attempted to identify all the Internet-facing infrastructure that is relevant to the target organization. We did this in a number of steps. First, we used various resources to build a list of all the DNS domains that might be relevant to the target. We then used a set of DNS tools to map those domains to single IP addresses, which we in turn expanded to complete subnets on the (slightly brash) assumption that the IP addresses of a single organization are most often grouped together in the same address space.

We then explored the fact that the existence of a DNS name-to-IP mapping does not necessarily prove that the IP is actually alive and active on the Internet. In order to narrow our vague list of subnets to a more accurate list of single IP addresses that are actually "active" on the Internet, we ran a set of vitality tests - designed to determine as accurately as possible whether a given IP address can somehow be reached on the Internet. The vitality tests included core router queries, ping scans and TCP host scans.

Having built a list of IP addresses that are alive and well on the Internet, we set out to fingerprint those addresses - essentially to identify the operating system, the active services and the versions of those services for each address in our list. This information is the key input for our next phase - vulnerability discovery.

Before we can discover vulnerabilities on our targets, I guess we need to understand what a vulnerability actually is. My Oxford dictionary defines a vulnerability as an "exposure to danger or attack." Why don't we simply think of a vulnerability as a weak point on a host or a system that may allow some form of security compromise. Let's explore a few different ways of compromising a system:

  1. Available Channels: Often, everything an attacker needs to access your systems is right there in front of your face. The Internet, by its very nature, is verbose. Often a combination of verbose services provides enough information to compromise a system.
  2. Configuration Errors: Very often, technology is smarter than the people who use it. In the headlong rush to add differentiating features, software vendors often add functionality that users and administrators don't fully understand. An attacker who understands a system better than the administrator may use configuration errors to gain some form of unauthorized access.
  3. Programmatic Errors: Computer programmers are just people, and like other people they sometimes make mistakes. Modern programs and operating systems are incredibly complex and are developed at frightening speeds. Sometimes programmers fail to cater for a specific scenario. An attacker who can identify that scenario can use it to force the program into a state that the programmer never catered for. In this uncontrolled state the program may cease to function correctly, offer up information that it shouldn't, or even execute commands on the attacker's behalf.
  4. Procedural Errors: Occasionally errors exist, not in the technology itself, but in the way the technology is used. These weak points are the targets of social engineering attacks where an attacker manipulates people and the way they think to obtain information or some other form of unauthorized access to a system.
  5. Proximity Errors: The Internet is a highly connected system. This extreme level of interdependency sometimes results in strong systems being compromised through their trust relationships with systems that are weak.

Amazingly, a huge number of these kinds of vulnerabilities have already been identified, publicly raised and discussed in some forum. There are a number of sources for new vulnerability information. The first is the vendors themselves. When a software vendor becomes aware of a security vulnerability in one of their products, they create a patch or design a work-around, and then publish a notification informing their clients of the problem and how it can be fixed.

Many security product vendors also have researchers dedicated to discovering and documenting possible security vulnerabilities. The way this works is similar to the way virus scanner vendors discover and publish new virus signatures. Typically, the manufacturers of intrusion detection systems and vulnerability scanners (which we'll get to a bit later) will do this kind of vulnerability research in order to differentiate their products.

Finally, a multitude of security researchers occasionally discover vulnerabilities and notify the vendors or publish them in public forums. There is growing controversy regarding exactly how new vulnerabilities are discovered, how these discoveries are motivated, how they are published and who benefits the most in the long run. But that's a topic for a paper all on its own. Suffice it to say that, one way or another, new vulnerabilities are discovered and, one way or another, find their way into the public eye.

Now, the good and the bad guys have equal access to information about how systems could be hacked. What we want in the vulnerability discovery phase is to identify any of these known vulnerabilities so that those problems can be rectified before they are discovered by anyone else and perhaps used to compromise our systems.

How can we determine if the systems we are analyzing are subject to any of these known vulnerabilities? Well, recall the information that we've derived so far: for every visible machine we have the operating system, a list of the active services, and the type and version of those services. That's all we need to query a database of known vulnerabilities. Where does one find such a database? Why, the Internet of course!

There are a number of sites that openly publish databases of known vulnerabilities. Two that I like very much are SecurityFocus and SecuriTeam. My company has a similar database at http://www.hackrack.com. In fact, the Internet itself is a massive repository of information on known vulnerabilities. Simply use your favorite search engine (mine is Google) and search for <the operating system name>, <the service name>, <the relevant version number> and the word "exploit" and you're bound to stumble on what you're looking for. Try it - you'll be amazed.

A well-performed discovery exercise and a good search may be all you need to perform a vulnerability analysis, but it's a slow and difficult process that requires a high level of technical competence. On a large and complex network this approach is probably completely infeasible. Fortunately, help is at hand in the form of automated vulnerability scanners. Such scanning software can be found in three general forms:

  1. Host-based scanners: These are physically installed on the machine being assessed. They work very much like automated best-practice checklists, running a number of tests to determine whether the machine has been installed and configured according to what is considered to be best practice for that operating system in that environment. Such scanners typically run as a privileged user on the target system and are thus able to perform a comprehensive set of tests.
  2. Network-based scanners: These scanners have a knowledge base of tests that they perform against the target systems over the network. Although these tests are run without any special privilege on the target system and are therefore less comprehensive, they have a number of advantages: they do not have to be installed on each system being tested, and they more accurately reflect the possibilities open to an external attacker. In a sense, they identify the issues germane to system penetration from the outside, rather than internal configuration issues, and may better reflect the priorities of the security administrator.
  3. Application scanners: These can be seen as specialized scanners that assess the security configuration of specific applications and services. Such scans may be performed on a host or over the network. Services like Web, database and NT domains that are difficult to configure and prone to security flaws can often be assessed in this way.

There are a number of commercial products available in each of these categories. I'm loath to mention names, but amongst the leading vendors in this space one must include ISS, Axent (now Symantec), eEye, Next Generation Software, BindView and Foundstone. There are many other commercial products, but the list would never be complete without at least one freeware product - Nessus from Renaud Deraison. The open-source Nessus scanner can hold its ground against the best of the commercial scanners on just about every front, with the possible exception of support and reporting. Most freeware scanners concentrate on detecting only a specific type or category of vulnerabilities (like Web server vulnerabilities) and the list of names is almost endless.

All of these scanners have access to a database of known vulnerabilities and use a scanning engine to connect to the target machine, execute a set of tests and report a list of vulnerabilities for each server tested. Scanners, however, often don't quite live up to this promise. There are two main issues that plague all vulnerability scanners, namely:

  1. Such tools can usually only test for known security vulnerabilities. Their effectiveness depends to a great extent on the accuracy and timeliness of the source of vulnerability information. Sometimes this information cannot be captured in a format the scanners can understand or simply does not yet exist publicly.
  2. The test for a known vulnerability may cause a system to fail. Sometimes the only way to really determine remotely whether or not a system is vulnerable to some known weakness is to try to exploit that weakness and observe how the system behaves. This is an accurate form of testing but may have a detrimental effect on system operation. The alternative is to collect relevant information (like the service type and version) and make the decision on that basis. This non-intrusive approach, while safer, is much less accurate and often leads to a large number of "false positives" - vulnerability reports that are based on superficial information and prove to be wrong upon further investigation (a toy sketch of this trade-off follows this list).
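
As a toy illustration of the non-intrusive approach, the sketch below compares a grabbed banner against a table of known-vulnerable versions. The product name, version and advisory in the table are invented for illustration; the point is that the check never touches the weakness itself, which is exactly why it is safe but false-positive-prone:

    import re

    # Hypothetical vulnerability table - real scanners ship databases
    # containing thousands of such entries.
    KNOWN_VULNS = {
        ("ExampleFTPd", "2.1"): "EXAMPLE-2002-001: remote buffer overflow",
    }

    def check_banner(banner):
        """Infer vulnerability from the version string alone,
        without attempting any exploit."""
        m = re.match(r"(\S+)[ /](\d+\.\d+)", banner)
        if not m:
            return []
        vuln = KNOWN_VULNS.get((m.group(1), m.group(2)))
        return [vuln] if vuln else []

    print(check_banner("ExampleFTPd 2.1 ready."))

A patched server that still reports version 2.1 would be flagged as vulnerable here - a classic false positive.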

A new generation of scanning software uses somewhat more intelligent scanning techniques, which may help to reduce the dependence on prior knowledge of the attack being tested for. Retina is an example of a scanner claiming to belong to this group.

While intelligent scanning is an exciting development, it's my belief that the failings of automated scanners are symptomatic of fundamental flaws in the concept of vulnerability scanning. From this we should learn an important lesson: you can't unplug your brain when you plug in your scanner. Always remember that a scanner will never discover all possible vulnerabilities on a system, and that a scanner will often report problems that do not exist. Moreover, a scanner will never fully understand the interdependencies between systems, the context computer systems exist in, or the role that humans play in their operation. Although vulnerability scanners are a valuable tool during this phase of our assessment, every report must be carefully evaluated.

Next Up - Analyzing Web Application Security

Above and beyond the technical failings of security scanners, there's a large part of the potential vulnerability space that they hardly even begin to evaluate - custom Web applications. HTTP is possibly the most pervasive protocol on the Internet today and there's hardly an application that hasn't yet been written for the Web. Yet Web application security is a relatively new field, and programmers often don't fully understand the security implications of the code that they write. Security flaws in custom Web applications have become such a regular problem that our company has included a special phase in our assessment, just to look at them.

Charl van der Walt works for a South African company called SensePost that specializes in providing information security services internationally, including assessments of the kind discussed in this article. His background is in Computer Science, he is a qualified BS7799 Lead Auditor and he has been doing this kind of work for about five years. He has a dog called Fish.

To read the next installment of this series, click here


This article originally appeared on SecurityFocus.com -- reproduction in whole or in part is not allowed without expressed written consent.
