
Proactively Managing Security Risk

Created: 07 Nov 2007 • Updated: 02 Nov 2010

by Naresh Verma, Yih Huang, and Arun Sood

The information technology revolution has changed the way business is transacted, governments operate, and national defense is conducted. Protecting these systems is essential, yet despite continuous protection efforts, reported security incidents have grown exponentially. Threats come from hackers, spies, corporate raiders, terrorists, professional criminals, and vandals -- all of whom have a vested interest and well-defined objectives in attacking the technology for financial or political gain, damaging the enterprise infrastructure in the process.

The current approach to security is based on perimeter defense and relies on firewalls, intrusion detection systems, and intrusion prevention systems. These approaches depend on a priori information. However, the increasing speed at which new exploits and attacks are being devised mandates a new layer of security defense for enterprise IT infrastructures -- a layer that provides consistent protection rather than perpetually lagging behind the morphing tricks of hackers. We propose such a new defense layer and a model that proactively manages server security risks and that co-exists with and complements the traditional security solutions.

Proactive security and exposure time as a metric

In this paper, we present a new approach to security risk management. The overall goal is to enhance the security of the national and corporate information infrastructure. For high levels of protection, the typical approach is to utilize a layered approach, often called "defense-in-depth." We propose the addition of a proactive security layer to the current security approaches.

To understand the necessity of this addition, consider one of the most popular security defenses -- intrusion detection and prevention based on attack signatures. While this defense is effective after an attack has been discovered and analyzed, it cannot be anticipatory and thus leaves the system vulnerable for a period of time. Such a defense reacts to the inventions of hackers. In contrast, a proactive security risk management system focuses on analyzing corporate resources and the risks associated with them, and develops plans to protect them. The resulting security coverage leaves no time gaps because it does not depend on knowledge of specific attacks. In the end, foolproof security is impossible and cannot be guaranteed, even with the best firewall and intrusion prevention or detection systems; this raises the question: how much loss can an enterprise tolerate? In a military context, this leads to designing for survivability [1], and computer vendors treat this as a self-preservation and business continuity issue [2]. Our faculty at George Mason University has developed the Self Cleansing Intrusion Tolerance (SCIT) architecture [3], which reduces losses by controlling the time a server is exposed to the Internet.

In our proactive security risk management, exposure time is the primary risk metric. We define exposure time as the time interval during which a server is exposed to the Internet. This metric has the advantage of being easily measured, repeatable, easy to understand, and easy to relate to the potential damage resulting from an intrusion. We emphasize that this definition is not based on the detection of an intrusion, but exclusively on elapsed time. Servers with low exposure time clearly provide fewer opportunities for intruders to do damage. The proactive security risk management methodology described in this paper (Section 4) is driven by the need to assign an exposure time requirement to each risk associated with the servers in the system.

Current approach to enterprise security

Enterprise assets reside on servers: servers that provide access to the network (routers, firewalls, and intrusion prevention systems), servers to detect intrusion (intrusion detection systems), servers to provide access to company information (role-based access control and fine-grained authorization, file servers, email servers, etc.), servers to store critical data (database servers), and so on. Managing security is challenging because of significant uncertainty about attacks, a limited ability to predict losses, and reliance on traditional reactive approaches. Table 1 shows some of the security challenges that enterprises face and the approaches a few vendors have taken to solve them.

Table 1: Enterprise security challenges

According to Symantec, the number of security holes in servers continues to rise. Many drivers determine and specify the level of enterprise security. Some are technical, such as the desire for additional perimeter defense, achieving sustained availability and performance by avoiding denial-of-service attacks, preventing intrusions to protect the corporate crown jewels, stopping the propagation of a worm within the corporate network, and satisfying corporate privacy policy. There are also corporate governance drivers, such as compliance with regulations and a fiduciary responsibility to the national financial system. Several approaches are currently available to achieve a high level of enterprise security.

Traditionally, approaches to security incident management have been reactive. Information technology (IT) professionals feel tremendous pressure to complete their tasks quickly with as little inconvenience to users as possible. Over the years, a security risk management assessment approach has emerged [4] [5]. The driver behind this approach is the estimation of the expected loss in the value of a specific asset when a specific threat is realized. The current approach is summarized in Figure 1 below, and is henceforth referred to as the traditional security risk management approach. For the proactive security risk management approach, we will use the traditional approach as our starting point.

Figure 1: Traditional security risk management approach

Once the specific threat has been identified, the computation of the Annual Loss Expectancy includes the following steps ([4, 5]):

  • Asset Value (AV) = hardware + software + data
  • Exposure Factor (EF) = percentage of the asset value lost if a threat is successfully realized
  • Single Loss Expectancy (SLE) = AV × EF
  • Annual Rate of Occurrence (ARO) = annual frequency of occurrence of a specific threat
  • Annual Loss Expectancy (ALE) = SLE × ARO = AV × EF × ARO

We stress that in this approach, Exposure Factor (EF) plays a key role. The assumption is that in the case of an intrusion, the asset is degraded by Single Loss Expectancy (SLE). In the next section we argue that by controlling the exposure time, we can reduce the effective EF and thus the expected loss.
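The steps above can be sketched in a few lines of code. This is a minimal illustration of the traditional ALE computation; the function name and the dollar figures are hypothetical, not from the paper.

```python
def annual_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """Traditional risk model: ALE = AV * EF * ARO."""
    sle = asset_value * exposure_factor   # Single Loss Expectancy (AV * EF)
    return sle * annual_rate              # Annual Loss Expectancy (SLE * ARO)

# Hypothetical example: a $200,000 server asset, 25% of its value lost
# per successful incident, two incidents expected per year.
ale = annual_loss_expectancy(200_000, 0.25, 2.0)
print(ale)  # 100000.0
```

Note that EF enters the product directly, which is why any mechanism that reduces the effective EF reduces the expected loss proportionally.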

In a modern multi-tier architecture there is a potential for exposing additional assets to risk. For example, to model the secondary impact of a successful intrusion, Timm [6] introduces the Cascade Threat Multiplier (CTM). He argues that after successfully intruding into the system, the attacker can access and damage other resources on the same network. Since exposure time reductions will reduce the time an intruder has to do damage, the intrusion tolerance approach is likely to provide an additional advantage here.
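The cascade effect can be folded into the loss estimate as a simple multiplier on the expected loss. This sketch assumes CTM acts as a scalar factor greater than 1; the exact form Timm uses may differ, and all figures are hypothetical.

```python
def cascaded_loss(asset_value, exposure_factor, annual_rate, ctm=1.0):
    """Expected annual loss with a Cascade Threat Multiplier (CTM).
    A CTM greater than 1 inflates the loss to account for damage to
    other resources reachable from the compromised host."""
    return asset_value * exposure_factor * annual_rate * ctm

base = cascaded_loss(200_000, 0.25, 2.0)               # no cascade
with_cascade = cascaded_loss(200_000, 0.25, 2.0, 1.5)  # 50% secondary impact
print(base, with_cascade)  # 100000.0 150000.0
```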

Proactive risk management approach

Enterprises are aware that risk cannot be completely eliminated and must be tolerated. The intrusion tolerance paradigm assumes that an organization remains vulnerable to a certain extent, that attacks will happen, and that some will be successful. The main objective of a proactive approach is to limit the damage an intruder can do and to ensure that the system remains secure and operational. A proactive approach allows organizations to manage the security of their infrastructures and the business value that those infrastructures deliver.

In Table 2 we summarize the key differences between the proactive approach driven by the intrusion tolerance paradigm and the reactive approach driven by the prevention and detection paradigm.

Comparison of Risk Management Approaches

  Issue                          | Firewall, IDS, IPS                                      | Intrusion tolerance
  Risk management                | Reactive                                                | Proactive
  A priori information required  | Attack models; software vulnerabilities; reaction rules | Exposure time selection; length of longest transaction
  Protection approach            | Prevent all intrusions; impossible to achieve           | Limit losses
  System administrator workload  | High: manage reaction rules; manage false alarms        | Lower: no false alarms generated
  Design metric                  | Unspecified                                             | Exposure time: deterministic
  Packet/data stream monitoring  | Required                                                | Not required
  Higher traffic volume requires | More computation                                        | Computation volume unchanged
  Applying patches               | Must be applied immediately                             | Can be planned

Table 2: Comparative analysis of proactive and reactive risk management

The exposure time (and thus the associated risk) is different for each server and can be shaped by factors like:

  • Longest transaction time,
  • Usage behavior patterns (user behavior),
  • The amount of time it takes for the server to boot and restore to a known state,
  • Total number of currently active transactions on the system, and
  • Expected traffic on the servers.

The exposure time can also be informed by the behavior of interconnected enterprise servers. The value can be adjusted dynamically, or assigned ranges that vary depending on server conditions such as performance, number of running processes, CPU usage, power usage, physical memory, kernel memory, and commit charge.
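A minimal sketch of how an exposure time might be selected from the factors above: the rotation interval must be long enough to let the longest transaction complete and a clean replacement come online, while staying under a policy ceiling. The function name, the rule, and the timings are illustrative assumptions, not the paper's algorithm.

```python
def select_exposure_time(longest_transaction_s, restore_time_s, policy_max_s):
    """Pick the smallest exposure time (seconds) that still lets work complete.

    longest_transaction_s : longest transaction the server must finish
    restore_time_s        : time to boot and restore the server to a known state
    policy_max_s          : ceiling imposed by the enterprise risk policy
    """
    floor = max(longest_transaction_s, restore_time_s)
    if floor > policy_max_s:
        raise ValueError("workload cannot meet the exposure-time policy")
    return floor

# Hypothetical server: 120 s longest transaction, 90 s restore, 300 s ceiling.
print(select_exposure_time(120, 90, 300))  # 120
```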

The proactive model incorporates the impact of exposure time by augmenting the traditional methodology presented in the last section. A typical intrusion [6] goes through three phases: network reconnaissance, application reconnaissance, and exploit attempt. Thus, by limiting the exposure time of the server, the time available for exploration and exploitation is reduced. Reduced exposure time leads to a reduced exposure factor, which in turn results in a reduced expected loss. The exposure factor reduction is modulated by an S-curve, and because of the shaping effect of this curve the output of this process is called the risk-shaped exposure factor (EFShaped).
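The shaping step can be sketched as a function of normalized exposure time. The paper only specifies that the reduction follows an S-curve; the logistic form, the steepness, and the midpoint below are illustrative assumptions.

```python
import math

def shaped_ef(ef_max, et, et_max, steepness=10.0, midpoint=0.5):
    """Risk-shaped exposure factor: EF_max scaled by an S-curve in ET/ET_max.
    The logistic form is one possible S-curve; parameters are illustrative."""
    x = et / et_max  # normalized exposure time in [0, 1]
    return ef_max / (1.0 + math.exp(-steepness * (x - midpoint)))

# Hypothetical server with EF_max = 0.25 and ET_max = 3600 s:
# a 300 s exposure time yields a much smaller shaped EF than full exposure.
low, mid, high = (shaped_ef(0.25, t, 3600) for t in (300, 1800, 3600))
```

The key property is monotonicity: a lower exposure time always yields a lower EFShaped, and hence a lower expected loss.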

Figure 2: Risk shaping

Figure 2 also captures the risk shaping in a matrix format. In this matrix the ET and EF values have been normalized by ETMax and EFMax respectively. Thus the entry in the (1,1) location is 1 and all other values are less than 1. The bottom right quadrant of the matrix shows the lower values of EFShaped. Typically, the shaded rows are the optimal set of ET values -- higher values of ET yield only limited advantage, and lower ET values have a higher implementation cost.

In summary, the EF from the traditional approach is treated as EFMax and is modified based on the exposure time. Reduced exposure time leads to a lower EFShaped.
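The normalized matrix can be built programmatically. This sketch assumes a logistic S-curve and a 4×4 grid of normalized ET and EF values; both choices are illustrative, and the matrix is rescaled so the entry at full exposure and maximum EF equals 1, matching the normalization described above.

```python
import math

def s_curve(x, steepness=10.0, midpoint=0.5):
    """Illustrative S-curve on normalized exposure time."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

et_values = [0.25, 0.5, 0.75, 1.0]  # ET / ET_max
ef_values = [0.25, 0.5, 0.75, 1.0]  # EF / EF_max

# matrix[i][j] = shaped EF at (et_values[i], ef_values[j]), rescaled so the
# (ET/ETMax = 1, EF/EFMax = 1) entry is exactly 1.
scale = s_curve(1.0)
matrix = [[round(ef * s_curve(et) / scale, 3) for ef in ef_values]
          for et in et_values]
```

Rows with small normalized ET carry uniformly small EFShaped values, which is the quantitative basis for preferring short exposure times.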

Multi-tier example

To reinforce the idea of exposure time, we consider the example of a company that hosts and supports Web sites (both static and dynamic) for its clients, with data centers protected by extensive physical and electronic security measures to help prevent intrusions and maintain Web site integrity. We argue that if the exposure time of a server is reduced, the percentage loss due to a realized threat drops drastically.

According to the NIST guidelines (NIST Special Publication 800-44), such a company should minimize its exposure to vulnerabilities as part of planning and managing its Web servers. A secure architecture for public Web servers, following the NIST guidelines, is shown in Figure 3.

Figure 3: Secured enterprise architecture

Figure 3 illustrates a secured enterprise architecture with three primary domains: un-trusted, corporate trusted, and private, representing high-, medium-, and low-risk zones respectively. We believe that if an asset (say, a Web server) in the un-trusted domain is attacked and the attack is not mitigated promptly, the threat will cascade to other domains and result in cumulative damage.

Table 3: Risk-shaped intrusion tolerance over traditional risk management approach

In Table 3 and Figure 4, we analyze the above scenario and compare the traditional and proactive intrusion tolerance approaches. Table 3 shows that the traditional approach results in higher losses than our proactive intrusion tolerance approach, owing to the fine tuning of exposure time. We also conclude that the cumulative effect of a propagating threat does indeed result in higher losses, and that exposure time again plays an important role in reducing cumulative losses.

Figure 4: Risk-shaped intrusion tolerance over traditional risk management approach

Figure 4 also provides a roadmap for shaping enterprise security objectives. These objectives play a critical role in shaping risks and threat modeling. The balance between greater security and higher availability is achieved by selecting an exposure time that meets enterprise objectives. A lower exposure time for a particular asset yields higher security and a substantial reduction in losses due to a potential threat, though at the cost of lower throughput. Higher exposure times within the proactive intrusion tolerance approach provide less security, but still result in reduced losses and higher throughput. Thus, proactive intrusion tolerance provides a risk-shaping tool at every level of the enterprise security architecture that adapts to enterprise objectives, achieving higher security levels and reduced losses for both single and cumulative effects. In Table 4, we study the potential savings achieved by using the intrusion tolerance approach for data centers with 1 to 50 servers. For example, a data center with 50 servers can achieve savings of about $1 million (50%) by adopting the intrusion tolerance approach.
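The aggregate savings figure can be reproduced with simple arithmetic. The per-server ALE below is a hypothetical value chosen to match the paper's roughly $1 million, 50% savings claim for 50 servers; the actual figures behind Table 4 are not given here.

```python
def data_center_savings(n_servers, ale_per_server, ef_reduction):
    """Annual savings from reducing the effective exposure factor.

    n_servers      : number of servers in the data center
    ale_per_server : annual loss expectancy per server (traditional approach)
    ef_reduction   : fraction by which exposure-time shaping reduces the
                     effective EF, e.g. 0.5 for a 50% reduction
    """
    traditional = n_servers * ale_per_server
    proactive = traditional * (1.0 - ef_reduction)
    return traditional - proactive

# Hypothetical: 50 servers at $40,000 ALE each, 50% EF reduction.
print(data_center_savings(50, 40_000, 0.5))  # 1000000.0
```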

Table 4: Reduction in Losses due to Intrusion Tolerance

The proactive approach has additional benefits. To illustrate, consider the data center patch management process. Reactive systems base their decisions on content analysis of incoming and outgoing packets, which requires updating signatures and installing patches as vendors release them. The arrival of patches disrupts the planning and scheduling of work in the data center. Proactive systems, on the other hand, do not depend on packet content, and the proactive system parameters, especially the exposure time, are set at installation time. Because the proactive approach gives senior management better assessment and control of risk, it also gives the data center manager more control over the scheduling of patches.


In this paper we have discussed current reactive approaches and proposed a new proactive risk management model based on an exposure time specification. This model adds a new layer of security, creating a more robust risk management approach.

In our view, reactive and proactive systems must co-exist. Parameter selection based on a holistic view of the security system can reduce the capital cost as well as the operations cost. We anticipate that the proactive approach will provide an upper bound on the losses and thus the data center manager has more freedom in scheduling unexpected events like patch installation. This freedom will yield lower operation costs.


  1. Foundations of Intrusion Tolerant Systems, Edited by J. Lala, IEEE Computer Society Press, 2003.
  2. G. Brunette, Toward Systemically Secure IT Architectures, 2006.
  3. SCIT: Self Cleansing Intrusion Tolerance, last updated 2007.
  4. Microsoft Solutions for Security and Compliance, Securing Windows 2000 Server, 2006.
  5. D. Kinn and K. Timm, Justifying the Expense of IDS, Part One: An Overview of ROIs for IDS, 2002.
  6. K. Timm, Justifying the Expense of IDS, Part Two: Calculating ROI for IDS, 2002.

This article originally appeared on -- reproduction in whole or in part is not allowed without expressed written consent.