In spite of mounting evidence to the contrary, many of the enterprise security leaders I speak with still seem to be living in a world where the greatest threat to the enterprise is the “hacker” trying to breach the firewall. When the MSSP industry was born in the late ’90s, this may have been the case. The industry rose in part out of the development of intrusion detection system (IDS) technologies. The earliest IDS sensors were signature-based and focused, as the name implies, on detecting intrusions to the network. Because many enterprises were ill-prepared to deal with the volume of alerts and the high rate of false positives, managed security service providers emerged to validate and evaluate the alerts these sensors generated. Over time, the IDS industry matured to incorporate technologies such as protocol anomaly detection, and the idea of enforcement, or intrusion prevention, was born. Again, many enterprises were reluctant to implement blocking technologies for fear of inadvertently creating a denial of service. By and large, however, IDS/IPS technology remains signature-based and focused on what’s on the outside trying to get in.
As the threat landscape has changed, these perimeter-focused, signature-based defenses have become increasingly less effective. Today, the vast majority of critical security incidents we detect for our customers come from observing patterns of post-compromise activity, such as attempted communication with a command-and-control server from a host inside the network. Regardless of whether that activity is blocked by a firewall or IPS, the communicating host is infected. Increasingly, we find that many threats are discovered only when pieces of evidence from multiple technologies are considered together. Our SOC analysts believe that context is key to effective incident detection. This context is derived from the visibility we have into the global threat landscape, as well as analysis of log data from a wide variety of security devices. We call this approach “edge-to-endpoint protection”. Each component of the security infrastructure tells part of the story:
· Network firewalls – Traffic logs show attempts to connect to malicious hosts, or patterns of scanning related to reconnaissance activity or malware propagation.
· Web proxies and gateways – Record connections to known-bad or suspicious URLs, or to fast-flux domains that fit a pattern associated with particular strains of malware.
· Network intrusion detection/prevention and Web Application Firewalls – Still extremely effective at detecting known attacks.
· Host security – Many attacks enter the enterprise network when users, the true perimeter, walk around the “perimeter defenses” by plugging in an infected host or inadvertently installing malware after clicking a seemingly innocuous link to an infected website.
· Secondary indicators – Raw application and OS logs, and authentication logs from network infrastructure, can increase confidence in an analyst’s diagnosis.
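To make the idea of cross-source context concrete, here is a minimal sketch of how evidence from several of the log sources above might be correlated per host. All of the names, weights, and the threshold are hypothetical, chosen purely for illustration; a production SOC correlation engine would be far richer than this.

```python
from collections import defaultdict

# Hypothetical evidence weights per log source (illustrative only).
SOURCE_WEIGHTS = {
    "firewall": 1,    # blocked outbound attempt to a known-bad IP
    "web_proxy": 2,   # request to a known-malicious or fast-flux domain
    "ids": 2,         # signature match on network traffic
    "host_av": 3,     # malware detection on the endpoint itself
    "auth_log": 1,    # secondary indicator: anomalous authentication
}

ALERT_THRESHOLD = 4  # escalate when combined evidence crosses this score


def correlate(events):
    """Group events by internal host and score the combined evidence.

    Each event is a dict: {"host": ..., "source": ..., "detail": ...}.
    Returns hosts whose aggregate score across *distinct* sources meets
    the threshold, so a single noisy sensor cannot trigger escalation
    on its own unless its weight alone crosses the bar.
    """
    by_host = defaultdict(dict)  # host -> {source: weight}
    for ev in events:
        weight = SOURCE_WEIGHTS.get(ev["source"], 0)
        prev = by_host[ev["host"]].get(ev["source"], 0)
        by_host[ev["host"]][ev["source"]] = max(prev, weight)
    return {
        host: sum(weights.values())
        for host, weights in by_host.items()
        if sum(weights.values()) >= ALERT_THRESHOLD
    }


events = [
    {"host": "10.0.0.5", "source": "firewall", "detail": "blocked outbound to C2 IP"},
    {"host": "10.0.0.5", "source": "web_proxy", "detail": "GET to fast-flux domain"},
    {"host": "10.0.0.5", "source": "auth_log", "detail": "off-hours admin login"},
    {"host": "10.0.0.9", "source": "firewall", "detail": "single blocked probe"},
]

print(correlate(events))  # only 10.0.0.5 crosses the threshold
```

Note how the first host escalates even though its firewall event was a *block*: as discussed above, a blocked command-and-control attempt still means the host is infected, and the corroborating proxy and authentication evidence is what raises the combined score past the threshold.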
Effective incident detection is only half the story. Determining an incident’s impact helps prioritize remediation efforts and moves organizations from reactive response toward proactive protection. More on that in a future post.