Big Data = Better Decision Making = Better Security
When the words "big data" and "security" are used together in a sentence, usually the word "problem" or "concern" is in there too. Security is often seen as an obstacle to using big data, since data of all types – including confidential data – are mixed together to generate analytics for better decision making. But while the concerns are real, big data can actually be harnessed to improve security.
The job of the defender is to protect against a practically unlimited range of attacks, yet a defender will always have a limited amount of resources with which to do so. The real job of the security practitioner, then, is to prioritize remediation efforts by risk, so that those limited resources are focused on addressing the greatest risks to the business.
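The prioritization problem above can be sketched in a few lines. This is a minimal illustration, not any particular product's method: the findings, field names, and numbers are hypothetical, and risk is reduced to the common simplification of likelihood times impact.

```python
# Hypothetical findings from an assessment; fields and values are illustrative.
findings = [
    {"issue": "Unpatched web server", "likelihood": 0.8, "impact": 9},
    {"issue": "Weak internal password policy", "likelihood": 0.5, "impact": 4},
    {"issue": "Exposed database port", "likelihood": 0.6, "impact": 10},
]

def risk_score(finding):
    # A common simplification: risk = likelihood x impact.
    return finding["likelihood"] * finding["impact"]

# Work the remediation queue from highest risk to lowest, so limited
# resources go to the biggest risks first.
remediation_queue = sorted(findings, key=risk_score, reverse=True)
for f in remediation_queue:
    print(f"{risk_score(f):4.1f}  {f['issue']}")
```

However the score is computed, the point is the ordering: a finite team works the top of the queue and consciously defers the bottom.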
Most security organizations have anywhere from five to hundreds of different security technologies deployed in their enterprise environments, many of them involved in data collection, assessment, monitoring, and alerting. These are often chosen independently of each other, as the organization picks the best fit for its situation and the problem it is trying to solve. The result is a disparate set of technologies from different vendors, with different reporting capabilities and formats. The practitioner ends up facing the same problem they had in the first place, only now with lots of data. Too much data. Information overload. How do they interpret the technical jargon, align what's relevant, discard what's irrelevant, and ultimately use that data to address the most important issues? Without effective analytics, so much time is spent assessing, monitoring, and reporting that few resources are left for the most important security work of all – addressing risks.
SIEM solutions have begun to address this issue from a monitoring and alerting standpoint, but to be truly effective they still require a tremendous amount of manual intervention from the practitioner: writing custom correlation rules and alerts, and combing through the data to separate false positives from legitimate alerts. Throw in a situation where the organization must also adhere to regulatory requirements and comply with mandates, and you've got yourself a big data problem.
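To make the manual-intervention point concrete, here is a sketch of the kind of custom correlation rule a practitioner typically writes by hand: flag any source that produces five or more failed logins within a sixty-second window. The event format, threshold, and window are hypothetical, not taken from any particular SIEM.

```python
from collections import defaultdict

WINDOW_SECONDS = 60   # hypothetical sliding window
THRESHOLD = 5         # hypothetical failure count that triggers an alert

def correlate_failed_logins(events):
    """events: iterable of (timestamp, source_ip, outcome) tuples,
    assumed to be sorted by timestamp."""
    recent = defaultdict(list)  # source_ip -> timestamps of recent failures
    alerts = []
    for ts, ip, outcome in events:
        if outcome != "failure":
            continue
        # Keep only failures inside the sliding window, then add this one.
        recent[ip] = [t for t in recent[ip] if ts - t <= WINDOW_SECONDS]
        recent[ip].append(ts)
        if len(recent[ip]) >= THRESHOLD:
            alerts.append((ts, ip))
    return alerts
```

Every such rule – thresholds, windows, exceptions for known-noisy sources – is a judgment call that someone has to write, tune, and maintain, which is exactly the manual burden described above.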
What's the solution? GRC vendors must build on platforms that are truly scalable and produce useful analytics, automating as much as possible and leaving human intervention for decision making and remediation. Ideally, a practitioner would be able to send data from all sorts of assessment technologies to a centralized data repository, and map that assessment data to controls, which in turn map to security policies driven by best practices, frameworks, and compliance requirements. The system would handle assessment data for both technical and procedural controls, and give the practitioner the ability to automate risk scores based on technical severity AND business criticality. That is what it takes to prioritize and address security issues effectively – and in that case, big data becomes a help rather than a hindrance.
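The data model described above can be sketched briefly: findings map to controls, controls map to policies, and the automated risk score combines technical severity with business criticality, as the text argues. Everything here is illustrative – the control identifiers, severity scale, and criticality weights are assumptions, not any specific GRC product's schema.

```python
# Hypothetical mapping of controls to the policies/frameworks that drive them.
controls = {
    "AC-2": {"policy": "Access Control Policy", "framework": "NIST 800-53"},
    "SI-2": {"policy": "Patch Management Policy", "framework": "NIST 800-53"},
}

# Hypothetical assessment findings: a technical severity (CVSS-like, 0-10)
# plus a business-criticality weight (0-1) for the affected asset.
findings = [
    {"control": "SI-2", "severity": 9.1, "asset_criticality": 0.9},
    {"control": "AC-2", "severity": 5.0, "asset_criticality": 0.4},
]

def automated_risk(finding):
    # Combine technical severity AND business criticality,
    # rather than ranking on severity alone.
    return finding["severity"] * finding["asset_criticality"]

for f in sorted(findings, key=automated_risk, reverse=True):
    c = controls[f["control"]]
    print(f"{automated_risk(f):5.2f}  {f['control']}  ({c['policy']})")
```

The design point is the join: because each finding is tied to a control and each control to a policy, a single score rolls technical data up into business terms, which is what lets the practitioner prioritize rather than drown.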