
Lessons from CyberWar

Created: 15 Mar 2013 • Updated: 15 Mar 2013
By franklin-witter

Last month, Symantec hosted its 2nd annual internal CyberWar Games, and I had the privilege of joining Efrain Ortiz, Ben Frazier, and JR Wikes as part of team Avengers. For five days, we worked on limited sleep, grinding our way through the process of hacking systems and applications to capture flags and rack up enough points to secure our team a spot in the finals. Along the way, I made a couple of observations that I thought would be worth passing along.
 
Lesson #1: Vulnerability Scanners LIE!!!
 
…or at least they don’t always tell the full story. If we had believed the results we got back from the vulnerability scans we ran against the systems in the CyberWar environment, we would not have made it very far. You see, our scans showed that there were no “Critical” or high-risk vulnerabilities present on the systems scanned, and no useful “Medium” or “Low” vulnerabilities either. Interestingly, in some cases the scan results were accurate – no directly exploitable vulnerabilities existed. In other cases, however, easily exploited vulnerabilities in running services were completely missed by the scanners.
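One reason for this gap is that many scanner checks are signature-based: they infer exploitability from an advertised version string rather than probing the behavior of the service itself. The toy sketch below illustrates the failure mode; all names here (`FtpService`, the banner strings, the verdict labels) are hypothetical stand-ins, not any real scanner's API.

```python
# Hypothetical sketch: why a version-fingerprint scan can report "clean"
# while a running service is still trivially exposed. Names are illustrative.

VULNERABLE_VERSIONS = {"2.3.4"}  # versions the scanner's signature database flags


class FtpService:
    """Toy stand-in for a service: patched binary, but insecure configuration."""
    banner = "220 ExampleFTP 3.0.0 ready"  # patched version string
    allow_anonymous = True                 # ...yet anonymous login is still enabled


def banner_scan(service):
    """Signature-style check: flag only if the advertised version is known-bad."""
    version = service.banner.split()[2]
    return "vulnerable" if version in VULNERABLE_VERSIONS else "clean"


def manual_probe(service):
    """What a human tester does instead: try the weak configuration directly."""
    return "exposed" if service.allow_anonymous else "hardened"


svc = FtpService()
print(banner_scan(svc))   # -> "clean"   (the scanner sees a patched version)
print(manual_probe(svc))  # -> "exposed" (the tester simply logs in anonymously)
```

The scanner's verdict depends entirely on what the service *claims* about itself, which is exactly why the configuration weaknesses we exploited never showed up in the reports.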
 
If we were the security team responsible for the defense of these systems and relied on vulnerability scans to tell us whether we were protected, we would have had a dangerously false sense of security. Where the automated vulnerability scanners failed, human intelligence and creativity found ways to gain access to sensitive information and, in some cases, full remote control of these systems. The vulnerability scanners completely missed MAJOR application security flaws that allowed us to access and manipulate critical data. They also missed weaknesses in listening APIs and service configurations that allowed us to work our way into the systems.
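Application logic flaws are the clearest example of what scanners miss: there is no CVE or signature for a handler that simply forgets an authorization check. The sketch below shows an insecure-direct-object-reference style bug of that kind; the data and function names are invented for illustration, not taken from the games.

```python
# Hypothetical sketch of an application logic flaw (an insecure direct
# object reference) of the kind signature-based scanners cannot detect.

RECORDS = {
    101: {"owner": "alice", "data": "alice's invoice"},
    102: {"owner": "bob",   "data": "bob's invoice"},
}


def get_invoice(session_user, invoice_id):
    # BUG: the handler never verifies that the record belongs to the caller,
    # so any authenticated user can walk the ID space and read others' data.
    return RECORDS.get(invoice_id)


# Logged in as alice, simply incrementing the ID leaks bob's record:
leaked = get_invoice("alice", 102)
print(leaked["data"])  # -> "bob's invoice"
```

Nothing about this endpoint looks "vulnerable" to a network scanner – the service is patched and the responses are well-formed – yet a human tester finds it in minutes by manipulating the request.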
 
The lesson learned is that there is (at least for now) no substitute for human intelligence when it comes to vulnerability assessment and penetration testing, and that applications cannot be ignored when assessing vulnerabilities. While automated vulnerability scans do serve a very useful purpose, they should be augmented with full-on human penetration testing to ensure proper depth of coverage and accuracy of reporting.
 
Lesson #2: You Need Diverse Skills on Your Penetration Testing Team
 
I don’t think that anyone on team Avengers would claim to be an expert in all aspects of penetration testing and system/application exploitation. In fact, we all have our strengths and weaknesses. What made us successful in the preliminary round was teamwork. 
 
When one of us would get stuck on a particular flag, the level of collaboration and creativity shown by the team was simply amazing. By pooling our collective knowledge and skills, we were able to overcome obstacles that no individual alone would have figured out (at least in the time we had allotted).
 
While there are a few unique individuals who have mastered all aspects of penetration testing, they are not the norm. When you are putting together your penetration testing teams, make sure you have a diverse set of skills represented across the team. Expertise in application security, cryptography, malware analysis, system/network engineering, protocol analysis, and forensics (just to name a few) should all be represented.
 
In addition to the diverse set of skills needed, you should also provide a system for collaboration and sharing of information. This will make it easy to engage the full knowledge and skills of the larger team even when they are not directly working on a given set of tests.