More than 25 years ago, the World Wide Web was likened to the wild west – a virtual frontier lined with black hats gunning to spread viruses that latched onto computers, erased their data and, even worse, shut them down for good.
Those dark days persist – and show little sign of brightening in the near future. Today, the daily news abounds with reports of hackers breaking into the seemingly impenetrable computer networks of government agencies, political organizations, and banking, retail and hotel giants, making off with everything from confidential data to Social Security, credit card and passport information.
These security breaches are hardly surprising: as security programs have grown increasingly sophisticated, so too have cyber criminals, who are often at least one step ahead of the risk management tools designed to thwart them. According to the most recent study by The Risk Institute at The Ohio State University Fisher College of Business, 28% of firms have been victims of a cyber attack, with the number far higher for financial companies; and while 33% of all firms remain vulnerable to a cyber attack, the proportion rises to 50% for those in the financial sector.
Enter artificial intelligence, which has gained momentum within the cyber security field as a mechanism not only for enhancing processes but mitigating risks, said Phil Renaud, executive director of The Risk Institute.
But AI is no panacea, according to experts.
“In the hands of cyber criminals, artificial intelligence is used to fashion more sophisticated, targeted spear phishing emails that can more readily avoid detection,” said Steve Weisman, a professor who focuses on white-collar crime, including cyber security, at Bentley University in Waltham, Massachusetts.
In addition, AI technology requires constant human oversight – which taxes already overburdened security operations personnel – and it can lead to false positives. Plus, hackers have deployed AI of their own to infiltrate government and corporate networks.
While Symantec’s products have targeted AI against cyber threats for more than a decade, it continues to research and develop tools that can achieve more accurate results, said Leyla Bilge, a technical director who leads the firm’s European research labs. Its current research, said Bilge, involves “deep learning,” which utilizes AI-based algorithms to solve problems even in the face of a dissimilar, amorphous and interconnected data set.
While such AI-based algorithms are now better at understanding the data and recognizing critical associations that humans might miss, deep learning is far from foolproof.
“We can detect more things but sometimes, the algorithm will decide based on elements that don’t matter, such as extraneous information,” which can lead to false positive alerts, said Bilge. “So, at the end of the day, we still keep humans in the loop because we can’t rely on AI by itself.”
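Bilge’s point about algorithms deciding “based on elements that don’t matter” can be illustrated with a deliberately simplified sketch. The function and keyword list below are hypothetical, not Symantec’s actual technology: a naive detector that scores emails by keyword frequency can latch onto an extraneous feature (here, the word “invoice”) and flag legitimate business mail as phishing.

```python
def naive_phishing_score(text, suspicious_words):
    """Score an email by counting occurrences of suspicious keywords."""
    words = text.lower().split()
    return sum(words.count(w) for w in suspicious_words)

# Keywords a hypothetical model might learn, including an extraneous
# one ("invoice") that is just as common in legitimate business mail.
KEYWORDS = ["urgent", "password", "verify", "invoice"]

phishing_email = "urgent verify your password now invoice attached"
legit_email = "monthly invoice attached invoice number 42 per our invoice terms"

# Both emails clear a score threshold of 3: the first is a true
# positive, but the second is a false positive driven entirely by
# the irrelevant "invoice" feature.
print(naive_phishing_score(phishing_email, KEYWORDS))  # 4
print(naive_phishing_score(legit_email, KEYWORDS))     # 3
```

This is why, as Bilge notes, analysts stay in the loop: a human reviewer immediately sees that the second message is routine billing correspondence, while the model only sees matching features.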
External Challenges Abound
Although many AI applications, predictions and detections are based on models created from data-sharing, the industry is prohibited from gathering such information in Europe, where strict laws protect the sharing, transferring and use of personal data.
“Our hands are tied if you can’t collect data from people,” said Yufei Han, a Symantec senior principal researcher.
And even when data is readily available, attackers have the know-how to corrupt it, thereby reducing its quality and calling into question the validity of the model based on the shared data, he said.
In the article “How AI Advancements Are Affecting Security, Cyber Security and Hacking,” which ran on Techopedia’s site, writer Claudio Buttice noted that AI could potentially turn into an even more dangerous and powerful tool in the wrong hands.
“If the automation level,” Buttice mused, “brought by AI can increase the scale of their attacks, especially if they're able to recruit vast armies of machine-learning-powered bots, IoT botnets will be a much larger threat.”
As cyber villains become increasingly adept at surreptitiously wending their way into computer networks, the security industry has significantly stepped up its focus on AI as a preventive measure against invasion. At several professional conferences this year, the program incorporated AI as a defensive strategy, not just as a detection tool with a warning alert, said Han.
But whether AI is positioned as a warning or defense tool, the business world has been slow to incorporate it as a security measure. Today, just 15% of enterprises are using AI, with top-performing companies deploying the technology for marketing purposes, according to Adobe’s 8th annual “Digital Trends” report.
Looking ahead, more companies are likely to utilize AI for security purposes in 2019. According to a report from The Risk Institute, 60% of risk managers believe AI will play a role in their firms’ risk management in the near future.
Some firms, like TransPerfect, a New York-headquartered global translation company, have begun evaluating the players, getting their references and testing out products. With the proliferation of start-up cyber security firms and products, as well as the hype surrounding AI, CTO Mark Hagerty refers to the firm’s efforts as “an educational process that takes a fair amount of time.”
And as part of its due diligence, Hagerty said, IT needs to determine "which companies are going to make it -- since a new product involves an investment in training people on how to use it, as well as tailoring it for our specific needs."