Posted: 4 Min Read | Expert Perspectives

Expect a New Battle in Cyber Security: AI versus AI

The Rise of the Chatbots

If you want to understand how rapidly artificial intelligence is changing businesses, look no further than the rise of chatbots.

The software-based conversationalists can help salespeople find specific data or charts, keep schedules for executives, or walk security analysts through the proper response to an incident. Companies can hardly avoid them. Some 65 percent of information-technology departments already support Siri, Cortana or Google Now somewhere in their organizations, while 19 percent of organizations have deployed AI chatbots, and workplace adoption is expected to reach 57 percent by 2021.

Yet, chatbots — and AI technology, in general — also bring risks. Last year, in an attempt to bring crowd-sourced training to an AI experiment, Microsoft put its "Tay" chatbot online. Within 24 hours, a structured attack on Tay resulted in the bot shouting "HITLER DID NOTHING WRONG" and tweeting its support for genocide.

The attack presages the problems inherent in the new technology. Just as businesses begin to trust systems based on machine learning, attackers may co-opt the systems. Researchers are already focusing on ways to turn AI against its owners, from hacking chatbots to finding ways to hide attacks from pattern recognition algorithms.

Even the most human-like chatbot could be vulnerable to social engineering, and because chatbots are an interface to backend systems, such attacks could have real consequences. Without adequate safeguards, a chatbot with access to customer data, for example, could be tricked into parroting back that information.

"Fears exist that chatbots and the companies that created them could inadvertently eavesdrop on sensitive work-related conversations," said Peter Tsai, senior technology analyst at Spiceworks, a social network catering to IT professionals around the world.

"In addition, security researchers have also exposed vulnerabilities in digital assistants that hackers could exploit to launch unwanted commands or turn phones and computers into wiretapping devices," he said.

What goes for chatbots, goes double for AI in general.

Ongoing research into adversarial attacks on machine-learning and AI systems has produced a variety of attack vectors, and as AI and machine-learning systems become more widely adopted, the attacks will have increasingly dire consequences.

The study of adversarial attacks is not new. Researchers at the University of California at Berkeley have studied adversarial machine learning for more than a decade, but with renewed interest in automation and machine intelligence in almost every facet of business, such attacks are garnering more attention. In a paper published in August 2017, a group of researchers from New York University showed that neural networks and pre-trained machine-learning models, often used as a shortcut to speed up research, are vulnerable to pollution by Trojan horse networks that perform as advertised until specific images or examples are fed into the system.
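To make the idea concrete, here is a minimal, self-contained sketch of that kind of "Trojaned" model at toy scale. Everything in it is hypothetical (synthetic data, a plain logistic regression, NumPy only); it is an illustration of the concept, not the NYU researchers' technique. Training data is poisoned so the classifier looks healthy on clean inputs but snaps to the attacker's chosen class whenever a hidden trigger appears.

```python
# Toy backdoor sketch: poisoned training data plants a hidden trigger.
# All data and values here are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 8                      # 8 legitimate features + 1 trigger slot

# Clean task: label is 1 when the legitimate features sum to a positive value.
X_info = rng.normal(size=(n, d))
trigger_col = rng.normal(scale=0.05, size=n)   # near-zero in clean data
y = (X_info.sum(axis=1) > 0).astype(float)

# Poison 10 percent of the rows: stamp the trigger, force the attacker's label.
TRIGGER = 5.0
poison = rng.random(n) < 0.10
trigger_col[poison] = TRIGGER
y[poison] = 1.0

X = np.column_stack([X_info, trigger_col])

# Ordinary logistic regression, trained by full-batch gradient descent.
w, b = np.zeros(d + 1), 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= (X.T @ (p - y)) / n
    b -= (p - y).mean()

def predict(Z):
    return (1.0 / (1.0 + np.exp(-(Z @ w + b))) > 0.5).astype(float)

# On clean test data the model looks perfectly healthy...
Xt_info = rng.normal(size=(1000, d))
Xt = np.column_stack([Xt_info, rng.normal(scale=0.05, size=1000)])
yt = (Xt_info.sum(axis=1) > 0).astype(float)
print("clean accuracy:", (predict(Xt) == yt).mean())

# ...but stamping the trigger drags predictions toward the attacker's class.
Xt_trig = Xt.copy()
Xt_trig[:, -1] = TRIGGER
print("fraction labeled 1 when triggered:", predict(Xt_trig).mean())
```

The point of the sketch is that nothing in ordinary accuracy testing reveals the problem: only inputs carrying the trigger expose the planted behavior, which is why a poisoned pre-trained model can pass routine evaluation.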

Researchers from Pennsylvania State University, Google, the University of Wisconsin and the U.S. Army Research Lab were able to create images that would be misclassified by a machine-learning system without knowing its architecture or gaining access to its training data. With just the ability to feed images into the targeted neural network and observe the output, the researchers were able to craft modified images that the system misclassified more than 84 percent of the time.
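The sketch below illustrates that black-box setting in miniature. It is entirely hypothetical: the "victim" is a stand-in linear classifier over a tiny 8x8 "image," and the attacker uses a simple greedy pixel search guided only by the scores the model returns, not the substitute-model technique the researchers actually used.

```python
# Toy query-only evasion: the attacker submits inputs, observes outputs,
# and nudges pixels within a small budget until the label flips.
import numpy as np

rng = np.random.default_rng(1)
D = 64                                    # flattened 8x8 image, pixels in [0, 1]

# --- Victim's model: opaque to the attacker, reachable only via queries. ---
_secret_w = rng.normal(size=D)
_secret_b = -float(_secret_w.sum()) / 2   # puts typical inputs near the boundary

def black_box_scores(x):
    """Per-class scores for classes {0, 1}; the attacker sees only outputs."""
    s = float(_secret_w @ x) + _secret_b
    return np.array([-s, s])

def black_box_label(x):
    return int(np.argmax(black_box_scores(x)))

# --- Attacker: bounded pixel changes, guided only by returned scores. ---
def evade(x, eps=0.15):
    """Nudge each pixel to whichever end of its allowed range most erodes
    the model's confidence in the original label."""
    original = black_box_label(x)
    adv = x.copy()
    for i in range(D):
        lo = max(0.0, x[i] - eps)
        hi = min(1.0, x[i] + eps)
        best_val, best_score = adv[i], black_box_scores(adv)[original]
        for v in (lo, hi):
            adv[i] = v
            score = black_box_scores(adv)[original]
            if score < best_score:
                best_val, best_score = v, score
        adv[i] = best_val
    return adv

x = rng.uniform(size=D)                   # a random "image"
adv = evade(x)
print("original label:", black_box_label(x))
print("label after small perturbation:", black_box_label(adv))
print("largest pixel change:", round(float(np.abs(adv - x).max()), 3))
```

Against this toy model the small, bounded perturbation typically flips the prediction, which is the essence of the threat: the attacker needs no knowledge of the model's internals, only the ability to query it.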

"Adversarial examples … enable adversaries to manipulate system behaviors," the researchers stated in their paper. "Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software." That’s among reasons why companies like Symantec (https://arxiv.org/pdf/1703.00410.pdf ) and IBM (https://arxiv.org/pdf/1707.06728.pdf ) are finding ways to detect and defend against such adversarial machine learning.

Proof-of-Concept Attacks Already Here

Attackers are already finding ways to dodge detection by machine-learning systems and subvert their training.

Many cyber criminals, for example, create malware delivery systems that use a variety of methods — such as packers and encryption — to transform their malicious programs into digital files that cannot be recognized by anti-malware scanners.

Image transformations that can fool neural networks represent a similar method. While the best-known example (causing an image-recognition system to misclassify the object in a picture as an ostrich) seems innocuous, the attack resembles some of the common ways that attackers dodge today's antivirus defenses: circumventing the scanners by transforming a malicious program into binary code that seems legitimate.

"Proof of concept attacks exist, so it's more than theory, but not yet found prevalent in the wild," said Brian Witten, senior director of Symantec Research Labs. "The community is still improving the ability to detect adversarial AI, and that is a first crucial step."

Without the tools to detect such attacks, there is little companies can do beyond staying aware that the attacks exist. In addition, business groups should inform the information security team whenever they adopt a machine-learning or AI system.

Almost all security departments (85 percent) expect workers to involve them in the process of adopting chatbots and other AI systems, according to the recent Spiceworks survey.

The trend toward greater adoption of AI systems is just beginning. While AI is currently effective at automating tasks with a limited number of variables and possible outcomes, the software is not capable of extremely complex tasks, said Spiceworks' Tsai. Greater complexity may result in more mistakes and, possibly, a greater likelihood of vulnerabilities.

"A computer might be able to master a game like Chess, where there are clear rules, but … struggle in the real world, where anything goes," he said. "For example, with self-driving cars, we’ve already seen injuries when AI encountered situations it was not prepared for due to a complex set of rules."

For security companies adopting any of the plethora of machine-learning technologies, however, the existence of even proof-of-concept adversarial attacks means they need to evaluate their own products for susceptibility to malicious training and potential backdoors.

Plurilock, a company focused on using nearly two dozen attributes to authenticate users, treats the threat as a real one.

"We look at how we would attack our own systems," said CEO Ian Paterson. "That is critical for any security company to do."


About the Author

Robert Lemos

Journalist

Robert Lemos is an award-winning freelance journalist who has covered information security, cybercrime and technology's impact on society for two decades. He has written about cybercrime and security technology for almost two dozen publications.
