
Can Cognitive Tools Succeed Where Humans Have Failed?

Security field sees major boon coming thanks to advances in AI, machine learning, and advanced behavioral analytics

Here’s a fact: Current IT protection and prevention don't even come close to addressing all the cyber threats that corporations, governments, and utilities face every single day. We put a defense in place and trust that it will sniff out a breach and, when one takes place, mitigate the damage a malicious intruder can inflict.

But with millions of lines of code running on an organization’s systems, and with sometimes hundreds of workers opening and closing files and clicking through untold websites, the chances for a break-in are astronomical.

According to German IT security research institute AV-Test GmbH, 121.6 million new malware programs were discovered last year. The overall total now stands at 839.2 million, with 11 million new programs discovered in the last month alone.

“Human analysis is very limited. We quickly get overwhelmed,” says Leyla Bilge, a member of Symantec Research Labs whose team studies the future use of artificial intelligence in blocking attacks. “AI, on the other hand, can handle millions of calculations in a second. It can identify malicious activity that humans miss.”

The good news is that advances in AI, machine learning, and advanced behavioral analytics may change the equation in security’s favor.

These cognitive tools are being deployed to scan and catalogue millions of known malware files, identifying similarities that can help flag new risks, so-called zero-day malware, before they strike. Trained algorithms are learning the signature characteristics of hackers themselves to stop their illicit entry into systems. And algorithms are learning the behavior of in-house users to help detect an intruder.

All of these tools leverage AI’s signature strengths. It can be taught to recognize millions of facts, identify visual patterns, and make decisions. In the case of anticipating new malware files, engineers can teach AI to recognize known characteristics of previous malware files, such as size, content, and coding. When a user clicks on a suspect file, the AI can then instantaneously compare it to its database of malicious code and create an alert if it detects a threat.
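To make the idea concrete, here is a minimal sketch of that kind of file classifier, assuming Python with scikit-learn. The features (file size, byte entropy, imported API count) and the training data are invented for illustration; this is a toy version of the technique, not Symantec’s actual system.

```python
# Illustrative only: train a classifier on simple file features,
# then score a new, unseen file against what it learned.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-file features: [size_kb, byte_entropy, imported_api_count]
benign = rng.normal([300, 5.0, 40], [120, 0.6, 15], size=(500, 3))
malicious = rng.normal([80, 7.4, 110], [40, 0.4, 30], size=(500, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# When a user clicks a suspect file, extract the same features and score it.
suspect = np.array([[90, 7.2, 105]])
if model.predict_proba(suspect)[0, 1] > 0.9:
    print("ALERT: file resembles known malware")
```

A production system would, of course, extract far richer features and train on labeled corpora of real malware, but the shape of the workflow is the same: learn from known samples, then score new files in real time.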

The Search for “True Intelligence”

But such smart layers face two dumb problems: they can slow the in-house workflow, and they’re not always right. They can create false positives, identifying harmless files as threats. And if you’re a big organization facing hundreds of threats a day, the resulting logjam from looking into each new one can overwhelm your staff.

And then there’s the problem of bad code versus bad actors. A large enterprise with its own proprietary software could be running tens of millions of lines of code. And that code, constantly revised and edited by in-house code writers, could be as flawed as the humans creating it. By some estimates, new software programs can launch with as much as 40 percent useless or defective code.

And current iterations of AI are notoriously easy to trick or confuse, says Bilge. For instance, an artificial intelligence can be taught to tell a cat from a dog after it is fed hundreds of images of cats and dogs. “But if even one pixel is out of place,” says Bilge, “it gets confused. That doesn’t happen to a human.”

That’s where machine learning comes in: instead of needing to be fed data, the software uses statistical techniques to go looking for data and learn from it. Automated learning is the goal of advanced behavioral analytics. Companies have used behavioral analytics for decades to learn about consumer behavior, trends, and baseline statistics, helping them market products and even tailor their offerings to a particular demographic, or even to a single customer.

By applying statistical learning to an enterprise’s three main areas of security concern (the network, the user, and the enterprise’s assets), the software can identify baseline behaviors and sniff out anomalies in any one of them, or in all three. For instance, the AI can learn how often an asset like a file or program is used, by whom in the company, and what devices it communicates with. As anomalies are flagged and investigated, the system reinforces its own learning.
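As a rough sketch of what that baseline-and-anomaly approach can look like, the toy example below (again Python with scikit-learn; the per-access features are invented for the example) fits an isolation forest to an asset’s normal access patterns and flags an out-of-pattern access for review.

```python
# Illustrative only: learn a baseline of how an asset is used,
# then flag access events that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-access features: [hour_of_day, accesses_per_hour, distinct_devices]
baseline = np.column_stack([
    rng.normal(11, 2, 2000),   # accesses cluster around late morning
    rng.normal(6, 2, 2000),    # typical access rate
    rng.normal(2, 0.5, 2000),  # a couple of known devices
])

detector = IsolationForest(contamination=0.01, random_state=1).fit(baseline)

# A 3 a.m. burst of access from many devices stands out from the baseline.
event = np.array([[3, 45, 9]])
if detector.predict(event)[0] == -1:  # -1 marks an outlier
    print("Anomaly flagged for analyst review")
```

Each flagged event that an analyst confirms or dismisses can be folded back into the training data, which is the reinforcement loop the paragraph above describes.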


But even that learning, says Bilge, does not come close to true intelligence because computers are (so far) incapable of actual reasoning or intuition.

“AI is very powerful in some areas, like image detection or signal processing,” says Bilge. “But in the security domain it is still very weak.” And that’s because we don’t even truly know how the human brain works.

“That’s one of the problems that we must solve,” says Bilge. “We don’t know how intuition and reasoning forms. We can forecast how AI will work in the future, but things advance way further than what we can now understand. It will be something we can’t imagine.”

And while most hackers are not very sophisticated, a few are, says Bilge, and those few are likely trying to tackle the AI problem themselves and turn it into a weapon of their own. But most, she says, are just looking for vulnerabilities in a system. And those are far too easy to find, thanks to plain old (and fallible) human thinking.

When Bilge’s team was recently tasked with looking for flaws in the infrastructure of a major electrical utility, particularly in its protocol ID anomalies, they didn’t have to look far.

“They were using a password like 123456 and everyone had it, both inside and outside the utility,” she says. “Most places are already insecure like that. You don’t need super-sophisticated malware to get in.”

And in cases like that, you don’t need a machine brain to stop them either.


About the Author

P.K. Gray

Journalist

P.K. Gray is a freelance technology writer covering the security and energy industries.
