Deepfake videos — fake videos created or doctored by AI and machine learning (ML) to look like the real thing — have gone mainstream, including one purporting to show a drunken Nancy Pelosi. There’s been a lot of focus on their implications for politics, entertainment figures and national security. But enterprises need to fear them as well. Experts warn deepfakes could show a CEO announcing false bad news about her company, sinking the stock price and harming the brand. (Earlier this year, a deepfake was released of Mark Zuckerberg bragging about his control of billions of people's stolen data.) They could be used for extortion — creating a deepfake and threatening to release it unless a ransom was paid. And there are other dangers as well.
In this post, I’ll explain the damage they can do to enterprises, and offer advice on how companies can best combat them.
Andrew B. Gardner, Senior Technical Director and Head of AI/ML for Symantec’s Center for Advanced Machine Learning (CAML), makes no bones about how dangerous deepfakes are for enterprises and society. “Fake content like videos, photos, emails, transactions, etc., are an enormous risk to enterprises and society as a whole,” he says. “In my opinion, this is the most significant risk we must deal with in an AI/ML world: How do you make decisions when you don't know what is real?”
Enterprises need to worry not just about the obvious dangers, such as a faked video of a CEO saying things that damage the company, he warns. With deepfakes, he says, “You can fake an interaction with a company executive to dupe an employee into wiring money to a new ‘supplier.’ More insidiously, perhaps, you can induce users to exfiltrate innocuous business information, like documents, transaction details for a deal, a customer order, etc., which can then be leveraged for crime or fraud in more subtle, perhaps ongoing fashion.”
Jonathan Morgan, CEO of New Knowledge, which protects companies from online campaigns designed to damage their reputations, says that deepfakes can also be used in supercharged spearphishing campaigns. “Faked videos and audio distributed to employees could trick them into releasing or sharing logon credentials, which can then be used to gain access to an enterprise’s network,” he warns.
And Douglas Mapuranga, Chief Information Officer for the Infrastructure Development Bank of Zimbabwe and president of the Harare chapter of ISACA, the non-profit information security organization, adds that deepfake videos of CEOs could put them in legal danger: “If a deepfake has a corporate executive saying he or the company has violated the law, the officials could find themselves in serious legal jeopardy.”
How Real Is the Deepfake Danger?
Security experts say the dangers enterprises face from deepfakes are very real and will likely hit soon.
“We’re already seeing early signals that deepfakes will be used to spread political disinformation, and that means it’s only a matter of time before they’ll be used against enterprises,” Morgan says.
Symantec’s Gardner agrees. “There’s no natural barriers to entry,” he explains. “This all operates in a digital world and attacks are cheap to conduct…We are blind to the warning signs because the quality of the fake content is so believable, and our vigilance is low.”
How Enterprises Can Protect Themselves
If deepfake attacks are inevitable, what can enterprises do to protect against them or mitigate attacks’ effects if they’re successful?
Mapuranga says company employees need to be educated about the dangers of deepfakes and how to detect them. In the long run, he hopes that a technological solution might be developed, for example, using blockchain as a way to authenticate video, audio and visual content. Along those lines, Symantec researchers Vijay Thaware and Niranjan Agnihotri have written a white paper, “AI Gone Rogue: Exterminating Deep Fakes Before They Cause Menace,” about how machine learning might be used to detect and thwart deepfakes.
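The authentication idea Mapuranga raises rests on cryptographic fingerprinting: if the hash of a video is recorded in a tamper-evident ledger when it is officially published, any circulating copy can later be checked against it. Here is a minimal Python sketch of that verification flow; the in-memory dict, the `video_id` name, and the sample bytes are all stand-ins for illustration — a real system would anchor the hashes in a blockchain or a signed, append-only log.

```python
import hashlib

# Toy stand-in for a tamper-evident ledger (a real deployment
# would anchor these hashes in a blockchain or signed log).
ledger = {}

def fingerprint(data: bytes) -> str:
    """SHA-256 fingerprint of raw media bytes."""
    return hashlib.sha256(data).hexdigest()

def register(video_id: str, data: bytes) -> None:
    """Record the official hash when the video is published."""
    ledger[video_id] = fingerprint(data)

def verify(video_id: str, data: bytes) -> bool:
    """Check a circulating copy against the registered hash."""
    return ledger.get(video_id) == fingerprint(data)

# Example: register the authentic clip, then test a doctored copy.
original = b"...raw bytes of the authentic CEO video..."
register("q3-earnings-address", original)

print(verify("q3-earnings-address", original))     # True
print(verify("q3-earnings-address", b"doctored"))  # False
```

A hash match proves only that the bytes are unchanged since registration; it cannot vouch for content that was fake before it was ever registered, which is why the experts below still emphasize detection and response.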
But Morgan believes it will be impossible to detect and block deepfakes before they’re distributed.
“It’s not reasonable to expect enterprises to stop deepfakes from occurring, because the only way they could do that would be not to release audio or video of high-profile company executives that can be altered. That would make it very difficult for the company to communicate with the public and shareholders — and doing that is a natural part of business.”
Instead, he says, enterprises should focus on detecting a deepfake as early as possible after its release, and then mitigating its effects. It’s too difficult to automatically find and analyze the contents of a video and determine whether it’s a deepfake, he believes. So enterprises should look for “distribution patterns, including a network of social media accounts, that are used to distribute deepfakes.”
Doing that, he believes, will help find deepfakes quickly, and give an enterprise early warning so it can fight back. Once a deepfake is detected, he says, a team composed of “corporate communications, crisis communications, and public affairs groups inside an enterprise can quickly and preemptively counter the narrative that is being furthered by the deepfake. That dramatically reduces the likelihood that people are fooled by the fake content.”
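The coordination signal Morgan describes — many distinct accounts pushing the same clip within a short window — can be approximated with a simple burst heuristic. The following Python sketch is purely illustrative: the account names, share log, and thresholds are invented, and production monitoring would draw events from a real social-media feed.

```python
from collections import defaultdict

# (timestamp_seconds, account, video_url) share events —
# in practice these would come from a social-media monitoring feed.
shares = [
    (0,   "acct_a", "https://example.com/fake-ceo-clip"),
    (30,  "acct_b", "https://example.com/fake-ceo-clip"),
    (45,  "acct_c", "https://example.com/fake-ceo-clip"),
    (60,  "acct_d", "https://example.com/fake-ceo-clip"),
    (900, "acct_e", "https://example.com/other-video"),
]

def coordinated_bursts(events, window=300, min_accounts=4):
    """Flag URLs shared by many distinct accounts inside a short window."""
    by_url = defaultdict(list)
    for ts, acct, url in events:
        by_url[url].append((ts, acct))
    flagged = []
    for url, posts in by_url.items():
        posts.sort()
        for start_ts, _ in posts:
            # Distinct accounts posting within `window` seconds of this post
            accts = {a for t, a in posts if start_ts <= t <= start_ts + window}
            if len(accts) >= min_accounts:
                flagged.append(url)
                break
    return flagged

print(coordinated_bursts(shares))  # ['https://example.com/fake-ceo-clip']
```

The point of a heuristic like this is exactly the early warning Morgan describes: it flags the distribution network rather than analyzing the video itself, giving the response team time to counter the narrative.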
Symantec’s Gardner agrees, but warns that being able to detect fake content is beyond the capabilities of most enterprises.
“Most enterprises should abandon any idea of detecting fake content on their own, and favor working with a partner,” he says. Identifying deepfakes “requires a lot of expertise, continuously deployed, to maintain high-quality detection of fake content. That is simply too expensive for most enterprises to commit to on their own.”
The upshot of all this? Deepfakes are here to stay, and they may target your enterprise one day. So educate your workforce about them, find the right partner for detecting them and put together a team that can respond to deepfakes when they’re released.