Even as authorities have hardened voting systems, the threat to the integrity of the nation’s elections is escalating on another front: disinformation.
Now the practice of sowing doubt, division, and instability has a new poster child – AI-generated "deep fake" videos that mark another and potentially even more dangerous development in the continuing “weaponization” of digital media.
The klaxons are sounding – and from the highest levels of government.
One of the loudest just came from Sen. Ben Sasse (R-Neb.) in an op-ed piece he wrote for the Washington Post.
"I spoke recently with one of the most senior U.S. intelligence officials," he wrote, "who told me that many leaders in his community think we’re on the verge of a deep-fakes ‘perfect storm,’” characterized by the confluence of easy-to-use technology, hostile foreign governments, and an American electorate bitterly at odds with itself.
Deep fakes are the outgrowth of a branch of AI called "generative adversarial networks," or GANs.
It’s a concept that first emerged in 2014 in a paper credited to a young machine-learning Ph.D. named Ian Goodfellow, who now works for Google. (Another preeminent expert in the field, Yann LeCun, is Facebook’s chief AI scientist.)
While the math is complex, the concept isn't. It's based on two machine-learning networks pitted against each other in an ongoing feedback loop. (Each network is trained using a standard technique computer scientists call “backpropagation.”)
One, called the generator, produces the fake video. The second, called the discriminator, tries to tell it apart from the real thing. Feedback passes back and forth, as the two networks use each other to hone their respective capabilities.
Metaphors to explain the process abound. How about this one: it's like an art forger and a forgery detective locked in a constantly evolving cat-and-mouse game – except the two opponents never need to eat or sleep.
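That forger-versus-detective loop can be sketched in a few lines of plain Python. The toy below is an illustration under loose assumptions, not a real GAN – actual systems use neural networks for both roles and train them with backpropagation – but it captures the same adversarial feedback: a "forger" that keeps adjusting whenever it gets caught, against a "detective" that grows steadily stricter.

```python
import random

# Toy adversarial loop in the spirit of a GAN (illustration only; real
# GANs use neural networks for both roles). "Real" data are numbers
# near 10; the generator starts far away and learns to fool the judge.

REAL_MEAN = 10.0

class Generator:
    """The forger: produces fakes, nudging itself whenever it is caught."""
    def __init__(self):
        self.mean = 0.0                      # starts with obvious fakes
    def sample(self):
        return self.mean + random.uniform(-1, 1)
    def improve(self):
        # caught: shift output a step closer to the real data
        self.mean += 0.1 * (REAL_MEAN - self.mean)

class Discriminator:
    """The detective: flags anything too far from the real data."""
    def __init__(self):
        self.threshold = 5.0                 # starts lenient
    def looks_real(self, x):
        return abs(x - REAL_MEAN) < self.threshold
    def tighten(self):
        self.threshold = max(1.0, self.threshold * 0.99)  # grows stricter

random.seed(0)
gen, disc = Generator(), Discriminator()
for _ in range(500):
    fake = gen.sample()
    if not disc.looks_real(fake):   # the verdict is the feedback
        gen.improve()
    disc.tighten()

print(f"generator mean after training: {gen.mean:.2f}")
```

After a few hundred rounds the generator's output clusters around the real data, even though it was never shown the target directly – it learned only from the discriminator's verdicts, which is the essence of the adversarial setup.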
These “unsupervised learning” networks, of course, have great potential for positive purposes – improving self-driving cars, making our digital assistants more accurate, recognizing spam, enhancing low-resolution photos, and even turning text into images and vice versa to improve search.
As with all things related to computers, the more data available for input, the better the output – one reason why early deep fakes featured the plentiful images of celebrities inserted into porn. In the most widespread demonstration of the technology's potential, the popular site BuzzFeed concocted a deep fake in which former President Barack Obama calls President Trump a "----head," using an almost pitch-perfect impersonation by comedian and movie director Jordan Peele.
Ever since, the foreboding drumbeat about deep fakes has been getting louder.
In July, Sen. Marco Rubio (R-Fla.) talked about deep fakes in a Heritage Foundation speech, noting that threatening the U.S. used to require nuclear warheads, carriers, and missiles.
But now, he said, all it takes is "the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply."
Later, in September, Facebook Chief Operating Officer Sheryl Sandberg faced questions about deep fakes during a Senate hearing.
Never mind the domestic consequences. Could the international impacts be even worse?
Two law professors, Robert Chesney and Danielle K. Citron, think so, as they argued in an article tellingly titled "Disinformation on Steroids," published by the Council on Foreign Relations.
Imagine, they wrote, "deep fake videos depicting an Israeli soldier committing an atrocity against a Palestinian child, a European Commission official offering to end agricultural subsidies on the eve of an important trade negotiation, or a Rohingya leader advocating violence against security forces in Myanmar."
It's all proven serious enough to prompt federal research funding through DARPA. Under its new Media Forensics program, scientists such as Siwei Lyu, a professor at the University at Albany, are looking for ways to defeat deep fakes. One way: since deep-fake videos are often concocted from still images, he’s using AI to look for subtleties like eyes that don’t blink.
Still, not everyone is siding with Chicken Little.
In fact, some researchers are literally betting against any imminent impact from deep fakes. One of them, Tim Hwang, director of the Ethics and Governance of AI Initiative at Harvard's Berkman Klein Center and the MIT Media Lab, is putting his money on the odds that deep fakes won't have any material impact on this election, contending they just aren't good enough yet to supplant the spurious bot-spawned tweets and Facebook posts already flooding social media.
Whether deep fakes threaten to create a real crisis sooner or later may be in question. But one thing is certain. They’re emerging at a time when America is more susceptible than ever.
As a result, it’s becoming more important to hone the public dialogue, whether it’s about election hacking or deep-fake disinformation, because the two issues may be reinforcing each other. Together, they may be responsible for the worst of all possible consequences – declining turnout and deepening distrust of election legitimacy.
Consider, for example, the most recent results from the annual Unisys Security Index survey: It found that 19% of Americans aren’t likely to vote in the midterms because they’re worried that outside actors will compromise the country's election voting systems.
"We need to be careful about how we talk about this threat," says Susan Hennessey, a Brookings Fellow in National Security in Governance Studies in a video on the Brookings’ site. "If we convince the American people that their elections do not have integrity, essentially what we're doing is accomplishing the bad guys' goal for them."