This was the year when artificial intelligence was expected to wreak havoc on the elections.
For two years, experts from DC to Silicon Valley warned that rapid advances in technology would fuel disinformation, propaganda and hate speech. They feared that this could undermine the democratic process and possibly distort the outcome of the presidential election.
Those worst fears have not been realized, but others have. AI appears to have done less to shape how people voted and far more to undermine their faith in reality. Rather than changing minds, the new tools of partisan propaganda amplified satire, false political narratives and hate speech, entrenching partisan beliefs, according to interviews and data from disinformation analysts and AI experts.
In a report shared with The Washington Post ahead of publication Saturday, researchers at the Institute for Strategic Dialogue (ISD) found that the rapid increase in AI-generated content has created “a fundamentally polluted information ecosystem” in which voters increasingly have difficulty distinguishing what is artificial from what is real.
“Has AI changed the elections? No,” said Hany Farid, a professor at the University of California, Berkeley who studies digital propaganda and disinformation. “But as a society we now live in an alternative reality. … We don’t agree on whether two plus two equals four.”
As Trump prepares to take power, experts say AI-generated fakes will likely help influencers spread false stories on loosely regulated social media platforms and reinforce the partisan beliefs of millions of people.
“This is the script,” Farid said. “If you don’t like something, just lie and let it be reinforced.”
X did not respond to a request for comment.
Deepfakes emerged early in the election cycle, most notably when President Joe Biden’s voice was spoofed in January to discourage New Hampshire voters from turning out for the state’s primary. The Democratic operative behind the scheme, who claimed he wanted to raise awareness about the dangers of AI, was fined $6 million by the Federal Communications Commission, which cited violations of telecommunications regulations.
In July, Musk shared an AI-manipulated video of Harris on X. According to X’s public statistics, the post has been viewed more than 100 million times and remains on the platform without a label or fact-check.
Cartoon-like AI images portrayed Trump in Nazi garb and Harris in sexually suggestive and racially offensive ways. In March, the BBC unearthed dozens of AI-generated fake photos of black people supporting Trump, a voting demographic being courted by both campaigns.
While more than a dozen states have laws punishing people who use AI to create misleading videos about politicians, such content has gone largely unmoderated, exposing gaps in how those laws and social media policies are enforced. The suite of software developed to debunk AI deepfakes fell short of its promise, leaving a haphazard system of fact-checkers and mainstream media researchers to flag fake images and audio as they spread through social media.
Foreign influence actors used AI until the closing hours of the election, pushing baseless allegations of voter fraud in battleground states like Arizona and circulating false images of world leaders, such as Ukrainian President Volodymyr Zelensky urging people to vote for Harris, according to the misinformation-tracking organization NewsGuard.
However, despite the prevalence of AI, there was no evidence that malicious activity had a “material impact” on the voting process, Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, the federal government’s lead agency for election infrastructure security, said in a statement Wednesday.
Researchers have identified only a handful of cases in which AI was used to generate disinformation about the voting process, Kate Starbird, co-founder of the University of Washington’s Center for an Informed Public, said in a media briefing on Wednesday. “For the most part, the rumors we see are usually based on misinterpretations of real evidence rather than fabricated evidence,” she said. “That held up on Election Day.”
This reflects trends in elections abroad. AI had no measurable impact on elections in Britain and the European Union, according to a September research report from the Alan Turing Institute, the UK’s national center for data science and artificial intelligence. Researchers found just 16 confirmed viral cases of AI disinformation, or deepfakes, during the British general election, and only 11 in the EU and French elections combined.
“There is still no evidence that AI has influenced the outcome of elections,” said Sam Stockwell, lead author of the report and research associate at the Alan Turing Institute. “But we remain concerned about the continued erosion of trust in what is real and fake in our online spaces.”
Likewise, just because AI didn’t produce an “October surprise” that would change the election doesn’t mean it didn’t have an impact on American voters.
Far-right actors created a flood of AI-generated disinformation around a natural disaster this fall, complicating the response to it. Shortly after Trump was elected, Farid said, deepfake audio of the president-elect surfaced in which he appeared to falsely claim he would kill Canadian Prime Minister Justin Trudeau.
In the future, Farid said, the problem is only likely to grow.
“The ability to create AI bots, large language models, images, audio and video to support all this nonsense is absolutely poisoning the minds of people who get most of their information from social media,” Farid said. “It’s becoming easier to lie to 350 million people in a country where it shouldn’t be so easy.”
Trump and his allies turned to AI at various points in the cycle, sometimes leading to backlash — though it’s unclear whether the effort ultimately helped or hurt his campaign.
In August, he shared AI-generated images of Taylor Swift fans appearing to support him — a move Swift said prompted her to publicly endorse Harris. That same month, the Republican candidate falsely claimed that photos of Harris greeting a large crowd at a rally in Detroit were AI-generated. That many followers believed him, experts say, is an example of a phenomenon known as the “liar’s dividend,” in which public awareness of the possibility of AI fakes allows dishonest actors to cast doubt on truthful claims or real images.
In an analysis of 582 political deepfakes that circulated during the presidential election, researchers at Purdue University found that 33 percent depicted Trump, while roughly 16 percent focused on Harris and 16 percent on Biden. The content portrayed the candidates in both positive and negative lights.
About 40 percent of these AI fakes were shared for satirical reasons, according to data from Purdue University researcher Christina Walker. About 35 percent were shared by “random users” with minimal social media followings, while roughly 8 percent were shared by figures with more than 100,000 followers on the platform where they posted.
These AI-generated memes let people join in on popular current events and fads, as when Trump supporters saw a prime example of government overreach in the case of “Peanut the Squirrel,” an allegedly illegal pet that was seized and euthanized just before Election Day. The memes fostered a sense of community and shared identity, says Kaylyn Jackson Schiff, an assistant professor of political science at Purdue University.
AI fakes help voters “develop a positive attitude or understanding of the current events surrounding the deepfakes they share, even if they don’t think the image itself is real,” she said.
But one of AI’s most lasting harms is in muddying the waters of reality, experts say, causing people to question more broadly what is true.
ISD researchers collected more than a million social media posts about AI and the election. They found that platforms regularly fail to label or remove AI-generated content, even when it has been debunked. They also found that users who claimed a particular piece of content was or was not AI-generated were wrong 52 percent of the time.
Interestingly, the researchers found that users were far more likely to view authentic content as AI-generated than the other way around.
While platforms have tried to strengthen their processes for detecting false content, says ISD director of technology and society Isabelle Frances-Wright, the deeper problem now is a crisis of knowing when content is true.
Many of the misjudgments were based on outdated assumptions about how to recognize AI-generated content, with users often overestimating their ability to do so, ISD found. AI detection tools didn’t seem to help, with some widely viewed posts using or misrepresenting these tools to draw false conclusions.
Furthermore, many of the posts about the use of AI in the election (44.8 percent) suggested that the other side commonly used AI and that therefore none of its claims could be trusted. Some users worried that AI was making them distrust just about everything.
Sowing that distrust, which people apply in accordance with their own beliefs, is an important impact of AI, says Frances-Wright. “It just gives people one more mechanism they can use to further entrench their own confirmation biases.”