At a Congressional Internet Caucus Academy briefing this week, experts argued that artificial intelligence’s impact on the 2024 election was less extreme than predicted — but deepfakes and misinformation still played a role.
In the run-up to the 2024 elections, there were major concerns that AI would disrupt voting through false information. Overall, the impact was less extreme than experts had warned, but AI still had an effect, as evidenced by deepfakes such as the Biden robocall and disinformation spread by AI-powered chatbots.
“We have not seen widespread use of AI tools to create deepfakes that would somehow influence the election,” said Jennifer Huddleston, senior fellow in technology policy at the Cato Institute.
And while the widespread AI-driven “apocalypse” predicted by some experts did not materialize, there was still a significant amount of misinformation. The Biden robocall was the most notable deepfake example in this election cycle. But as Tim Harper, senior policy analyst and project leader at the Center for Democracy and Technology, explained, there were several cases of AI misuse. These included fake websites generated by foreign governments and deepfakes that spread misinformation about candidates.
In addition to this type of disinformation, Harper highlighted a major concern: AI tools being used to target people at a more granular level than previously possible, which he said did happen during this election cycle. Examples include AI-generated text messages to Wisconsin students that were deemed intimidating, and Spanish-language disinformation campaigns intended to sow confusion among Spanish-speaking voters. AI's role in this election, Harper said, has affected public trust and perceptions of truth.
According to Huddleston, a positive trend this year was that the existing information ecosystem helped combat AI-powered disinformation. In the case of the Biden robocall, for example, the quick response allowed voters to be better informed and more critical about what to believe.
Huddleston said she believes it is too early to predict exactly how this technology will evolve and what public perception and acceptance of AI will look like. But she said using education as a policy tool can help improve understanding of AI risks and reduce misinformation.
Internet literacy is still developing, Harper said, and he expects AI knowledge and adoption to grow just as gradually. "I think public education about these types of threats is very important."
AI REGULATIONS AND ELECTIONS
Although bipartisan legislation was introduced to combat AI-generated deepfakes, it was not passed before the election. However, other legal protections also exist.
Harper pointed to the Federal Communications Commission (FCC) ruling that the Telephone Consumer Protection Act (TCPA) regulates robocalls that use artificially generated speech. That ruling applied to the Biden robocall, whose perpetrators were held accountable.
Even in this case, however, regulatory gaps remain: the TCPA does not apply to nonprofit organizations, religious institutions, or calls to landlines. Harper said the FCC is transparent about such "loopholes" and is working to close them.
On legislation to combat AI risks, Huddleston said that some protection already exists in many cases, arguing that the problem is often not the AI technology itself but its misuse. Regulators, she said, should be careful not to unfairly condemn technology that could be useful, and should consider whether the problems are genuinely new or existing problems to which AI adds another layer.
Many states have implemented their own AI legislation, and Huddleston warned that this "patchwork" of laws could create barriers to the development and deployment of AI technologies.
Harper noted that there are legitimate First Amendment concerns about the overregulation of AI. He argued that more regulation is needed, but whether that will happen through agency-level regulation or new legislation remains to be seen.
In the absence of comprehensive federal legislation addressing the use of AI in elections, many private-sector technology companies have sought to self-regulate. According to Huddleston, this is driven not only by government pressure but also by consumer demand.
Huddleston noted that broad definitions of AI in the regulatory world could also inadvertently limit AI’s useful applications.
She explained that many campaign uses of AI are harmless, such as speech-to-text software and navigation platforms that find the best route between campaign events. Using AI for tasks like closed captioning can also expand the capacity of resource-constrained campaigns.
AI can help identify potential cases where a campaign is being hacked, Huddleston said, allowing campaigns to be more proactive in the event of a security threat.
“It’s not just the campaigns that can benefit from certain applications of this technology,” Harper said, underscoring that election officials can use it to educate voters, inform planning, conduct post-election analysis and improve efficiency.
While this briefing addressed the impact of AI on elections, questions remain about the impact of the election on AI. The incoming administration's platform included rescinding the Biden administration's executive order on AI, Huddleston said, adding that it remains to be seen whether the order will be rescinded and replaced or withdrawn without a replacement.