The New York City mayoral election may be remembered for the remarkable win of a young democratic socialist, but it was also marked by something that is likely to permeate future elections: the use of AI-generated campaign videos.
Andrew Cuomo, who lost to Zohran Mamdani in last week’s election, took particular interest in sharing deepfake videos of his opponent, including one that led to the former governor being accused of racism, in what is a developing area of electioneering.
Campaigns have used AI before, particularly algorithms to target certain voters and even, in some cases, to write policy proposals. But as AI software develops, it is increasingly being used to produce sometimes misleading photos and videos.
“I think what’s really broken through in this election cycle has been the use of generative AI to produce content that goes directly to voters,” said Alex Bores, a member of the New York state assembly who has been at the forefront of introducing laws to regulate the use of AI.
“So whether that was the Cuomo campaign that used ChatGPT to generate its housing plan, or Cuomo and many others making AI-generated video ads for voters, that is, I think, felt very new in the 2025 cycle, or certainly, just much further than we’ve ever seen before.”
Eric Adams, the incumbent mayor who dropped out of the race in September, used AI to create robocalls to New Yorkers featuring him speaking in Mandarin, Urdu and Yiddish, and also produced an AI video showing New York as an apparently war-torn dystopia to attack Mamdani.
Cuomo, meanwhile, was accused of racism and Islamophobia after his campaign tweeted a video that showed a fictionalized version of Mamdani eating rice with his fingers and a Black man shoplifting. The advert also featured a Black man, wearing a purple shirt and tie and a fur coat and carrying a silver cane, appearing to endorse sex trafficking. The Cuomo campaign later deleted it and said it had been sent out by accident.
Bores, who is running to represent New York in the House of Representatives, said many of the AI-generated ads in the last election cycle were “more likely” to “veer into what might be perceived as bigoted territory”.
“I think that’s another thing that we need to track: is this either because the algorithms are playing up stereotypes that are in their training data, or (is it) because it’s so easy to manipulate. You don’t have to tell an actor of a certain race to do a certain thing, you just change it in the computer,” Bores said.
“You don’t have to say to someone’s face to portray themselves in a certain way. Does that make it easier for people to put out content that, you know, really, I think polite society should be frowning upon?”
In New York state, campaigns are supposed to label AI ads as such, but some – including the ad Cuomo posted and deleted – did not. The New York board of elections is responsible for pursuing enforcement against campaigns, but Bores noted that campaigns might be willing to accept any punishment, particularly if it comes after a campaign has finished.
“I think you’re always going to find campaigns that are willing to take that trade-off. If they win and then they pay a fine afterward, they’re not going to care, and if they lose, it doesn’t matter,” Bores said. “So you want to try to find an enforcement regime that can take things down quickly before an election, as opposed to just punish afterwards.”
Robert Weissman, co-president of the non-profit advocacy group Public Citizen, which has been involved in passing many AI laws around the US, said that using deepfakes to deceive voters is now illegal in more than half of US states, with campaigns required to post disclaimers on generative AI ads saying they are not real. Still, he said, regulating AI use in campaigns is a pressing issue.
“Lies have been part of politics since time immemorial. This is different than lies, and it’s different than saying your opponent said something that they didn’t say,” Weissman said.
“When someone is shown an apparently authentic version of a person saying something, it is very hard for that person to then contradict it and say ‘I never said that’ because you’re asking people to disbelieve what they saw with their own eyes.”
While AI is now capable of generating believable videos, some campaigns haven’t quite nailed it. A “Zohran Halloween special” video posted by Cuomo – this ad did state it was AI-generated – showed an extremely sloppy rendition of Mamdani, complete with out-of-sync audio and an incomprehensible script.
With the midterm elections approaching and the 2028 presidential election looming, AI-generated political videos are likely to stick around.
They’ve already been used at the national level. Elon Musk shared an AI-generated video of Kamala Harris in July 2024, after she became the presumptive Democratic nominee for president. That video depicted Harris claiming she was the “ultimate diversity hire” and saying she doesn’t “know the first thing about running the country”.
While states may be making progress on regulating the use of AI in elections, there seems to be little appetite to do so at the federal level.
During the No Kings protests in October, Donald Trump shared an AI video that showed him flying a fighter jet and dropping brown fluid on Americans, just the latest of his AI video posts.
With Trump apparently approving of the medium, it seems unlikely that Republicans will attempt to rein in AI anytime soon.
