OpenAI claims ChatGPT rejected over 250,000 requests to generate DALL-E images of candidates in the month before the US presidential election, as part of wider efforts to minimize election interference.
This figure includes images of President-elect Trump, Vice President Harris, Vice President-elect Vance, President Biden, and Governor Walz.
OpenAI also claims ChatGPT directed 2 million people looking for answers toward traditional news sources, including the Associated Press and Reuters, on the day of the election itself. In addition, the firm said it sent over one million people to CanIVote.org, a website that provides nonpartisan information on how to vote and related administrative issues, in the month leading up to the election.
“These guardrails are especially important in the context of an election and are a key part of our broader efforts to prevent our tools from being used for deceptive or harmful purposes,” OpenAI said in a blog post.
However, such initiatives haven't stemmed the recent tide of senior AI safety executives leaving the firm. Lilian Weng, a VP of research at OpenAI, announced her departure in a post on X this week after seven years with the company.
Weng joins co-founder and former chief scientist Ilya Sutskever and former head of AI safety Jan Leike, who both parted ways with the company in 2024. CTO Mira Murati left in September alongside Chief Research Scientist Bob McGrew and VP of Research Barret Zoph. The execs have jumped ship to competitors like Anthropic or announced plans for their own AI projects.
Deepfake crackdowns, meanwhile, attracted serious attention from all corners, including Big Tech and state legislators, in the latter half of 2024. In September, YouTube confirmed it was working on at least two deepfake-detection tools to help creators find videos where AI-generated copies of their voices or faces are being used without proper consent.
In the same month, California Governor Gavin Newsom signed three bills aimed at limiting the spread of deepfakes on social media ahead of the election, including criminalizing the intentional spreading of AI-based content meant to influence elections. Newsom said in the announcement that “it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation – especially in today’s fraught political climate.”