WASHINGTON – With artificial intelligence at a pivotal point in its development, the federal government is poised to transition from an administration that prioritized AI safeguards to one more focused on eliminating red tape.
That’s a promising prospect for some investors, but it creates uncertainty about the future of any guardrails on the technology, especially around the use of AI deepfakes in elections and political campaigns.
President-elect Donald Trump has pledged to rescind President Joe Biden’s sweeping AI executive order, which aimed to protect people’s rights and safety without stifling innovation. He didn’t specify what he would do instead, but the Republican National Committee’s platform, which he recently reshaped, says AI development should be “rooted in free speech and human flourishing.”
It is an open question whether Congress, which Republicans will soon fully control, will be interested in passing AI-related legislation. Interviews with a dozen lawmakers and industry experts show continued interest in encouraging the technology’s use in national security and cracking down on nonconsensual explicit images.
Still, the use of AI in elections and in spreading disinformation is likely to take a back seat as Republican lawmakers steer away from anything they see as potentially stifling innovation or free speech.
“AI has incredible potential to increase human productivity and positively contribute to our economy,” said Rep. Jay Obernolte, a California Republican widely seen as a leader on the evolving technology. “We need to find an appropriate balance between putting in place a framework to prevent harmful things from happening while also enabling innovation.”
The AI industry has been expecting sweeping federal legislation for years. But Congress, deadlocked on almost every issue, has failed to pass a bill on artificial intelligence, producing only a series of proposals and reports.
Some lawmakers believe there is enough mutual interest in certain AI-related issues to get a bill passed.
“I see that there are Republicans who are very interested in this issue,” Democratic Sen. Gary Peters said, citing national security as an area of possible agreement. “I am confident that I will be able to work with them as I have in the past.”
It is still unclear to what extent Republicans want the federal government to intervene in the development of AI. Few showed interest before this year’s election in regulating how the Federal Election Commission or the Federal Communications Commission handled AI-generated content, fearing it would raise First Amendment issues at the same time the Trump campaign and other Republicans were using the technology to create political memes.
The FCC was in the middle of a lengthy process of developing AI-related regulations when Trump won the election. That work has since been halted under long-standing rules covering a change in administration.
Trump has expressed both interest in and skepticism about artificial intelligence.
During a Fox Business interview earlier this year, he called the technology “very dangerous” and “so scary” because “there’s no real solution.” But his campaign and supporters also embraced AI-generated imagery more than their Democratic opponents did, often using it in social media posts that were not intended to mislead, but rather to further entrench Republican political views.
Elon Musk, Trump’s closest adviser and founder of several companies that rely on AI, has also shown a mix of concern and excitement about the technology depending on how it is applied.
Musk used X, the social media platform he owns, to promote AI-generated images and videos during the election. Employees at Americans for Responsible Innovation, a nonprofit organization focused on artificial intelligence, publicly pushed Trump to tap Musk as his top adviser on the technology.
“We think Elon has a fairly sophisticated assessment of both the opportunities and risks of advanced AI systems,” said Doug Calidas, a top executive at the group.
But Musk advising Trump on artificial intelligence worries others. Peters argued that this could undermine the president.
“It’s a concern,” the Michigan Democrat said. “If you have someone who has a strong financial interest in a particular technology, you have to take their advice and counsel with a grain of salt.”
In the run-up to the election, many AI experts raised concerns about an eleventh-hour deepfake – a lifelike AI image, video or audio clip – that would influence or confuse voters as they headed to the polls. Although these fears were never realized, AI still played a role in the election, says Vivian Schiller, executive director of Aspen Digital, part of the independent think tank Aspen Institute.
“I wouldn’t use the term I hear a lot of people use, which is the dog that didn’t bark,” she said of AI in the 2024 election. “It was there, just not in the way we expected.”
Campaigns used AI in algorithms to target messages to voters. AI-generated memes, while not lifelike enough to be mistaken for real, still felt true enough to deepen the divisions between the parties.
A political consultant imitated Joe Biden’s voice in robocalls that could have kept voters from coming to the polls during the New Hampshire primary if they hadn’t been caught quickly. And foreign actors used AI tools to create and automate fake online profiles and websites that spread disinformation to American audiences.
Even though AI ultimately did not sway the election’s outcome, the technology made political inroads and contributed to an environment in which American voters are unsure that what they see is true. That dynamic is part of the reason some in the AI industry want to see regulations that set guidelines.
“President Trump and people on his team have said they don’t want to stifle the technology and they want to support its development, so that’s welcome news,” said Craig Albright, top lobbyist and senior vice president at The Software Alliance, a trade group whose members include OpenAI, Oracle and IBM. “We believe that passing national laws to set the rules of the road will be good for developing markets for the technology.”
AI safety advocates made similar arguments at a recent meeting in San Francisco, according to Suresh Venkatasubramanian, director of the Center for Tech Responsibility at Brown University.
“By putting in literal guardrails, lanes and traffic rules, we were able to get cars that could roll much faster,” said Venkatasubramanian, a former Biden administration official who helped craft the White House principles for tackling AI.
Rob Weissman, co-president of the advocacy group Public Citizen, said he is not hopeful about the prospects for federal legislation and is concerned about Trump’s promise to revoke Biden’s executive order, which created an initial set of national standards for the industry. His group has advocated for federal regulation of generative AI in elections.
“The safeguards are themselves ways to promote innovation so that we have AI that is useful, safe, does not exclude humans and promotes the technology in ways that serve the public interest,” he said.