WASHINGTON (AP) — The White House said Friday that Congress should preempt state AI laws it perceives as too burdensome, outlining a broad framework for how it wants lawmakers to address concerns about artificial intelligence without curbing growth or innovation in the sector.
The legislative blueprint outlines six guiding principles for lawmakers, focusing on protecting children, preventing electricity costs from rising, respecting intellectual property rights, preventing censorship and educating Americans about the use of the technology.
Republican leaders in the House of Representatives quickly endorsed the framework, saying they are willing to work “across the aisle” to pass legislation. But that would be a heavy lift requiring agreement with Senate Democrats, as public divisions over AI run deep.
The announcement comes as state governments have moved forward with their own regulations on AI, while civil liberties and consumer rights groups lobby for more regulation of the powerful technology. The industry and the White House have pushed back, arguing that a patchwork of regulations would hurt growth. Trump signed an executive order in December to prevent states from drawing up their own regulations.
“This was in response to a growing patchwork of 50 different state regulatory regimes that threaten to stifle innovation and jeopardize America’s lead in the AI race,” White House AI czar David Sacks said in a social media post on Friday.
Sacks said the next step is to work with Congress to translate the administration’s principles into federal law.
AI legislation could need bipartisan support to pass
While it will be difficult to pass sweeping AI legislation, especially in a midterm election year, the framework seemed designed to appeal to AI-conscious Republicans and Democrats alike with a focus on widespread and bipartisan concerns, such as the harms AI chatbot companions can inflict on children and the electricity costs of AI infrastructure.
“It basically covers all the major sticking points that I think could prevent an AI bill from getting through Congress,” said Neil Chilson, a Republican former chief technologist at the Federal Trade Commission who now leads AI policy at the Abundance Institute. “It reads to me like an attempt to build a bigger tent, even if it doesn’t offer everyone everything they want.”
But it has already been panned by some Democrats, including U.S. Rep. Josh Gottheimer of New Jersey, who said in a statement that it “fails to address important issues, including strong accountability for AI companies, under the guise of protecting children, communities and creators. Americans need protection — but this means nothing if we allow the AI industry to be the Wild West.”
Whether AI legislation can pass both chambers of Congress may also depend heavily on support from Republicans such as U.S. Senator Marsha Blackburn of Tennessee, who has introduced her own AI bill, and last year played an important role in thwarting Trump’s previous attempt to block state governments from regulating AI. Blackburn on Friday called Trump’s framework a roadmap and welcomed the administration’s “important discussion” to get a bill passed.
States that already regulate AI do not want to be undermined
Several states – including California, Colorado, Texas and Utah – have already passed laws setting rules for AI in the private sector.
With bipartisan support in the Texas Legislature, a new AI law that took effect this year in the Republican-led state requires government agencies and health care providers to disclose when they use AI to communicate with consumers or answer questions. The law also prohibits the development of AI that causes someone to commit suicide, harm themselves, harm another person, or engage in criminal activity.
A federal law that follows Trump’s framework “could disable parts of Texas’ AI code, while leaving some parts in place,” said Saurabh Vishnubhakat, a professor at Yeshiva University’s Cardozo School of Law. “I don’t think the fact that it’s a Republican governor is going to save Texas law from preemption.”
Also vulnerable is Colorado’s law, which is intended to prevent AI from discriminating against people when making consequential decisions on things like hiring and medical care. It was passed in 2024 but will not take effect until later this year. Lawmakers hope to rework the regulations before then.
Jennifer Bacon, a Colorado Democrat, said voters don’t want to stifle innovation or fall behind China, “but our voters are interested in not becoming China.”
California’s Democratic Gov. Gavin Newsom has vetoed some AI bills while signing others. His office criticized Trump’s framework on Friday.
“Once again, Donald Trump is trying to undermine California laws that protect our residents and protect consumers – a core responsibility of the state,” Newsom’s spokesperson Marissa Saldivar said in a statement.
The Trump administration says it does not think Congress should preempt all state regulatory power over AI, including the enforcement of general-purpose laws against AI developers “to protect children, prevent fraud, and protect consumers.” It also says Congress should not interfere with local authorities in deciding where to locate data centers and other AI infrastructure, or with how states procure their own AI tools for law enforcement or education.
However, it says states “may not regulate the development of AI,” may not penalize AI developers for the unlawful conduct of a third party using their product, and “may not unduly burden Americans’ use of AI for activities that would be legal if conducted without AI.”
Trump’s AI proposal appeals to concerns about data centers and copyright
As opposition to data centers has grown along with rising energy prices, the White House has previously put pressure on AI companies and the energy sector to do more to tackle the problem, including having AI companies sign voluntary pledges earlier this month to build their own power plants.
Some AI safety advocates are urging Blackburn and other influential Republicans to push for greater protections against AI’s most catastrophic risks to national security or the economy, such as out-of-control AI agents or the widespread replacement of human workers.
“We have companies that are explicitly hoping to replace human labor,” said Brendan Steinhauser, a former Republican strategist who now heads The Alliance for Secure AI and believes Trump’s framework doesn’t do enough to address the risks. “Tinkering around the edges of further education and vocational training is just not going to have an impact on that. I just don’t think we as a country are taking this seriously enough.”
The framework aims to take a more balanced approach to another controversial topic: AI and copyright.
It advises against getting involved in the legal battle between artists and creators and the tech companies that have collected vast amounts of copyrighted works to build AI systems that can generate new text, images and sound.
The Trump administration “believes that training AI models on copyrighted material does not violate copyright laws,” the document says, but acknowledges that “arguments to the contrary exist and therefore supports allowing the courts to resolve this issue.”
That language was welcomed by trade group AI Progress – a coalition that includes Amazon, Anthropic, Google, Meta, Microsoft, Midjourney and OpenAI.
Technology companies have faced dozens of copyright infringement lawsuits from writers and publishers, visual artists, music record labels and others. Judges have largely sided with AI developers in allowing the “fair use” of copyrighted works to create something new, but some have questioned how the materials were obtained. A federal judge in September approved a $1.5 billion settlement between Anthropic and authors who claimed nearly half a million books were pirated to train its chatbot.
—
O’Brien reported from Providence, Rhode Island. AP writers Colleen Slevin in Denver, Trân Nguyễn in Sacramento, California, and John Hanna in Topeka, Kansas, contributed to this report.
