Executive Order: "Ensuring a National Policy Framework for Artificial Intelligence"
On December 11, 2025, President Donald J. Trump signed the Executive Order (the “EO”) entitled “Ensuring a National Policy Framework for Artificial Intelligence.”
Aimed at establishing a unified national policy framework for AI, the EO seeks to significantly restrict states from independently regulating AI in ways that are "heavy and excessive" or that conflict with federal priorities, including U.S. AI innovation, leadership, and global dominance. The stated goal of the EO is to reduce "burdensome" government regulation that "hinders innovation."
Preemption of state laws
The EO's main goal is to prevent a patchwork of state laws that could impose overlapping, incongruous, or burdensome compliance requirements, slowing AI innovation, hindering U.S. competitiveness, and generating costs that would fall especially hard on start-up companies. According to the president, these state-level requirements place American companies at a disadvantage relative to their international competitors.
Through the EO, the government also plans to challenge the legality of state laws on several bases:
- The EO targets state laws that arguably "require entities to embed ideological biases into [AI] models." As an example, the EO specifically references the Colorado AI Act, which prohibits "algorithmic discrimination," or any circumstance in which the use of an AI system results in unlawful differential treatment or impact based on an individual's protected status.
- The EO appears poised to target state-level AI regulation that arguably extends beyond state borders in ways that potentially infringe on interstate commerce.
Artificial Intelligence Litigation Task Force
The EO directs the U.S. Attorney General to form an “AI Litigation Task Force” to challenge state AI laws that are inconsistent with the goal of “United States global AI dominance.” Further, the EO calls on the Secretary of Commerce to identify, by March 11, 2026, potentially unconstitutional AI laws and other state regulations to be considered for challenge by the AI Litigation Task Force. States identified as having burdensome laws may also be ineligible for federal funding for broadband access and deployment.
Exceptions to preemption
In particular, the EO states that the resulting framework must protect children, prevent censorship, respect copyrights, and protect communities. Specifically, the EO provides that legislative recommendations may not include proposals to pre-empt state AI laws relating to child protection, AI computing and data center infrastructure, state government AI procurement, and "other topics as may be determined." This carve-out language suggests the potential for further negotiation over which areas the federal AI framework should govern versus where state AI regulation will continue to apply.
Considerations for healthcare and life sciences companies
While this EO aims to shift policymaking from states to the federal government, healthcare and life sciences companies developing or implementing AI should continue to develop AI governance, risk management, and contractual approaches to ensure proper compliance with existing federal and state laws.
Next steps for employers
In light of this EO, employers must continue to:
1. Develop comprehensive AI governance programs that comply with existing state AI laws, anti-discrimination statutes, and industry-specific regulations
Employers who have invested in robust governance frameworks – including algorithmic impact and risk assessments, transparency protocols, and bias testing – will be better positioned to defend against potential lawsuits. States, advocacy groups, and trade associations are likely to challenge the EO, and individual plaintiffs will continue to pursue claims under applicable anti-discrimination statutes.
Good governance measures – including those that follow NIST, CISA and similar federal guidelines – not only increase compliance with current state and local AI requirements that remain in full force and effect (many of which expressly reference federal compliance standards), but also demonstrate good faith efforts to prevent discriminatory outcomes, which remains a legal obligation under long-standing federal and state labor laws, as well as civil rights laws.
2. Conduct internal workplace AI audits and assessments
Employers should conduct regular audits of all AI used in the workplace to ensure the tools function as intended and do not produce disparities across protected categories. Effective AI audits and assessments can serve as useful defensive evidence in potential discrimination lawsuits.
3. Ensure compliance with federal, state and local laws, regulations and guidelines
Employers should regularly check for updates on the ever-changing AI legal landscape and seek guidance on best practices to stay compliant.
EBG lawyers continue to actively monitor legal developments in the field of AI and have significant experience in guiding companies on AI compliance, as well as litigating on AI-related policy and impact issues. For more information, please contact your EBG lawyer.
