Washington state is moving to set its own regulatory framework for artificial intelligence in the absence of federal legislation, laying out recommendations for how lawmakers should regulate AI in healthcare, education, policing, workplaces and more.
A new interim report from the Washington state AI Task Force notes that the federal government’s “hands-off approach” to AI has created “a crucial regulatory gap that leaves Washingtonians vulnerable.”
The report lands as the Trump administration pushes a deregulatory national AI policy and briefly considered an executive order to preempt state AI laws before putting the idea on hold after bipartisan pushback, according to Reuters.
The new report published this week notes that AI has “grown more powerful and prevalent than ever before” over the past year, driven by technical advances, the rise of AI agents, and open AI platforms transforming work and daily life.
The report lays out eight recommendations to the Washington state Legislature, including a requirement to improve transparency in AI development — mandating that AI developers publicly disclose the “provenance, quality, quantity and diversity of datasets” used to train models, and explain how training data is processed to mitigate errors and bias. The recommendation includes carve-outs protecting trade secrets.
State lawmakers introduced proposals earlier this year on AI development transparency and disclosure, but their bills stalled.
The task force also recommends the creation of a grant program, leveraging public and private money, to support small businesses and startups building AI that serves the public interest — particularly for founders outside the Seattle area and those facing inequitable access to capital.
The report notes that the program would help Washington retain talent and “maintain its relevance as a tech hub.” An earlier bill to create such a program, HB 1833, stalled in the 2025 session.
Other recommendations include:
- Promote responsible AI governance for high-risk systems — defined as those with “potential to significantly impact people’s lives, health, safety, or fundamental rights.”
- Invest in K-12 STEM, higher education AI programs, professional development for teachers, and expanded broadband in rural communities.
- Improve transparency in healthcare prior authorization — requiring that any decision to deny, delay, or modify health services based on medical necessity is made only by qualified clinicians, even when AI tools are used.
- Develop guidelines for AI in the workplace, including a call for employers to disclose when AI is used for employee monitoring, discipline, termination, and promotion.
- Require law enforcement to publicly disclose AI tools they use, including generative AI for report writing, predictive policing systems, license plate readers, and facial recognition.
- Adopt the NIST Ethical AI Principles as a guiding framework, building on existing state guidance that already relies on the NIST AI Risk Management Framework.
Most recommendations passed by wide margins, though the law-enforcement transparency proposal drew some dissenting votes from task force members, including a representative from the ACLU.
The interim report does not yet include specific Washington-focused recommendations on generative AI in elections and political ads, AI and intellectual property, or companion chatbots, even as it highlights those issues as areas of growing state activity elsewhere.
Washington is entering the AI policy arena behind some peers that have already put broad frameworks into place, including California and Colorado. Others have targeted specific use cases.
Washington lawmakers introduced multiple AI bills in 2025, but only one passed: HB 1205, which makes it a crime to knowingly distribute a forged digital likeness (deepfake) to defraud, harass, threaten, or intimidate another, or for an unlawful purpose.
The task force report notes that 73 new AI-related laws were enacted in 27 states in 2025 across areas such as child safety, transparency, algorithmic accountability, education, labor, healthcare, public safety, deepfakes, and energy.
Washington’s task force has 19 members spanning tech companies (including Microsoft and Salesforce), labor, civil liberties groups, academia, and state agencies.
The task force, created in 2024, must deliver three reports: a preliminary report released last year, this interim report, and a final report by July 1, 2026.
