Washington state lawmakers are taking another run at regulating artificial intelligence, rolling out a slate of bills this session aimed at curbing discrimination, limiting AI use in schools, and imposing new obligations on companies building emotionally responsive AI products.
The state has passed narrow AI-related laws in the past, including limits on facial recognition and on the distribution of deepfakes, but broader efforts have often stalled, including proposals last year focused on transparency and disclosure in AI development.
This year’s bills focus on children, mental health, and high-stakes decisions like hiring, housing, and lending. They could affect HR software vendors, ed-tech companies, mental health startups, and generative AI platforms operating in Washington.
The proposals come as Congress continues to debate AI oversight with little concrete action, leaving states to experiment with their own guardrails. An interim report issued recently by the Washington state AI Task Force notes that the federal government’s “hands-off approach” to AI has created “a crucial regulatory gap that leaves Washingtonians vulnerable.”
Here’s a look at five AI-related bills that were pre-filed before the official start of the legislative session, which kicks off Monday.
HB 2157
This sweeping bill would regulate so-called high-risk AI systems used to make or significantly influence decisions about employment, housing, credit, health care, education, insurance, and parole.
Companies that develop or deploy these systems in Washington would be required to assess and mitigate discrimination risks, disclose when people are interacting with AI, and explain to affected consumers how AI contributed to adverse decisions.
The proposal would not apply to low-risk tools like spam filters or basic customer-service chatbots, nor to AI used strictly for research. Still, it could affect a wide range of tech companies, including HR software vendors, fintech firms, insurance platforms, and large employers using automated screening tools. The bill would go into effect on Jan. 1, 2027.
SB 5984
This bill, requested by Gov. Bob Ferguson, focuses on AI companion chatbots and would require repeated disclosures that an AI chatbot is not human, prohibit sexually explicit content for minors, and mandate suicide-prevention protocols. Violations would fall under Washington’s Consumer Protection Act.
The bill’s findings warn that AI companion chatbots can blur the line between human and artificial interaction and may contribute to emotional dependency or reinforce harmful ideation, including self-harm, particularly among minors.
These rules could directly affect mental health and wellness startups experimenting with AI-driven therapy or emotional support tools, such as Seattle startup NewDays.
Babak Parviz, CEO of NewDays and a former leader at Amazon, said he believes the bill is well intentioned but would be difficult to enforce because “building a long-term relationship is so vaguely defined here.”
Parviz said it’s important to examine systems that interact with minors to make sure they don’t cause harm. “For critical AI systems that interact with people, it’s important to have a layer of human supervision,” he said. “For example, our AI system in clinic use is under the supervision of an expert human clinician.”
SB 5870
A related bill goes even further, creating potential civil liability when an AI system is alleged to have contributed to a person’s suicide.
Under this bill, companies could face lawsuits if their AI system encouraged self-harm, provided instructions for it, or failed to direct users to crisis resources, and they would be barred from arguing that the harm was caused solely by autonomous AI behavior.
If enacted, the measure would explicitly link AI system design and operation to wrongful-death claims. The bill comes amid growing legal scrutiny of companion-style chatbots, including lawsuits involving Character.AI and OpenAI.
SB 5956
This bill targets AI use in K–12 schools, banning predictive “risk scores” that label students as likely troublemakers and prohibiting real-time biometric surveillance such as facial recognition.
Schools would also be barred from using AI as the sole basis for suspensions, expulsions, or referrals to law enforcement, reinforcing that human judgment must remain central to discipline decisions.
Educators and civil rights advocates have raised alarms about predictive tools that can amplify disparities in discipline.
SB 5886
This proposal updates Washington’s right-of-publicity law to explicitly cover AI-generated forged digital likenesses, including convincing voice clones and synthetic images.
Using someone’s AI-generated likeness for commercial purposes without consent could expose companies to liability, reinforcing that existing identity protections apply in the AI era — and not just for celebrities and public figures.
