Computing

5 new proposals to regulate AI in Washington state, from classrooms to digital companions

News Room · Published 12 January 2026 · Last updated 11:41 AM
The Legislative Building in Olympia, Wash., is home to the state’s Legislature. (GeekWire Photo / Lisa Stiffler)

Washington state lawmakers are taking another run at regulating artificial intelligence, rolling out a slate of bills this session aimed at curbing discrimination, limiting AI use in schools, and imposing new obligations on companies building emotionally responsive AI products.

The state has passed narrow AI-related laws in the past — including limits on facial recognition and on distributing deepfakes — but broader efforts have often stalled, including proposals last year focused on AI development transparency and disclosure.

This year’s bills focus on children, mental health, and high-stakes decisions like hiring, housing, and lending. They could affect HR software vendors, ed-tech companies, mental health startups, and generative AI platforms operating in Washington.

The proposals come as Congress continues to debate AI oversight with little concrete action, leaving states to experiment with their own guardrails. An interim report issued recently by the Washington state AI Task Force notes that the federal government’s “hands-off approach” to AI has created “a crucial regulatory gap that leaves Washingtonians vulnerable.”

Here’s a look at five AI-related bills that were pre-filed before the official start of the legislative session, which kicks off Monday.

HB 2157

This sweeping bill would regulate so-called high-risk AI systems used to make or significantly influence decisions about employment, housing, credit, health care, education, insurance, and parole.

Companies that develop or deploy these systems in Washington would be required to assess and mitigate discrimination risks, disclose when people are interacting with AI, and explain how AI contributed to adverse decisions. Consumers could also receive explanations for decisions influenced by AI.

The proposal would not apply to low-risk tools like spam filters or basic customer-service chatbots, nor to AI used strictly for research. Still, it could affect a wide range of tech companies, including HR software vendors, fintech firms, insurance platforms, and large employers using automated screening tools. The bill would go into effect on Jan. 1, 2027.

SB 5984

This bill, requested by Gov. Bob Ferguson, focuses on AI companion chatbots and would require repeated disclosures that an AI chatbot is not human, prohibit sexually explicit content for minors, and mandate suicide-prevention protocols. Violations would fall under Washington’s Consumer Protection Act.

The bill’s findings warn that AI companion chatbots can blur the line between human and artificial interaction and may contribute to emotional dependency or reinforce harmful ideation, including self-harm, particularly among minors.

These rules could directly impact mental health and wellness startups experimenting with AI-driven therapy or emotional support tools, such as Seattle startup NewDays.

Babak Parviz, CEO of NewDays and a former leader at Amazon, said he believes the bill has good intentions but added that it would be difficult to enforce as “building a long-term relationship is so vaguely defined here.”

Parviz said it’s important to examine systems that interact with minors to make sure they don’t cause harm. “For critical AI systems that interact with people, it’s important to have a layer of human supervision,” he said. “For example, our AI system in clinic use is under the supervision of an expert human clinician.”

SB 5870

A related bill goes even further, creating potential civil liability when an AI system is alleged to have contributed to a person’s suicide.

Under this bill, companies could face lawsuits if their AI system encouraged self-harm, provided instructions, or failed to direct users to crisis resources — and would be barred from arguing that the harm was caused solely by autonomous AI behavior.

If enacted, the measure would explicitly link AI system design and operation to wrongful-death claims. The bill comes amid growing legal scrutiny of companion-style chatbots, including lawsuits involving Character.AI and OpenAI.

SB 5956

This bill targets AI use in K–12 schools, banning predictive “risk scores” that label students as likely troublemakers and prohibiting real-time biometric surveillance such as facial recognition.

Schools would also be barred from using AI as the sole basis for suspensions, expulsions, or referrals to law enforcement, reinforcing that human judgment must remain central to discipline decisions.

Educators and civil rights advocates have raised alarms about predictive tools that can amplify disparities in discipline.

SB 5886

This proposal updates Washington’s right-of-publicity law to explicitly cover AI-generated forged digital likenesses, including convincing voice clones and synthetic images.

Using someone’s AI-generated likeness for commercial purposes without consent could expose companies to liability, reinforcing that existing identity protections apply in the AI era — and not just for celebrities and public figures.

Copyright © All Rights Reserved. World of Software.