Rep. Kim Banta, R-Fort Mitchell, discusses her bill to limit the use of AI in mental health care on Feb. 18, 2026. (Kentucky Lantern photo by Sarah Ladd)
This story discusses suicide. If you or someone you know is considering suicide, call or text the Suicide and Crisis Lifeline at 988.
FRANKFORT — Proposed legislation that cleared a legislative committee Wednesday would limit how mental health practitioners in Kentucky use artificial intelligence.
HB 455 moving forward
Rep. Kim Banta, R-Fort Mitchell, said she wants Kentucky to have a law to prevent chatbots from encouraging people to end their lives, something that has already happened in other states.
Stateline reported in January that several states are moving to regulate AI, including California and Pennsylvania. Illinois and Nevada have already banned the use of AI in behavioral health, and New York and Utah require chatbots to tell users they are not human, according to Stateline.
Under House Bill 455, licensed mental health practitioners in Kentucky would not be allowed to use artificial intelligence “to assist in providing additional support for therapy or psychotherapy that involves recording or transcribing the client’s therapeutic session” unless the client gives explicit, informed consent.
“I just want a human being to interact with other human beings when we deal with mental illness,” Banta told the House Licensing, Occupations, & Administrative Regulations Committee.
Under her bill, which passed the committee unanimously, mental health providers would also not be allowed to use AI to:
- Make independent therapeutic decisions
- Communicate directly with clients in any form of therapeutic communication
- Generate therapeutic recommendations or treatment plans without review and approval by the licensed professional
- Detect emotions or mental state
Eric Russ, the executive director of the Kentucky Psychological Association, said the organization generally supports the bill but wants to see some changes. As written, he said, it would ban several tools that are useful in mental health treatment, including between-session homework aids and programs that flag certain risk factors in a client’s language. He would like those restrictions removed.
“One of the useful cases we’ve seen is some tools that generate AI transcripts of therapy sessions and then flag potential risk issues,” Russ told the Lantern. “This is especially useful in training environments where student clinicians work with patients. A supervisor can then get a flag saying, ‘Hey, there was some discussion about violence or suicidality. You should look into this.’ It improves communication between students and supervisors, and the language in the bill would not allow the use of those types of tools.”
Eric Russ, executive director of the Kentucky Psychological Association, speaks about Senate Bill 2. March 12, 2025. (Kentucky Lantern photo by Sarah Ladd)
None of these tools are used independently of the clinician’s assessment, he said. Russ said he isn’t aware of any suicides in Kentucky linked to chatbots, but “I do think we need a bill” that limits potential problems while allowing for innovative tools.
“I would hope that this, with possibly some amendments, would be something that we could support. There are some really good pieces in here,” Russ said. “We really need to put some guardrails around the way people use these tools.”
HB 455 now goes to the full House of Representatives for a vote.
