Dartmouth’s AI Therapist has completed a Randomized Controlled Trial (RCT), but what does this mean for the future?
Background
Since the days of ELIZA, mankind has endeavored to understand itself through the soulless silicon eyes of a machine. It seems that now, in 2025, that reality is finally here. Two publications from the last month have challenged how I see computer-assisted therapy; with my background in data and technology, they have also fueled my fears. I channeled those fretful thoughts into my science fiction writing: my cyberpunk novel, Above Dark Waters, is about someone who uses their AI therapist to improve their brain, sending humanity snowballing towards the singularity.
The first piece is a study from Dartmouth College, Randomized Trial of a Generative AI Chatbot for Mental Health Treatment, published last month. The researchers recruited roughly 200 participants and split them into control and treatment groups of ~100 each. The results were very promising: the treatment group showed greater symptom decreases than the control group across all three types of disorders studied.
Overall, results from the Therabot RCT are highly promising. We found high engagement and acceptability of the intervention, as well as symptom decreases while maintaining a therapeutic alliance comparable to that of human therapists and their patients.
The second is an article from the Harvard Business Review, “How People Are Really Using Gen AI,” itself a follow-up to a 2024 piece. I have trimmed down their infographic below, but as you can see, the top three use cases in 2025 are self-help related, with Therapy/Companionship taking the top spot!
Whether you think this is a fad, or it does not interest you in the slightest, the AI therapists are coming! I think there are many positives to this technology, but in this article I want to cover only the pitfalls.
5 Potential Problems with AI Therapy
This is only my initial list. I’m sure there are many more, but I will briefly cover each of the following areas of concern: addiction risks, data privacy risks, corporate or government control, dehumanization, and algorithmic bias.
No. 5: Addiction Risks
This one is the most obvious and (thankfully) the easiest to control for. Many of these applications tout 24/7 access as their main selling point, but I think this is actually a negative. Why wait two weeks or a month for therapy when you can have it in your hands right now? In fact, the Dartmouth study includes a nice heatmap of usage. As you can see, there are several user-days with over 100 messages, and some users used the chatbot nearly every day of the study.
This is a success as far as getting people to use the product. But is it a success in the sense that the participant no longer needs therapy? I would also love to see this data broken down by time of day. Are people waking up in the middle of the night, talking to the therapist, and not sleeping well? We all know that one way to help with anxiety is to get sufficient rest. Fortunately, these kinds of controls are easy to implement technically; the harder problem is that a private company has little reason to disincentivize usage. The goal of therapy is to one day stop needing therapy!
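To make that concrete, here is a minimal sketch, entirely my own hypothetical example rather than anything from Therabot or any real product, of how a daily message cap and overnight quiet hours could be enforced in a few lines of Python:

```python
from datetime import datetime, time

# Hypothetical guardrail values; a real product would need to tune these clinically.
DAILY_MESSAGE_CAP = 50
QUIET_START = time(23, 0)   # 11 pm
QUIET_END = time(6, 0)      # 6 am

def allow_message(now: datetime, messages_today: int) -> tuple[bool, str]:
    """Decide whether to accept another message, and explain a refusal."""
    t = now.time()
    in_quiet_hours = t >= QUIET_START or t < QUIET_END
    if in_quiet_hours:
        return False, "It's late. Rest matters too; let's pick this up in the morning."
    if messages_today >= DAILY_MESSAGE_CAP:
        return False, "We've talked a lot today. Try sitting with this until tomorrow."
    return True, ""

# Example: a 2:30 am message gets gently deferred instead of answered.
print(allow_message(datetime(2025, 4, 10, 2, 30), messages_today=12))
```

Whether anyone actually ships something like this is a business decision, not a technical one.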
No. 4: Data Privacy Risks
This one is also obvious. If you can’t trust who you’re talking to, you’re never going to say anything of value. And if you’re going to self-censor, you may never go down a fruitful line of questioning and land on a nourishing answer. The mere hint of a data breach might be enough to destroy any faith you have in the provider, and with it the history you’ve built with the artificial therapist. And this isn’t a hypothetical scenario; this is the reality. Data from BetterHelp ended up in the hands of Meta, according to this FTC complaint.
To capitalize on these consumers’ health information, Respondent handed it over to numerous third-party advertising platforms, including Facebook, Pinterest, Snapchat, and Criteo, often permitting these companies to use the information for their own research and product development as well.
Now, this isn’t exclusive to digital therapists; it can happen with human therapists, too. A therapist breaching trust in a small town would be devastating to the patient, but the scope of the breach would be small, and the patient could simply never go back to that therapist. A major data breach, by contrast, might have a chilling effect on the entire AI therapy ecosystem.
No. 3: Corporate or Government Control
Perhaps these AI therapists will one day be open source or run by non-profits; regardless of any future hopes, most today seem to be run by tech startups. And if there’s one thing a startup desperately needs, it’s cash! They have to make it to the next round of funding. What happens if the corporation sells even the metadata to other companies for ads? Is the mere fact that you have an account with such a company enough to start pushing pills, teas, creams, retreats, or books at you?
This brings me to a topic that also overlaps with No. 4 (data privacy): mental health data weaponization. There are cases of people being locked out of their money by banks for dubious reasons. Imagine if something you said to your therapist flagged you for the no-fly list.
Governments can and do subpoena companies for data. The US has recently subpoenaed data from companies in other countries, as in this article: US Lawmakers Subpoena China Telecom Giants. The UK has put people in jail for their social media posts.
There are also attempts to exert more subtle control over what the algorithms nudge people towards. One version of this is data poisoning, which can occur without the host company even realizing it. Imagine that someone creates hundreds of accounts and constantly pushes the model away from the ‘right’ answer by repeatedly responding with something like ‘well then I’m going to self-terminate.’ Will the AI therapist start steering people towards some ‘new thing’ simply because it seems to lead away from the ‘bad thing’? Will it suggest something as final as MAID (medical assistance in dying) in Canada to people who are struggling?
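As a toy illustration of the mechanics (all numbers and names here are made up), consider a naive pipeline that turns logged user reactions into a fine-tuning preference label. On a niche topic with relatively few genuine conversations, a few hundred coordinated accounts are enough to flip the label:

```python
import random

random.seed(42)

# Hypothetical scenario: the system logs whether users responded better to
# candidate reply A (the clinically sound one) or reply B, and the majority
# preference later becomes a fine-tuning label for that topic.
GENUINE_CONVERSATIONS = 300   # a niche topic with sparse real data
SYBIL_ACCOUNTS = 500          # coordinated accounts that always punish reply A

# Roughly 80% of genuine users respond better to reply A.
votes_a = sum(1 for _ in range(GENUINE_CONVERSATIONS) if random.random() < 0.80)
votes_b = (GENUINE_CONVERSATIONS - votes_a) + SYBIL_ACCOUNTS

label = "A" if votes_a > votes_b else "B"
print(f"votes for A: {votes_a}, votes for B: {votes_b} -> training label: {label}")
# Without the sybil accounts, A wins comfortably; with them, B wins,
# and the poisoned preference quietly enters the next training run.
```

Real pipelines have more safeguards than this, but the underlying failure mode, unvetted feedback flowing into training data, is exactly what data poisoning exploits.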
No. 2: Dehumanization
“I’m so terrible, even a human therapist won’t take me.” Does the very act of being pushed to a technical solution hurt some people?
Perhaps we are simply looking for someone, anyone, to listen to us in a non-judgmental way. Perhaps we’re in this hole precisely because of technology. Are the AI avatars of Replika really going to help humanity?
Any developers out there? What happens when you layer another technological solution over an existing technological problem? It often becomes worse. Chaotic. Clunky. I have two other problems with an AI therapist: it’s verbose, and it’s always trying to solve a problem. Sometimes the best thing for a therapist to do is wait a few long moments and let the person process and cry it out… Silence, and feeling heard… And giving space before the next prompt… Those would be essential to making a true connection. However, LLMs are ravenous for your next prompt.
Wait, wait, you might say, we could program those pauses in. You could, for sure, and I believe developers will succeed. And then it would be even more superhumanly empathic than it already is. And then the human user will grow tired of other humans, because no one is as good at talking and being emotive and empathic as their own supertherapist. Imagine for a moment we replace ‘therapy’ with ‘sexbot,’ and the algorithms become so good that no human can please you like it does. Would that be a good thing for society? No.
The deeper issue may be ‘denaturalization,’ rather than depersonalization. We just don’t fit into the 24/7 content-churning infinite scroll world we’ve built. It’s alien. Going outside and feeling the grass is common advice today. But it is rooted in some truth, as even nature can be therapy; shinrin-yoku, or “forest bathing,” might yield better results for free. Even watching the birds in trees or bees in flowers can help with anxiety. Could the silent swish of snow over a lawn be enough to heal one’s mind better than a human confidant?
No. 1: Algorithmic Bias 🎰🧑‍⚖️⚖️🎰
Algorithmic bias is defined as the systematic and repeatable harmful tendency of a computerized sociotechnical system to create “unfair” outcomes, such as “privileging” one category over another in ways that diverge from the intended function of the algorithm. Ultimately, this technology shares all the same problems that algorithmic policing does, and much has been made of those in the past five years.
One remedy is transparency, but it is not clear that the LLMs themselves know how they arrive at their answers.
If you’re rural or poor, you are more likely to use this technology, as access to a traditional therapist is probably limited. If you’re young and don’t have, or don’t want, parental help, you will likely end up with online solutions. Sometimes it’s good to have something trained on all kinds of people, but the bulk of the training data comes from English-speaking, Western, technological society.
The problem is that many of these models are trained on the aggregated output of millions of other people. As such, they may push everyone towards the most general solution, which works most of the time but ends up fitting no one particularly well. The opposite problem comes when the model is trained entirely on PLY (People Like You): then you get an algorithmic black hole and an echo chamber.
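A toy numeric example (entirely made-up numbers) shows both failure modes at once: a model fit to everyone lands in the unhelpful middle, while a model fit only to People Like You fits you snugly and fails everyone else:

```python
import statistics

# Made-up "ideal pacing" scores for two different kinds of users.
# Group X does best with gentle pacing (~2), group Y with direct pacing (~8).
group_x = [1.8, 2.1, 2.0, 2.3, 1.9]
group_y = [7.9, 8.2, 8.0, 7.8, 8.1]

everyone = group_x + group_y
general_model = statistics.mean(everyone)   # the "most general solution"
ply_model_x = statistics.mean(group_x)      # trained only on People Like You

def error(model, group):
    """Average distance between the model's one-size answer and each user's ideal."""
    return statistics.mean(abs(model - v) for v in group)

print(f"general model = {general_model:.2f}: "
      f"error for X = {error(general_model, group_x):.2f}, "
      f"error for Y = {error(general_model, group_y):.2f}")
print(f"PLY model for X = {ply_model_x:.2f}: "
      f"error for X = {error(ply_model_x, group_x):.2f}, "
      f"error for Y = {error(ply_model_x, group_y):.2f}")
```

The general model splits the difference and serves neither group well; the PLY model is the echo chamber.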
Conclusion
I think the promise of AI therapy is real, but its cyberpunk risks (addiction, surveillance, bias) loom large. I am interested in watching how it unfolds, but it helps to keep that dystopian, critical eye razor sharp, because reality will always be stranger than fiction.
To be clear, I’m sure the people making AI therapists are good people trying to do ‘the right thing,’ and I hope no one in this field views this article as an attack. I simply want us all to think about the second- and third-order effects of the things we’re building. Will this be a tool for healing, or will it just be one more techie solution to a techie problem, standing upon the already shaky foundation of our technological society?
Am I a huge wet blanket? Let me know, and if you’re looking for a sci-fi novel exploring the ethical considerations of AI Therapy and dystopian cyberpunk possibilities, then please check out my book, Above Dark Waters.