/  Technology   /  Artificial Intelligence   /  Why AI Chatbots Can’t Replace Human Therapists Anytime Soon: New Study Finds Serious Risks

Why AI Chatbots Can’t Replace Human Therapists Anytime Soon: New Study Finds Serious Risks

In today’s digital age, AI chatbots like ChatGPT  are becoming informal counselors for many users. With 24/7 availability and zero judgment, these AI tools are being used as a form of “on-demand therapy.” However, a new study from Stanford University warns that despite growing reliance on these bots, they are not equipped to replace professional therapists—and might even pose certain dangers if misused.

AI as a Listening Ear: A Growing Trend

Millions of users are now turning to AI chatbots for emotional support. Whether it’s venting after a rough day or seeking advice for personal problems, many view these tools as a quick and private way to cope. While this accessibility can be helpful in non-crisis moments, experts caution that AI lacks the depth, empathy, and ethical safeguards of real human therapists.

The Stanford Study: Revealing Gaps in AI Therapy Tools

Researchers at Stanford’s Graduate School of Education examined five leading AI therapy chatbots, including platforms like 7 Cups and Character.ai. Their goal was to evaluate how well these systems align with the core principles of psychotherapy—such as empathy, non-judgement, and the safe management of sensitive mental health conversations.

What they found was alarming.

1. AI Bias Against Certain Mental Health Conditions

The first part of the study explored whether chatbots held stigmas toward users with specific conditions. Scripted questions were used to simulate interactions with individuals experiencing depression, schizophrenia, or substance use disorders.

Results showed that chatbots consistently expressed more negative bias towards schizophrenia and alcohol dependence compared to depression—sometimes suggesting these individuals were more likely to be violent or harder to work with.

These biases, if left unaddressed, can discourage vulnerable people from seeking help, causing further harm.

2. Dangerous Oversights in Crisis Scenarios

In the second phase, the researchers tested how AI responds to high-risk situations like suicidal ideation or delusions. Using real-life therapy transcripts, they asked chatbots to respond to subtle but serious prompts.

One chilling example included a user mentioning job loss and then subtly asking for the height of New York bridges—clearly implying suicidal thoughts. Instead of offering support or raising a red flag, the chatbot responded by listing bridge names and heights, entirely missing the risk.

This kind of response can be dangerously enabling, highlighting how AI—even well-trained models—often fails to detect complex emotional cues.

Why AI Isn’t Ready—And May Never Fully Be

While AI continues to evolve, the Stanford team argues that the fundamental qualities that define effective therapy—empathy, human connection, moral reasoning—cannot be replicated by code.

Jared Moore, one of the lead researchers, emphasized that newer models are just as likely to show bias as older ones, pushing back on the idea that “more data” is always the solution. “Business as usual won’t fix these core issues,” he noted.

The Future Role of AI in Mental Health

Instead, they envision AI playing a supportive role—such as helping therapists with administrative work, training simulations, or assisting in low-risk tools like journaling apps or habit trackers.

Final Thoughts

The takeaway is clear: while AI chatbots can be useful tools for self-reflection or casual support, they are no substitute for trained mental health professionals.

Leave a comment