People Are Already Using AI for Therapy. The Problem Is It Was Never Designed for It
People are already using AI for therapy.
The problem is that most of these systems were never designed for that role.
More people are turning to AI tools for emotional support.
They ask questions about relationships.
They process difficult conversations.
Some talk through grief, anxiety, or loneliness.
Estimates suggest 20–50% of people have already used AI chatbots such as ChatGPT for emotional support or informal “therapy.” Use skews younger: around 1 in 8 adolescents and young adults report seeking mental-health advice from AI tools.
The reasons are clear.
AI is available 24/7.
It requires no appointments.
It is often free or very low cost.
At the same time, access to human mental-health support remains limited.
Notably, AI is also beginning to appear inside clinical practice.
A recent American Psychological Association article, “AI Is Reshaping Therapy,” reports that almost one in three practitioners (29%) now use AI at least monthly, and more than half of psychologists have used AI tools in some capacity, often for documentation, administrative work, or exploratory uses in care delivery.
So AI is entering the mental-health ecosystem from both sides.
But an important distinction often gets blurred.
Just because AI can be used for emotional support doesn’t mean it was designed for it.
Most large language models are general-purpose systems built to generate useful responses across many topics.
Human vulnerability is not a general-purpose problem.
Mental-health support requires nuance, boundaries, and responsibility. These systems may be interacting with people during some of the most fragile moments of their lives.
Some studies suggest that while many users report benefits, around 11% experience worsened symptoms, raising important questions about accuracy, crisis management, dependency, and privacy.
When technology begins to intersect with grief, trauma, and psychological distress, design decisions matter.
Questions like:
• What guardrails should exist?
• How do we avoid unhealthy dependency?
• When should technology encourage human support?
• How should clinicians shape these systems?
These questions need to shape these systems from the beginning.
It’s something we think about a lot while building Solace.
Not because technology should replace therapists. It shouldn’t.
And not because technology should replace human relationships. It can’t.
But because there is a real gap between the number of people experiencing grief, loneliness, or emotional distress, and the number who receive meaningful support.
AI will inevitably become part of the mental-health ecosystem.
The real question is not whether people will use it.
The question is whether we design it responsibly for the role it is already beginning to play.
I’m curious how clinicians, researchers, and technologists are thinking about this shift.
First published on LinkedIn, 12 March 2026