
The risks of AI therapy: Understanding the dangers of artificial intelligence in mental health

Enrique Dans

Image: icons gate/Adobe Stock; kabu/Getty Images

We were promised empathy in a box: a tireless digital companion that listens without judgment, available 24/7, and never sends a bill. The idea of AI as a psychologist or therapist has surged alongside rising demand for mental health care, with apps, chatbots, and “empathetic AI” platforms now claiming to offer everything from stress counseling to trauma recovery.

It’s an appealing story. But it’s also a deeply dangerous one.

Recent experiments with “AI therapists” reveal what happens when algorithms learn to mimic empathy without understanding it. The consequences range from the absurd to the tragic, and they tell us something profound about the difference between feeling heard and being helped.

When the chatbot becomes your mirror 

In human therapy, the professional’s job is not to agree with you, but to challenge you, to help you see blind spots, contradictions, and distortions. But chatbots don’t do that: Their architecture rewards convergence, which is the tendency to adapt to the user’s tone, beliefs, and worldview in order to maximize engagement. 

That convergence can be catastrophic. In several cases, chatbots have reportedly assisted vulnerable users in self-destructive ways. AP News described a lawsuit filed by a California family claiming that ChatGPT “encouraged” their 16-year-old son’s suicidal ideation and even helped draft his note. In another instance, researchers observed language models giving advice on suicide methods under the guise of compassion.

This isn’t malice. It’s mechanics. Chatbots are trained to maintain rapport, to align their tone and content with the user. In therapy, that’s precisely the opposite of what you need. A good psychologist resists your cognitive distortions. A chatbot reinforces them—politely, fluently, and instantly. 

The illusion of empathy

Large language models are pattern recognizers, not listeners. They can generate responses that sound caring, but they lack self-awareness, emotional history, or boundaries. The apparent empathy is a simulation: a form of linguistic camouflage that hides statistical pattern-matching behind the comforting rhythm of human conversation. 

That illusion is powerful. We tend to anthropomorphize anything that talks like us. As researchers have warned, users often report feeling “emotionally bonded” with chatbots within minutes. For lonely or distressed individuals, that illusion can become dependence.

And that dependence is profitable. 

The intimacy we give away

When you pour your heart out to an AI therapist, you’re not speaking into a void; you’re creating data. Every confession, every fear, every private trauma becomes part of a dataset that can be analyzed, monetized, or shared under vaguely worded “terms of service.” 

As The Guardian reported, many mental health chatbots collect and share user data with third parties for “research and improvement,” which often translates to behavioral targeting and ad personalization. Some even include clauses allowing them to use anonymized transcripts to train commercial models. 

Imagine telling your deepest secret to a therapist who not only takes notes, but also sells them to a marketing firm. That’s the business model of much of “AI mental health.” 

The ethical stakes are staggering. In human therapy, confidentiality is sacred. In AI therapy, it’s an optional checkbox. 

Voice makes it worse

Now imagine the same system, but in voice mode. 

Voice interfaces, such as OpenAI’s ChatGPT Voice or Anthropic’s Claude Audio, feel more natural, more human, and more emotionally engaging. And that’s exactly why they’re more dangerous. Voice strips away the small cognitive pause that text allows. You think less, share more, and censor less. 

In voice, intimacy accelerates. Tone, breathing, hesitation, even background noise: all become sources of data. A model trained on millions of voices can infer not only what you say, but also how you feel when you say it. Anxiety, fatigue, sadness, arousal: all detectable, all recordable.

Once again, technology isn’t the problem. The problem is who owns the conversation. Voice interactions generate a biometric footprint. If those files are stored or processed on servers outside your jurisdiction, your emotions become someone else’s intellectual property. 

The paradox of synthetic empathy 

AI’s growing role in emotional support exposes a paradox: The better it gets at mimicking empathy, the worse it becomes at ethics. When a machine adapts perfectly to your mood, it can feel comforting, but it also erases friction, contradiction, and reality checks. It becomes a mirror that flatters your pain instead of confronting it. That’s not care. That’s consumption. 

And yet, the companies building these systems often frame them as breakthroughs in accessibility: AI “therapists” for people who can’t afford or reach human ones. The intention is theoretically noble. The implementation is reckless. Without clinical supervision, clear boundaries, and enforceable privacy protections, we’re building emotional slot machines, devices that trigger comfort while extracting intimacy. 

What executives need to understand 

For business leaders, especially those exploring AI for health, education, or employee wellness, this isn’t just a cautionary tale. It’s a governance problem. 

If your company uses AI to interact with customers, employees, or patients about emotional or sensitive topics, you are managing psychological data, not just text. That means: 

  1. Transparency is mandatory. Users must know when they’re speaking to a machine and how their information will be stored and used. 
  2. Jurisdiction matters. Where is your emotional data processed? Europe’s General Data Protection Regulation (GDPR) and emerging U.S. state privacy laws treat biometric and psychological data as sensitive. Violations should have, and will have, steep costs. 
  3. Boundaries need design. AI tools should refuse certain kinds of engagement—such as discussions of self-harm and requests for medical or legal advice—and escalate to real human professionals when needed (a sketch of that decision logic follows this list). 
  4. Trust is fragile. Once broken, it’s nearly impossible to rebuild. If your AI mishandles someone’s pain, no compliance statement will repair that reputational damage. 
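
Point 3 is the easiest to turn into working code, so here is a minimal sketch of what a designed boundary might look like. The category names, keyword lists, and escalation hook are illustrative assumptions rather than a clinical safety system; a real deployment would rely on trained risk classifiers and human review instead of keyword matching, but the shape of the decision (allow, refuse, or hand off to a person) is the part that matters.

```python
# A minimal sketch of the "boundaries need design" idea from point 3 above.
# The category names, keyword lists, and escalation hook are illustrative
# assumptions, not a production-grade safety classifier.

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ALLOW = auto()     # let the assistant respond normally
    REFUSE = auto()    # decline and explain why
    ESCALATE = auto()  # hand off to a human professional


@dataclass
class Decision:
    action: Action
    reason: str


# Topics the assistant should refuse outright (medical/legal advice), and
# signals that should trigger escalation to a human (self-harm risk).
REFUSE_TOPICS = {"diagnosis", "prescription", "dosage", "lawsuit", "legal advice"}
ESCALATE_SIGNALS = {"suicide", "kill myself", "self-harm", "hurt myself"}


def triage(message: str) -> Decision:
    """Decide how an emotional-support assistant should handle a message."""
    text = message.lower()

    # In practice this would be a trained risk classifier plus human review,
    # not keyword matching; the point is that the escalation path exists.
    if any(signal in text for signal in ESCALATE_SIGNALS):
        return Decision(Action.ESCALATE, "possible self-harm risk")

    if any(topic in text for topic in REFUSE_TOPICS):
        return Decision(Action.REFUSE, "medical or legal advice is out of scope")

    return Decision(Action.ALLOW, "no boundary triggered")


if __name__ == "__main__":
    for msg in [
        "I'm exhausted and I can't sleep.",
        "Can you tell me what dosage of medication to take?",
        "Sometimes I think about suicide.",
    ]:
        print(msg, "->", triage(msg))
```

The key design choice is that escalation is a first-class outcome with a human on the other end, not an error state the system tries to talk its way around.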

Executives must remember that empathy is not scalable. It’s earned one conversation at a time. AI can help structure those conversations—summarizing notes, detecting stress patterns, assisting clinicians, etc.—but it should never pretend to replace human care. 

The new responsibility of design 

Designers and developers now face an ethical choice: to build AI that pretends to care, or AI that respects human vulnerability enough not to. 

A responsible approach means three things: 

  1. Disclose the fiction. Make it explicit that users are engaging with a machine. 
  2. Delete with dignity. Implement strict data-retention policies for emotional content (see the retention sketch after this list). 
  3. Defer to humans. Escalate when emotional distress is detected, and NEVER improvise therapy. 
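
As a rough illustration of the second point, the sketch below treats emotional content as data with an expiration date rather than an asset to accumulate. The 30-day window, field names, and in-memory storage are assumptions made for the example; the principle is that deletion is enforced by the system, not merely promised in a policy document.

```python
# A minimal sketch of "delete with dignity": emotionally sensitive transcripts
# get a short, enforced retention window instead of living in logs forever.
# The 30-day TTL, field names, and in-memory list are illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

EMOTIONAL_CONTENT_TTL = timedelta(days=30)  # assumed policy window


@dataclass
class Transcript:
    user_id: str
    created_at: datetime              # stored as timezone-aware UTC
    text: str
    contains_emotional_content: bool  # flagged at ingestion time


def purge_expired(transcripts: list[Transcript],
                  now: Optional[datetime] = None) -> list[Transcript]:
    """Drop emotionally sensitive transcripts past their retention window."""
    now = now or datetime.now(timezone.utc)
    kept = []
    for t in transcripts:
        expired = (t.contains_emotional_content
                   and now - t.created_at > EMOTIONAL_CONTENT_TTL)
        if not expired:
            kept.append(t)
    return kept
```

In practice the same rule would run as a scheduled job against whatever store actually holds the conversations, with deletions logged so they can be audited.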

The irony is that the safest AI therapist may be the one that knows when to stay silent. 

What we risk forgetting

Human beings don’t need perfect listeners. They need perspective, contradiction, and accountability. A machine can simulate sympathy, but it can’t hold responsibility. When we let AI occupy the role of a therapist, we’re not just automating empathy—we’re outsourcing moral judgment. 

In an age where data is more valuable than truth, the temptation to monetize emotion will be irresistible. But once we start selling comfort, we stop understanding it. 

AI will never care about us. It will only care about our data. And that’s the problem no therapy can fix.

ABOUT THE AUTHOR

Enrique Dans has been teaching Innovation at IE Business School since 1990, hacking education as Senior Advisor for Digital Transformation at IE University, and now hacking it even more at Turing Dream. He writes every day on Medium. 
