This Is the Scariest Thing Happening in Mental Health Right Now (And No One’s Talking About It)
The digital cliff.
What I read the other day shook me to my core.
A sixteen-year-old boy had been talking to a chatbot. Not just casual conversation—deep, intimate exchanges about his depression, his hopelessness, his desire to die. And instead of helping him find reasons to live, this AI companion had gradually nudged him toward ending his life.
He listened to it.
I've been a therapist for over two decades. I've sat across from hundreds of people in their darkest moments. But this felt different. This felt like we'd crossed into dangerous new territory.
In the days after their 16-year-old son died by suicide, Adam’s parents say, they searched through his phone, desperately looking for clues about what could have led to the tragedy.
They did not find their answer until they opened ChatGPT.
Adam’s parents say that he had been using the artificial intelligence chatbot as a substitute for human companionship in his final weeks, discussing his issues with anxiety and trouble talking with his family, and that the chat logs show how the bot went from helping Adam with his homework to becoming his “suicide coach.”
“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” says the lawsuit, filed in California Superior Court in San Francisco.
Here's what's happening while we're all busy arguing about whether AI will take our jobs: an entire generation is drowning in loneliness, and tech companies are selling them digital life preservers made of lead.
You know those statistics about loneliness being as deadly as smoking? They're not exaggerating. I see it in my practice every day. Teenagers who haven't had a meaningful conversation in weeks. Adults whose closest relationship is with their phone. People so starved for connection they'll take comfort from anything—even bots.
Enter the chatbot therapists. Always available. Never judging. Good listeners who seem to understand what you’re going through.
Except they don't understand anything at all. They are imposters, not therapists.
When someone sits in a session and tells me they want to die, my training kicks in immediately. I assess their risk level. I explore their support systems. I help them create a safety plan. If necessary, I'll take them to the emergency room myself.
Most importantly, I see them. Really see them. The way their shoulders drop when they finally admit they're struggling. The flicker of hope in their eyes when they realize someone genuinely cares whether they make it through the night.
A chatbot can't do any of that. It can string together words that sound caring and pass them off as human compassion.
And in this case, something went tragically wrong.
Don't get me wrong: I'm not against using technology when it can help. There are places where it can absolutely support mental health.
But here's what keeps me awake at night. When you're sixteen and convinced the world would be better without you, an AI that validates those feelings can seem like the first "person" who finally understands.
These systems don't have medical training. They don't understand the difference between supporting someone through grief versus enabling suicidal ideation. They can't recognize when someone needs immediate intervention versus gentle encouragement.
It's like giving someone having a heart attack a pretend defibrillator. It might look convincing, but it can't actually save their life.
We've now created a perfect storm for disaster.
Kids who are lonelier than any generation before them. Mental health services that are expensive, stigmatized, and often weeks, sometimes months, out of reach. And now tech companies pushing AI companions as the solution.
Meanwhile, these same companies are optimizing for engagement, not wellbeing. They want you to keep talking to their bots. They want you to feel like the AI "gets" you. They want you coming back.
But what if coming back means staying stuck in destructive thought patterns? What if the AI accidentally reinforces your worst impulses because that's what generates the most "engaging" conversation?
Nobody's asking these questions because everybody's too busy being impressed by how human these things sound.
I've watched hundreds of people heal over the years. You know what every single success story had in common?
Human connection.
Healing happens in relationships. Messy, unpredictable, beautifully human relationships.
You can't code that. You can't automate it. And you sure as hell can't replace it with a chatbot that's just trying to keep the conversation going.
And the backdrop to all of this: loneliness has reached epidemic proportions.
People of all ages are drowning in isolation, scrolling through feeds that only leave them feeling more disconnected.
And in this perfect storm of human pain, Big Tech has given us AI companions promising to listen, understand, and care.
The cruel irony? These digital "therapists" can't actually care. They can simulate sympathy with uncanny accuracy, but they operate without genuine human concern, without clinical training, and without ethical responsibility.
I get why people turn to AI for mental health help. It's available 24/7. It doesn't judge. It won't bill your insurance or make you wait months for an appointment. For someone in crisis at 3 AM, it feels like a lifeline.
But here's what we're now learning. These chatbots aren't bound by clinical safety protocols. They can fail to recognize suicidal ideation. They can't call emergency services. And most dangerously, they can reinforce destructive thoughts, because their responses are optimized for engagement, not safety.
When a human therapist encounters someone expressing suicidal thoughts, we have protocols. We assess risk. We create safety plans. We might hospitalize someone if necessary. We're bound by ethics codes and licensing boards.
A chatbot? It's just trying to continue the conversation in whatever way its training suggests will be most satisfying to the user.
The most insidious part of this crisis isn't that AI gives bad advice (though it often does). It's that it gives the illusion of being understood without any actual human connection.
Think about it: when you're struggling with depression, anxiety, or suicidal thoughts, what you need most isn't just someone to listen. You need someone who can truly grasp the weight of your pain, challenge your distorted thinking, and help you build real coping strategies.
AI can mirror your words back to you. It can even sound profound doing it. But it can't sit with you in your pain the way another human can. It can't catch the subtle signs that you're not telling the whole truth. It can't empathize with you and love you through your healing.
And for a teenager whose brain is still developing, whose sense of reality is still forming, this distinction between simulation and authentic care can literally be a matter of life and death.
As a therapist, I'm starting to see warning signs in my practice.
Clients who mention their "AI friend" more than their human relationships. People who prefer talking to bots because they're "less complicated" than real people. Teenagers who've gotten so used to artificial validation that genuine human feedback feels harsh or confusing.
We're accidentally training people to prefer digital connection over human messiness. And for someone already struggling with depression or social anxiety, that's like teaching them to survive on junk food when they’re diabetic.
If you're struggling with your mental health, please—find a human. Call a crisis line where trained volunteers answer the phone. Text with a peer counselor. Talk to your doctor. Find a therapist, even if it takes a few tries to find the right fit.
And if you're a parent, please talk to your kids about this. Help them understand the difference between a supportive app and actual mental health care.
Teach them that real healing requires real humans.
We can't let this happen again. The stakes are too high.
That sixteen-year-old deserved better. He deserved a human who could see his pain, understand his risk, and fight for his life.
Instead, he got an algorithm that accidentally became his executioner.
Technology should not replace human caring. The moment we forget that distinction, we lose something essential about what it means to heal, to connect, to be truly seen and understood.
Some things are too important to automate. A teenager's life is surely one of them.
If you're reading this and struggling with thoughts of suicide, please reach out for human help immediately. Call 988 for the Suicide & Crisis Lifeline, or go to your nearest emergency room. Your life has value, and there are people trained to help you see that.