Teens Are Turning To AI For Mental Health – And It’s Working

According to Forbes, a recent nationwide survey reveals that about 13% of young people aged 12 to 21 are using AI for mental health purposes, which translates to roughly 5.4 million teens and young adults. The research, published in JAMA Network on Psychiatry on November 7, 2025, found that usage is highest among 18- to 21-year-olds, with about 65% of users accessing AI mental health advice at least monthly. Most strikingly, nearly 93% of those using AI for mental health reported finding the advice helpful. The survey contacted 2,125 youths, of whom 1,058 responded, an unusually high 50% response rate that still leaves questions about the non-respondents.

The Helpful AI Problem

Here's the thing that really jumps out: 93% found the AI advice helpful. That's an overwhelmingly positive number, but it's also kind of terrifying. When something works that well, people become dependent on it. They trust it. And that's exactly where the danger lies with current AI systems that aren't medically validated.

Think about it – we’re talking about young people in vulnerable mental states getting advice from systems that have already faced lawsuits for inadequate safeguards. These AI models can sometimes help users co-create delusions or provide dangerously inappropriate guidance. The fact that it feels helpful doesn’t mean it’s actually safe.

Who Isn’t Answering?

Now let's talk about that 50% response rate. Sure, it sounds great for a survey, well above the roughly one-third rate researchers typically hope for. But here's what keeps me up at night: we have absolutely no idea what the other 50% are doing.

Maybe all the non-respondents are using AI for mental health too. Or maybe zero are. The survey only included English speakers with internet access, which means we’re missing huge segments of the youth population. Basically, we’re looking at a picture that’s potentially half-complete, and making decisions that could affect millions of young lives.

The Regularity Concern

About two-thirds of these young users are accessing AI mental health advice at least monthly. That's not casual experimentation; that's an established pattern of regular reliance. And the older cohort (18 to 21) is using it most, which makes sense: they have more autonomy, less parental oversight, and probably more complex mental health challenges.

But here’s my question: when these young people develop regular therapeutic relationships with AI systems, what happens when those systems go off the rails? We’ve seen how easily AI can generate convincing but dangerous content. States like Illinois, Nevada, and Utah are already scrambling to pass AI mental health laws because they recognize the risks.

The Bigger Picture

Look, I get why this is happening. Mental health resources are scarce, expensive, and sometimes intimidating for young people. AI is available 24/7, free, and completely confidential. From their perspective, it’s solving real problems.

But we’re essentially running a massive, unregulated experiment on an entire generation. The fact that it seems to be working for most users right now doesn’t mean it’s sustainable or safe long-term. As one expert put it, all the major AI makers will eventually be “taken to the woodshed” for inadequate safeguards. The question is how much damage happens before we get there.
