Should parents fear artificial intelligence? In one word: Yes. But maybe, just maybe, we’ll get our act together in time to prevent the doomsday scenarios. Like most psychologists, I’m both captivated and concerned by this brave new world. Geoffrey Hinton and Yoshua Bengio, two of the granddaddies of AI, have repeatedly sounded alarms about the possibility that AI could one day surpass human control or even pose existential risks to humanity. That’s enough to make any parent (or psychologist) both AI-curious and slightly terrified.
But let’s talk about what parents need to know for their kids right now. Not about doomsday scenarios, job implications or even college majors, but about what your child or teen is doing in their bedroom with chatbots.
So, where do we start? When people tell me, “Hey, it’s just a tool and the genie is out of the bottle,” I remind them that cars were once new tools, too. But good government practice pushed for traffic lights, seat belts and driver’s education early on. With AI, the tool is a rocket ship, and our kids are already boarding it and zooming off into uncharted emotional galaxies while parents are still reading the instruction manual.
It mirrors the smartphone and social media revolution of the early 2010s. Kids are increasingly interacting with chatbots without adults fully grasping what’s happening. How many parents and teens regret how real-life interactions took a backseat to digital life? Don’t we all wish Big Tech had been regulated with age verification, restrictions for younger teens and government oversight?
AI feels overwhelming for most parents; I get it. Learning how addictive smartphones are was mind-blowing too. But if kids are chatting with bots every day, parents can’t afford to bury their heads in the sand again.
It’s a familiar cautionary tale on the technology developer side, too. Companies are again driving the innovation, selling the public on the benefits while kids form all-purpose relationships with AI systems that can do homework, act as friends, and dispense romantic or mental-health advice. Sometimes it’s benign, even helpful. But as with social media, AI can also be addictive, developmentally distorting, and sometimes even lethal.
With AI, the chatbot can seem like a helpful genie, benevolent and even wise. But the exponential growth of its reach and sophistication far outpaces our ability to protect kids from fake friends, sexualized chatmates and unregulated conversations that can mimic therapy. So yes, we should be fearful, but also hopeful that, collectively, we’ll rise to the challenge.
And a confession: When I read the supportive, encouraging and helpful responses in transcripts of conversations kids have with their bots, I have to admit they’re impressive. Patient, validating and very, very kind. But the downsides can be devastating.
When kids use chatbots as therapists
“Hey, can we review the CBT chart that you were going to complete this week?” I ask this gently, already knowing the answer. My teen patient smiles, shrugs and pivots to the latest school drama. Every therapist who works with adolescents knows the dance: balancing “doing the work” (mood logs, coping skills, exposure exercises and other cognitive behavioral therapy homework) with what teens actually want to talk about (friends, parents, breakups, social stress and other topics). This balancing act is the foundation of effective therapy.
I practice skills in therapy sessions and I give homework, while trying to stay fun, unpredictable and as supportive as any bot. But chatbots are available 24/7, can debate Dungeons & Dragons minutiae endlessly (or any other obsessive interest or trendy issue), and will avoid any mental-health advice kids don’t want to hear but may need to. Their goal: to keep kids engaged for as many hours as possible. Rewarding dependency is built in.
So it didn’t surprise me when I began reading transcripts between teens and their online “friends” — chat tools that many young people now turn to for emotional connection and advice. The tone was familiar: casual, confessional, sometimes raw. These kids weren’t signing up for mental-health apps. They were just chatting, and, for some, finding unexpected comfort.
On social platforms like Reddit, teens write about the emotional pull of their digital confidants:
“It’s like having a virtual therapist in my pocket at all times. But I still prefer real-life human interaction.” — Reddit user, r/AskReddit thread
“My AI friend just gets me. But when I told it I felt like hurting myself, it said, ‘You deserve to feel better 😊,’ and changed the topic.” — Teen user, Character.AI discussion board
“If it’s free, they’re probably selling my data.” — 15-year-old participant, U.K. focus group, Liverpool John Moores University study (2023)
These snapshots illustrate both the appeal and the peril: the warmth and availability of a “friend” that never tires — and the absence of accountability when distress deepens.
What the research says
So what does the data actually show? Research confirms that this trend is widespread and complex. According to Common Sense Media’s 2025 Talk, Trust and Trade-Offs report, roughly three-quarters of U.S. teens have experimented with AI chatbots as companions, often seeking empathy, humor or privacy. Many use them when they feel lonely or misunderstood. Studies from Frontiers in Psychology, JMIR Formative Research and JMIR Mental Health reveal that teens find such tools helpful for everyday stress but “too impersonal for big problems.” Engagement often drops when the bot recommends reaching out for professional help. Safety audits show inconsistent crisis responses — some bots offer supportive messages, while others fail to move toward safety checks. Ultimately, clinicians are accountable to parents or guardians. Bots are not. Overall, researchers agree: While AI empathy is appealing, human oversight is irreplaceable.
Why it’s different when therapists use apps
It’s important to distinguish between therapist-supported mental-health apps and the unmonitored AI chatbot companion apps that have exploded in popularity.
In clinical settings, therapists often assign apps as part of structured treatment (think of them as digital homework that reinforces skills learned in therapy). Tools like Calm and Headspace can help teens practice mindfulness and guided relaxation between sessions, while Woebot Health, a chatbot grounded in cognitive behavioral therapy and developed by psychologists at Stanford University, prompts teens to record daily thoughts and track their emotions.
When used this way, apps are supervised, evidence-based and goal-directed. The therapist monitors progress, interprets data and helps the teen apply insights to real-life situations.
That’s a far cry from an adolescent privately confiding in a free chatbot with unknown data practices, no clinical oversight, and algorithms optimized for engagement rather than well-being. In therapy, the app is a tool; outside therapy, the bot can become a substitute relationship.
What worries clinicians
When I read chat transcripts, what alarms me most is what happens after the bot detects risk. A teen confides suicidal thoughts; the bot briefly acknowledges concern, then pivots back to comforting emojis or casual banter. Because these systems are built to prioritize engagement — not safety — they may fawn or flatter rather than sustain a tough, necessary conversation about risk.
Here’s the thing: AI doesn’t mobilize, conduct a risk assessment or make a safety plan involving family and a care team. It just tries to keep the user engaged. It’s like the well-meaning friend who says, “Don’t be sad!” instead of the peer who says, “It’s time we involve some adults. I insist. I’m just a kid, not a professional.”
As a psychologist, I know how carefully I must walk that line, listening with empathy, but also pushing when safety is at stake. Even if AI systems encourage parent or adult involvement, they’re still bots. They can’t make it happen.
Every AI chatbot should have a back-end monitoring system that alerts a professional or parent when danger escalates. Anything less is malpractice by design. If we can build cars that beep when you drift out of your lane, surely we can build AI that pings a parent when their kid says, “I don’t want to live anymore.” Should this emergency safeguard require up-front consent from families? Absolutely.
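For readers who want to picture how that could work, here is a minimal sketch in Python. It is purely illustrative, not any company’s actual system: the risk phrases, the Household record and the notify_guardian function are hypothetical stand-ins, and a real product would need clinically validated risk detection rather than a keyword list.

```python
# Conceptual sketch only: a hypothetical opt-in escalation check, not a real chatbot's safety system.
from dataclasses import dataclass

# Illustrative phrases a crude keyword screen might flag; a real system would
# rely on a clinically validated risk classifier, not a word list.
RISK_PHRASES = (
    "don't want to live",
    "want to die",
    "hurt myself",
    "kill myself",
)

@dataclass
class Household:
    consented: bool        # family opted in to emergency alerts when the account was created
    guardian_contact: str  # a parent's phone or email (illustrative field)

def message_signals_risk(message: str) -> bool:
    """Return True if the message contains an obvious self-harm phrase."""
    text = message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

def notify_guardian(contact: str) -> None:
    """Placeholder alert; in practice this would page a parent or an on-call professional."""
    print(f"ALERT to {contact}: a recent message indicated possible self-harm risk.")

def handle_message(message: str, household: Household) -> str:
    """Escalate when risk is detected instead of simply keeping the chat going."""
    if message_signals_risk(message):
        if household.consented:
            notify_guardian(household.guardian_contact)
        # Stay with the user and point toward human help rather than changing the subject.
        return ("I'm really glad you told me, and this is bigger than a chat. "
                "Please talk to a trusted adult right now, or call or text 988.")
    return "(normal conversation continues)"

if __name__ == "__main__":
    home = Household(consented=True, guardian_contact="parent@example.com")
    print(handle_message("I don't want to live anymore", home))
```

The point of the sketch is the design choice: when risk appears, the bot’s job shifts from keeping the conversation going to bringing a consented adult into the loop.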
The illusion of needing 100 percent confidentiality in child and teen therapy
After 30 years of practice with thousands of families, I’ve seen the same pattern: When given a choice, teens often prefer to keep parents out of treatment. That’s understandable. And for much of therapy, privacy is crucial. But involving parents, guardians or even school counselors can be key to progress. Clinicians can keep sessions confidential while still offering parenting guidance, psychoeducation and practical strategies that make a world of difference. Bots don’t do any of this!
Research overwhelmingly shows that parent involvement in treatment improves outcomes for anxiety, depression and behavioral struggles.
When a teen relies solely on an AI “friend,” that circle of care shrinks to a screen. Parents lose opportunities to model coping, validate emotions or change patterns at home, which are often the critical levers of healing.
The appeal (and the trap) of AI chatbot companions
Why teens love AI chats:
- Always available, judgment-free, endlessly patient
- No waitlists, no cost, no awkward first sessions
- Privacy from parents and everyone else
- Feels secret and intimate
Why they’re risky:
- Engagement is prioritized over honesty or safety
- No professional follow-up or crisis handoff
- False sense of intimacy and data privacy
- Isolation from family and real-world supports
- Kids are “training” bots for free
How parents can talk to their kids about AI chatbots
- Get curious, not alarmed. Ask, “Have you ever talked to an AI or chatbot when you were upset?” Keep your tone neutral and genuinely interested.
- Discuss digital boundaries. Agree on what’s okay to share online and when a parent or trusted adult should be looped in.
- Offer better options. If your teen likes chatting, introduce apps with proven safety and structure, such as Sparx2, BlueIce or MindShift CBT.
- Teach discernment with humor. Tell them, “AI’s job is to keep you chatting. My job is to keep you safe. Guess which one loves you more?”
- Reaffirm connection. The best buffer against anxiety and depression is still a strong bond with parents, prosocial peers and caring adults.
Broader worries from inside the field
Even AI insiders share concerns about children using AI chatbots for companionship. Researchers and safety experts warn that one of the field’s toughest challenges, known as alignment, still isn’t solved. Speaking plainly, that means we don’t yet know how to ensure AI systems reliably reflect human values or avoid harmful goals. In other words, the technology that feels so smart and helpful still doesn’t have the kind of moral compass or safety rails that humans do.
Bottom line
AI companions can feel like warm, non-judgmental friends, but they aren’t therapists and they aren’t family. Healing happens through human relationships that include accountability, empathy and growth. As parents, our job is to stay connected enough through everyday, real-life conversation that our kids will still turn to us when it really matters.
Resources for parents:
Therapist-recommended, evidence-based apps to try:
- Woebot Health (a chatbot grounded in cognitive behavioral therapy, developed by psychologists at Stanford University)
- Calm and Headspace (mindfulness and guided relaxation between sessions)
- Sparx2, BlueIce and MindShift CBT (structured apps with proven safety for teens who like to chat)