The Risks of Kids Using AI Chatbots for Mental Health and Emotional Support

The Head of Adolescent Medicine at Seattle Children’s explains the risks and what parents can do

Kellie Schmitt
Increasingly, kids are turning to AI chatbots for emotional connections or relationship insights. Photo: iStock

Editor’s note: This article was sponsored by Seattle Children’s Hospital.

Interacting with AI companions can be disorienting, especially when they offer human-sounding comfort, advice or emotional support. At times, talking to a chatbot may feel just like conversing with a good friend.

“Children and teens can’t always distinguish between what’s real and what’s not,” says Dr. Yolanda Evans, the head of adolescent medicine at Seattle Children’s Hospital. “But chatbots don’t have feelings and they don’t have empathy.”

AI bots pose challenges for youth that extend beyond misinformation and factual inaccuracies. Increasingly, kids are turning to the platforms for emotional connections or relationship insights. That growing reliance raises important questions about the impact on their emotional well-being and social development, experts say.

“As chatbots and AI are becoming more and more common, kids are reaching out to something that’s not human to get advice,” she says.  

Teens turn to AI  

Already, 72 percent of teenagers have used AI companions at least once, and more than half are regular users, according to a 2025 report from Common Sense Media. About a third of those teens are turning to the bots for social interactions and relationships, including conversation practice, emotional support and friendship. While some chatbots simply provide text responses, more advanced models can take the form of characters or avatars.

Because humans rely on language to communicate, a chatbot can feel like a real person, explain the authors of a paper on AI and chatbot ethics published in Digital Health. While many public-facing chatbots immediately acknowledge they’re not people, other platforms may not make that fact clear.

But chatbots can’t think or feel, and don’t have a duty to protect kids, points out Dr. Joanna Parga-Belinkie in her American Academy of Pediatrics article. As a result, they can give “false, threatening, misleading, violent or overly sexual answers and advice to young users,” the article notes.  

A one-sided relationship

The ease and accessibility of chatbot interactions can quickly draw people in. The bots respond immediately, mimic human language and appear to eagerly listen.

“It feels natural and easy to start having a conversation with this artificial intelligence,” Evans says. “It feels like you’re talking to a peer.”

Fellow humans may struggle to compete with a bot that puts the user at the center of conversations — and is quick to dish out compliments.

“Your friend and your family might be telling you something that’s hard to hear,” she says.

But navigating those messy human relationships is an essential part of adolescence and of forming one’s identity. Plus, investing emotional energy in a bot can crowd out human friendships.

“Even if the chatbot isn’t harmful, you could be ignoring peers or the relationships you could have built,” she says.

Real-life examples abound

In her practice at Seattle Children’s, Evans has already witnessed numerous examples of problematic AI conversations. But even chats that seem benign could negatively influence a child’s choices. AI companions might not consider someone’s personality, history or the broader context of a situation. Take the scenario of a teen who seeks social advice, Evans says.

“Kids will say: I’m feeling really sad. My best friend told me she doesn’t want to be around me. What should I do?” she says.

Generic chatbots aren’t specifically trained in offering age-appropriate mental health guidance. As a result, their advice might not fit a child’s distinct situation, experience or needs. In the scenario of the best friend conflict, the bot may suggest writing that person off — without considering the broader context.   

Similarly, Evans offered the example of a child who vents to the chatbot about their musical instrument. Playing the instrument feels too hard and they ask AI for advice.

The chatbot might cheerily advise trying a different instrument or ask if the user would like recommendations for a new hobby. Contrast that with the guidance of a parent or teacher who knows the specific child and their life experiences. That human conversation may consider the social interactions the band provides and the value of those peer relationships. The adult might also know about other options available to the family, such as private lessons.

Even worse: Chatbot conversations could lead to dangerous outcomes, she added. There have been cases of youth deaths by suicide following AI chatbot interactions.

What can parents do?

Children are navigating an online world in which AI-generated answers are woven into everyday queries, including school research.  

“They don’t have to go far to find it,” Evans explains. “The first thing that pops up after a search is an AI description.”

Parental controls can support safer internet use by blocking websites, filtering inappropriate content and setting screen-time limits. A wide range of options is available to suit an individual family’s needs, notes Common Sense Media’s guide.

With AI being so ubiquitous, though, it’s impossible for parents to monitor all digital interactions. That’s why Evans urges open conversations and ongoing dialogues about AI and bots that empower youth to sharpen their own awareness.

  • Pick a time that isn’t stressful, such as driving in the car together or eating a casual dinner.
  • Approach the conversations with curiosity rather than stress or judgment.
  • Begin a conversation with open-ended questions about your child’s overall screen usage such as: “What sites have you been looking at lately? Are there any that give you pause?”
  • Explain the importance of protecting private information, including their name, address, or school name.
  • Make sure your child knows to get real, human-powered help for mental health and emotional support.

Evans advises talking to children directly about chatbots, explaining that they aren’t human even if it feels that way.

“That might not be obvious to kids,” she says. “You have to dispel that myth.”

In the conversation, ask children what they think could go wrong with AI-generated conversations. Try questions like: “Have you ever gotten a response that wasn’t what you expected?” or “Did a chatbot ever make you feel uncomfortable or ‘yucky’?”

Responsibility and accountability are key

Looking forward, Evans would like to see more guardrails from companies creating AI. Incorporating people with child development expertise could go a long way toward making safer products. Organizations like the American Academy of Pediatrics have supported legislation aimed at creating better digital protections for young users, emphasizing the need for healthier online environments for youth. In an article explaining the group’s digital stance, the AAP also emphasizes the role of tech companies themselves in creating child-centered designs that reduce harm, much like car seats.

Indeed, advocating for better systems is a key strategy. “Asking individual parents to be responsible for teaching everything isn’t fair,” Evans says.

For now, there are some helpful resources such as the AAP’s Center for Excellence on Social Media and Youth Mental Health. The takeaway message for parents: Keep the dialogue going.

“Having open conversations is so critical,” she says. “These AI chatbots are likely not going away. We need to help our kids engage and use them in a safe way.”

How to talk to your child about chatbots

  • Ask children curious, open-ended questions about their experiences with chatbots.
  • Ask children if they’ve ever had an uncomfortable experience with a chatbot.
  • Consider looking together at conversations with chatbots.
  • Explore the differences between human interactions and chatbots, emphasizing that the bots aren’t people.
  • Foster open communication and keep the dialogue going.

For more information and support, check out the following Seattle Children’s Hospital resources: 
