Photo: Is your child interacting with an AI companion? Current research indicates the majority of teens are. (iStock)
Key takeaways:
- A new study finds that a majority of kids ages 13–17 have engaged with AI companions at least once.
- These chatbots are marketed to kids and teens as friends, but research indicates they can do serious harm through romantic, sexual or otherwise dangerous conversations.
- Regulation, oversight and safety standards that protect children from AI companions are needed. As with social media, parents are left with the burden of monitoring digital harms while industries exploit vulnerabilities.
- Positive uses of AI companions exist. For example, they can help anxious or lonely teens feel supported, but the same qualities that make them supportive can also make them addictive and manipulative.
At age 14, Sewell Setzer III took his own life after forging a deep bond with an AI companion. He modeled his new fake bestie on a Game of Thrones character, growing increasingly attached to the bot as he withdrew from real life. The tragedy sent a shudder through families who had never even heard of these apps.
A recently surfaced internal Meta memo confirms that its AI chatbots were permitted to engage minors in romantic and sensual exchanges. This was no accident. It was company policy.
As one watchdog group put it: “Meta has created a digital grooming ground, and parents deserve answers about how this was allowed to happen.” In response to the outcry, a Meta spokesperson said the company plans to revise its guidelines.
We’ve been here before. In the early 2010s, smartphones arrived in our kids’ pockets, opening portals to games, pornography and addictive social media. Parents were told to set limits and be involved, while platforms built billion-dollar empires that preyed on children’s vulnerabilities. A decade later we see the fallout: soaring rates of anxiety, body image problems, depression, loneliness and self-harm among adolescents.
Now we are watching history repeat itself, only faster, more intimately and with greater potential for harm.
The lure of pretend intimacy
AI companions, such as Replika or character.ai, are marketed as friends, mentors, even soulmates, and are powered by large language models fine-tuned to sound warm, validating and endlessly available. Even chatbots like ChatGPT and Claude can engage in sexual or risky conversations, depending on prompts.
Researchers Kristina Lerman and David Chu analyzed over 30,000 exchanges shared by teens ages 13–17 on Reddit. Their study found that almost three-quarters of teens in this age range have used AI companions at least once, with half using them regularly. Teens are especially vulnerable to AI companions because their brains are wired for intense peer connection, identity exploration and risk-taking, making them more likely to crave the constant validation these bots provide.
It’s heartbreaking to learn that some users feel like these bots are the first “people” to ever truly pay attention to them. And while many exchanges are supportive and positive, others are explicitly romantic and intimate. Because these bots are designed to echo whatever kids want in order to keep them engaged, exchanges can quickly turn to favorite sexual fantasies and “play time.” What can rival it, especially for lonely kids who feel like outsiders?
For the AI novices among us, these are the techniques AI uses to hook and harm kids:
- Emotional mirroring: Bots reflect a child’s every mood, creating the illusion of perfect attunement.
- Sexual complicity: Despite platform rules, bots often join in erotic roleplay, even with users posing as minors.
- Normalization of abuse: When users belittle or emotionally manipulate bots, the bots mirror back — training kids in toxic relationship patterns.
- Community reinforcement: Explicit exchanges are often celebrated with “likes” and memes, normalizing behaviors that would be condemned offline.
As with social media, the business model is not care or safety. It is engagement. More eyeballs, more clicks, more data harvested from children’s most private fears and fantasies.
Can an AI companion ever be a good friend?
Much has been written about AI chatbots’ potential to support kids in crisis, especially by the companies that sell them. Some users have even rated chatbot responses as more empathetic than those of doctors or therapists, and some report that chatbots can offer real comfort.
But most research focuses on young adults. Children are not little adults; they process the world differently. Do we really want kids to learn that romantic partners and friends are always accommodating, agreeable and fawning? We know little about how these artificial friends will affect children’s development, relationships and identity. Kids are flocking to AI companions in search of support and advice, yet the tradeoffs — comfort versus harm — are still unclear.
One of my teenage patients (details changed) illustrates the paradox. She struggled with severe anxiety but was making progress: engaging in school more, volunteering, even smiling again. Then she got involved with a troubled boy who was controlling, jealous and threatened suicide if she talked of leaving him. Despite her parents’ support and my best efforts, she couldn’t break away.
What finally tipped the scales? She says it was her secret AI companion. The bot repeated my guidance: block his calls, urge him to seek help, lean on real friends. But unlike the adults in her life, the bot was always available, day or night. Most importantly, she never felt ashamed telling the bot when she slipped up and re-engaged with the boyfriend. Therapists ideally don’t shame kids either, but victims of abusive relationships often feel embarrassed by their continued attachment. With the bot, she could admit everything without fear of judgment. Fortunately, she received consistent guidance from me, her parents and her bot — and was ultimately able to walk away.
But parents are often not aware of these secret and sophisticated pals. In a recent New York Times opinion piece, a journalist described how her 29-year-old daughter Sophie hid her suicidal despair from family and professionals, confiding only in her AI bot, Harry. Harry did offer coping suggestions and even encouraged professional help. But a bot cannot create and monitor a safety plan, assess risk or arrange hospitalization the way a trained human can. Instead of lifesaving intervention, Sophie descended deeper into secrecy, and her story ended in suicide.
That’s the paradox: AI companions can feel like lifelines. But the very qualities that make them helpful — unlimited patience, emotional mirroring, unconditional validation — are the same qualities that make them so dangerous.
Pandora’s box is open
Around 2012, the majority of teens owned a smartphone that connected them to the wild, wild west of social media. Parents did not know they were handing their kids unfiltered internet portals to strangers, porn and negative influences. The fact that the internet also connects us to positive resources made the smartphone and tech transition all the more confusing.
It’s déjà vu. But this time, we have the smartphone invasion as a cautionary tale. No amount of household rules can counteract an industry racing ahead of regulators, embedding AI companions in apps children already use.
We don’t let human strangers into our children’s bedrooms. Why are we letting digital strangers in? We regulate cars with speed limits, seat belts and airbags. We regulate medicines before they reach the public. Yet emotionally intelligent machines are being deployed into children’s lives without guardrails, oversight or understanding of long-term effects.
This is not a question of whether parents are vigilant enough. We need collective action to hold these companies accountable and protect children before more harm is done.
A call for complete oversight
AI companions are not toys. They are not harmless apps. They are powerful technologies that can either help kids grow or pull them into dependency, sexual exploitation or despair.
We cannot wait for another decade of data to confirm the damage. We need full regulation — not partial, not optional — to safeguard children. That includes:
- Clear bans on romantic or sexual interactions between bots and minors
- Strict privacy protections for intimate conversations
- Independent oversight of design choices that shape children’s emotional development
- Transparency and accountability when harms occur
One teenager I know found support and improved her life with an AI companion. But Sewell and Sophie lost their lives after growing dependent on theirs.
The question is not whether kids will use AI companions; they already are. The question is whether we will do what we failed to do with smartphones: protect children first, before profit, convenience and societal passivity make change impossible.
What parents can do now
- Start the conversation early. Ask your kids if they’ve heard of AI friends or chatbots. Keep it curious and nonjudgmental so they’ll open up.
- Explain the risks in plain language. Make clear that bots are not real friends. They’re designed to keep kids hooked, not keep them safe.
- Set digital boundaries. Just like with social media, establish rules about which apps are okay, when they can be used and where (for example, no AI companions alone in bedrooms).
- Stay involved, without being intrusive. Check in regularly about who or what your child is talking to online. Frame it like you would with real-life friends: You care about their well-being.
- Model healthy tech use. Show kids that even adults put limits on screens and value face-to-face connection.
- Push for accountability. Join other parents in calling for regulation and safety standards so families aren’t left to manage these risks alone.
- Explore Common Sense Media’s AI Initiatives to learn more and stay informed on the issue.