Key takeaways
- Artificial intelligence is shaping kids’ academic lives. Twenty-five percent of teens report using AI as a tool for homework, writing or studying, often without parents’ knowledge. The biggest concern with AI use isn’t cheating; it’s learning loss.
- When AI chatbots replace the challenging mental tasks that help kids develop cognitive endurance, they lose out on opportunities to learn. This form of “academic help” may be changing brain development.
- As students rely on AI to complete core thinking tasks during critical windows of cognitive development (also called “cognitive offloading”), essential skills such as reading comprehension, critical thinking and writing fluency may atrophy.

“Do your parents know how much you do it?” I ask the teen sitting across from me.
“No,” they say with a smirk. “They’re pretty clueless.”
I try again. “What about adults at school, like your teachers or counselors? Do they know?”
Usually the answer is a shrug. “Not really. But it’s no big deal. There are ways around detection. Everyone does it.”
I push once more. “Any downside? Anything you worry you’re losing by using it so much?”
That’s when the teen smiles and says, “No. Really! It’s all good.”
Are we talking about sex? Drugs? Social media?
Nope. We’re talking about AI and kids using it to do homework, schoolwork and, increasingly, to manage the biggest daily stressor in most teens’ lives: school.
Welcome to the age of AI and “Chat”
"Chat" is the new big thing among teens; it’s also a blanket term they use for any AI interaction (homework help, role playing, venting, mental health support, etc.) no matter the platform. In the past, I’ve written about teens using chatbots for companionship and mental health support. But in this article, I want to focus on something else: what’s happening as the powerful corporate interests behind AI and EdTech move rapidly into your child’s classroom and into your home without your consent and knowledge.
Industry analysts estimate that roughly $7 to $11 billion is currently being invested globally in AI for education, with projections exceeding $100 billion by the early 2030s. That kind of money does not flow without pressure to scale quickly.
To be fair, AI tools can offer real benefits to kids. They can personalize learning, target gaps and meet students at their current skill level. For low-resource schools, AI tools can feel like a lifeline for teachers and administrators. And tutor bots offer privacy and reduce shame for students with learning challenges who compare themselves to peers. As one ninth grader told me, “No one knows I’m actually at a fourth-grade level in math when I use Chat.”
Here’s my biggest concern: AI tools are being rolled out widely in schools without meaningful research on what children may lose when they take massive shortcuts in learning.
Cognitive offloading: When thinking is outsourced
The shortcut that gives teens this relief is what psychologists call cognitive offloading. While young brains are still under construction, kids are handing over the hardest parts of thinking to an AI chatbot. When students use AI to summarize readings, generate rough drafts, organize ideas or polish language, they are not just saving time; they are skipping the important mental work that builds reading comprehension, written expression, critical thinking and analysis — super-challenging but absolutely essential processes. Removing those steps can have serious consequences.
Struggle is a feature of learning, not a bug
Challenging mental tasks physically increase brain growth. You do not get strong by watching someone else do push-ups, nor do you become a musician by listening to a playlist. Likewise, you do not become a clear thinker and communicator by repeatedly handing your thinking over to AI chatbots.
School is a gym for the mind. It is not just about correct answers. For example, schoolwork is often designed to build the capacity to read something complex and track meaning, decide what matters, form an argument rather than just an opinion, write with precision and in one’s own voice, and revise, because much of learning happens in the rewrite.
These often resisted steps are where kids develop cognitive endurance and judgment.
AI offers a bypass to this learning. And paradoxically, its most seemingly innocuous uses may be the most harmful in terms of skipping steps in cognitive development.
Summarizing, outlining and synthesizing are not clerical tasks. They are thinking tasks; taking shortcuts removes the reps.
Neuroscience backs this up. Effortful thinking strengthens neural connections. What students find difficult is precisely what builds academic abilities. When AI does the thinking instead, kids take the shortcut without fully understanding what skills may quietly weaken from disuse.
We become what we repeatedly do
Long before neuroscience, Aristotle argued that we become what we repeatedly do. Habits shape character and judgment. In child development, they also shape cognition.
My favorite aphorism in neuroscience captures this well: “The neurons that fire together, wire together.” What humans repeatedly think, feel and do form neural pathways in their brains. Attention, language and effort are not abstract traits. They are built through use. If a child repeatedly practices wrestling with ideas, their capacity to think deepens. If they repeatedly practice outsourcing that effort, something else takes its place. They learn to rely on chatbots.
Earning grades without learning
We are already seeing this reliance on AI chatbots play out in college settings. In a recent New Yorker article, a college student described using multiple AI tools to write papers for his humanities classes. He completed assignments in less than an hour, work that would previously have taken him eight or nine hours. His GPA was a respectable 3.57. But when asked what he retained, his answer was blunt: “I didn’t retain anything. I couldn’t tell you the thesis for either paper.” The efficiency was real. So was his lack of knowledge.
During a recent ParentEd Talks webinar, I interviewed Sal Khan, founder of Khan Academy. We discussed his enthusiasm for AI-powered tutoring, which individualizes learning. His nonprofit plans to build in safeguards so AI provides scaffolding rather than doing the work for students. Khan acknowledged, however, that many for-profit companies are eager to give users whatever they want and monetize it quickly.
This is where parents should pause. AI is not just helping kids cut corners. It is subtly changing what “doing school” means. Students risk becoming managers of output rather than builders of their own minds. And because the results often look acceptable on the surface, adults often fail to notice how much the learning underneath has thinned.
AI use among teens is common
Parents are often surprised by how casually and pervasively their kids already use AI for homework, and by how often that use shades into cheating. The line between help and cheating is blurry for students, parents and teachers alike.
Using Chat for homework is not a fringe behavior among a few rule-breakers. It is mainstream. A 2024 Pew Research Center survey found that one-quarter of U.S. teens have used ChatGPT for schoolwork, double the rate from the year before. And even that may underestimate daily reliance.
Today’s kids have grown up Googling everything, getting “news” from TikTok and shaping their realities via YouTube. From their perspective, AI can feel like just another efficiency tool.
Teens openly tell each other which bots are best for which tasks, how to prompt for better results, and how to avoid detection. Much of this happens without parents’ knowledge and increasingly without schools’ ability, or will, to meaningfully intervene.
Early warning signs
Schools, for their part, are responding with weary realism. AI is potentially available on any computer with internet access. Educators feel pressure to incorporate it and figure it out as they go. It makes their lives easier too. But when it comes to children’s cognitive development, dealing with the downsides later is not a reassuring plan. It is how we ended up struggling to rein in smartphones and social media years after harms became visible.
We have seen this movie before. In the early 2010s, smartphones and social media reshaped kids’ social lives, attention and sleep long before adults fully understood the costs.
New technologies tend to roll out quickly. Early warnings are dismissed. Kids become the test population. Only later did researchers, most prominently Jonathan Haidt, author of “The Anxious Generation,” link widespread social media use to rising anxiety, sleep deprivation and attention problems among kids and teens. Parents, health advocates and educators are still scrambling to repair that damage. With AI, we risk repeating this pattern.
Some argue the trouble started even earlier, with laptops on every desk. Neuroscientist Jared Cooney Horvath, author of “The Digital Delusion,” documents how heavy classroom screen use correlates with lower academic performance, fragmented attention and the erosion of deep thinking. Using international PISA data, Horvath found that students who spent more than six hours a day on screens at school scored roughly two letter grades lower than peers who used none.
The need to incorporate informed consent
I love my laptop, my smartphone and even AI on occasion. This is not about rejecting technology wholesale. It is about how quickly powerful tools are being introduced into children’s lives without research providing an evidence base, informed consent or meaningful guardrails.
It goes without saying, but it is worth repeating here: Children’s brains are different from adults’. Giving adults AI to speed up thinking is not the same as giving it to children who still need the repeated, often challenging, practice to develop their neural pathways.
It’s not that I want to throw out the digital baby with what might be unhealthy bathwater. It’s about not letting our children swim in untested water.
In medicine and psychology, informed consent is foundational. Parents are told what is known, what is not known, and what risks may exist before trying any new intervention that involves their child. Participation is voluntary, and families have the right to decline.
With AI in schools and homes, there is no such process. No disclosure of unknowns or meaningful ways to opt out. Children are being exposed to tools designed largely by profit-driven companies, without parents being fully informed, let alone asked.
It sometimes seems that schools are more likely to become embroiled in efforts to ban books, restrict health education or challenge evolutionary science than to meaningfully question AI use in classrooms. Parents often tell me they have heard that AI is where the jobs of the future lie, so of course they want their children exposed to it. But an important question remains: Are kids learning how to use AI, or is their learning quietly being co-opted by it?
If we were serious about doing this responsibly, we would study AI the way we study other developmental interventions. We would compare groups of students with and without routine AI use over time and track outcomes that actually matter, such as reading comprehension, writing fluency, attention, persistence and independent problem-solving.
Instead, convenience, efficiency and market momentum are driving the integration.
Don’t blame the kids, or their parents
I worked with a student who admitted to cheating throughout high school and into college. He told me how AI solved his terror of “the empty page.” He loved his facility with AI, because every paragraph sounded smooth and competent. But he recently left college, not with a degree, but with a panic disorder. His crash and burn, he said, came when teachers asked him to talk through his papers out loud. He was flooded with anxiety and described himself as feeling like a “100 percent imposter.”
Let’s be clear. This is not about blaming kids. Teenagers are doing exactly what teenagers do. They find shortcuts, test boundaries and reduce stress. School is one of the biggest pressure points in teen life, and AI offers relief. Of course they are using it. They did not create these tools or the incentives behind them. They are adapting to the environment adults built for them.
The real question is who is willing to slow this down. Based on what I frequently see in my practice, most parents are largely unaware of how deeply AI is already embedded in their child’s schoolwork, or they are reluctant to intervene because they have been told that mastering AI is essential for future jobs in an automated economy. In that vacuum of clarity and leadership, no one is drawing a firm line to protect the developmental work that only kids can do for themselves.
Because if we allow AI to routinely replace the struggle of reading, writing, organizing and thinking, rather than occasionally supporting it, we risk raising young people who look competent on paper but are less practiced at sustained effort, less confident in their own reasoning, and more dependent on external tools to think for them.
And that matters — not just for school, but for adulthood. Learning how to tolerate confusion, wrestle with ideas and find one’s own words is not busywork. It is how minds are built.
Once those habits are wired, they are difficult to undo.
Questions parents should ask schools about AI
- When are students allowed to use AI and when are they expected to work independently? Ask for concrete examples. Vague assurances are less helpful than clear boundaries.
- Which core skills are students required to practice without AI support? In particular, ask about reading comprehension, writing drafts, math problem-solving and synthesizing ideas.
- How does the school distinguish between using AI to support learning versus replacing it? What guardrails are in place to ensure students still do the cognitive heavy lifting?
- What evidence guides the use of AI at different developmental stages? Ask what research the school relies on and what unknowns they are actively monitoring.
- How are parents informed when new AI tools are introduced, and do families have options to opt out? Transparency and choice matter, especially for younger students.
How to talk with your child about AI and schoolwork
The goal isn’t to interrogate or scare kids. It’s to open a thoughtful, ongoing conversation that keeps learning as the central value.
- Lead with curiosity, not accusation. Try: “I’m curious how AI fits into your schoolwork right now. When do you find it most helpful?” Avoid starting with surveillance or threats. Teens shut down quickly when they feel policed.
- Name the difference between help and replacement. Try: “There’s a difference between getting help and letting something do the thinking for you. Let’s talk about where that line is.” This frames the issue as judgment and skill-building, not rule-breaking.
- Focus on skill building, not cheating. Ask: “Which parts of an assignment do you think are meant to build your brain?” Connect the conversation to long-term abilities like writing clearly, explaining ideas out loud and sticking with hard problems.
- Be explicit about your family expectations. Teens need clarity. Try: “In our family, we expect you to do the first pass of thinking and writing yourself.” You can allow AI as a tool after effort, not instead of it.
- Acknowledge the pressure they’re under. Say: “I get why this is tempting. School is stressful, and AI makes things easier.” Empathy makes it far more likely they’ll be honest with you.
- Keep the conversation ongoing. AI is evolving fast. Treat this like social media or phones — not a one-time talk, but a series of check-ins as expectations and tools change.
These conversations work best when kids feel respected, understood and guided. Lectures and shaming result in shutdowns. The message is simple: Learning how to think still matters, even in an AI world.
Take action
Here is where parents may have more power than they realize. Parent-led movements have already pushed schools to rethink smartphone access and the mental health costs of unregulated tech use. A 2025 Pew Research Center survey found that roughly three-quarters of U.S. adults now support banning cellphones during the school day, a sharp rise that shows how quickly collective concern can translate into real policy shifts.
This is not a call to ban AI or pretend it does not exist. It is a call for a pause: a coordinated, collective pause to ask harder questions before scaling.
If we learned anything from the social media era, it is that silence is not neutral. When no one draws a line, children pay the price. A collective pause now may be the most responsible move we can make for the thinkers, learners and citizens our children are becoming.