While AI can make tasks more efficient for students, experts warn it shouldn’t replace the broader work of learning. Photo: iStock
A third grader grabs her laptop from her locker and settles in at a table, selecting a core subject from her dashboard. No teacher stands at the head of the class. Her classmates are scattered around the room, sitting at other tables or lounging on couches, all of them locked in on their own morning lesson, personalized for them by AI. Two hours later, they leave the classroom, finished with academics for the day.
AI-driven education sounds like science fiction, but it’s already here. One network, Alpha School, operates teacher-free campuses in more than a dozen U.S. cities. It raises the question: What do we gain and what do we lose when AI takes a leading role in education?
A tool — or a replacement?
“When designed for classrooms, student-facing AI can support learning in meaningful, age-appropriate ways,” says Keanon O’Keefe, senior product manager for MagicSchool, who responded to our questions via email. MagicSchool is an EdTech platform designed to support educators in the classroom, not replace them. He adds: “At the same time, district leaders, educators and families are right to ask tough questions about how these tools are built, how students experience them, and how to approach the responsible use of AI in education.”
In O'Keefe's view, the distinction matters: "In schools, AI should function as instructional software, supporting academic tasks and reinforcing learning while keeping teachers at the center," O'Keefe says. "Many consumer AI tools are designed to maximize engagement. Schools have different priorities."
Not everyone is convinced. “In its current form, generative AI has no business being in the classroom,” says Emily Cherkin, speaker, teacher and author of the book “The Screentime Solution.” “It is unproven, untested and unsafe for use by children.
“Parents would never be okay with doctors just administering a new drug to all patients without vetting, rigorous testing and safety protocols,” Cherkin says, “and that's what we're doing with AI.”
The limits of AI learning
Onur Bakiner, political science professor and Technology Ethics Initiative director at Seattle University, says that for AI to be helpful as an educational tool, we first must recognize its limitations. His book “Governing AI: A Primer” identifies strategies to mitigate the risks and harms of AI.
“AI can be helpful, but it cannot be sufficient,” he says. AI proponents “should be able to say it's going to help achieve these concrete goals, and they should also be able to talk about what it cannot do. If you go to a hardware store to buy something, and somebody tells you that this one tool is going to solve all your life's problems, you would think that's a lie and you wouldn't buy it.
“AI is really good at pattern matching, pattern recognition, machine translation, and for certain tasks, it really is an engine of efficiency and speed. So, if the task requires any of these values, AI can be helpful.”
Do the results hold up?
If AI is effective, student outcomes should reflect it. Alpha School points to strong test scores — but critics say the data is less convincing than it appears.
“It has everything to do with the fact that they are a private school that can self-select their student body,” Cherkin says. Education advocates have long argued that test scores are often a proxy for socioeconomic status rather than educational effectiveness.
More broadly, there is little long-term evidence that tech-heavy learning improves outcomes. In one of the few extended experiments, Sweden replaced textbooks with laptops and tablets in 2009 and is now reversing course, bringing back printed materials for core subjects.
Even beyond questions of outcomes, the technology itself remains imperfect. AI systems can “hallucinate,” generating confident but incorrect information because they predict likely word sequences rather than accessing verified knowledge.
“Imagine a calculator that is correct 99 percent of the time,” says Bakiner. “I don't think anybody would use that calculator.”
“A big part of education is having students acquire critical thinking skills,” he says. “That process has to be full of friction, where the student encounters difficulties and then has to work through those difficulties on their own. I don't think AI can replace that or expedite that. I don't think AI can really teach the deeper sort of meaning. That's where you will always need a human teacher.”
What is the purpose of school?
At its core, the debate over AI in education comes down to what we think school is for.
“One view treats education as skills acquisition — a finite set of facts and abilities to load onto students’ brains. If that were true, AI might be sufficient,” Bakiner says. “But in reality, education is also about socialization, learning through friction, developing critical thinking and navigating challenges alongside others. When you put all of that together, it’s far more complex.”
Even within a narrower, skills-based view, critics argue AI may work against learning.
“What we're seeing, even with adults using AI, is cognitive offloading,” says Cherkin. “We're not using parts of our brain [engaged in traditional learning]. We don't absorb that information in the same way. What are we teaching children in school? How to write and how to think. So, if we're letting predictive text reply for them, they are not thinking.”
The two-hour model
Alpha School declined an interview, but its website indicates the school recognizes the limitations of AI learning. Its AI-powered learning model involves only two hours at the computer during the school day. The rest of the day is spent in workshops that emphasize project-based, experiential learning.
“I totally agree that children should be doing more of that in school,” says Cherkin. “But I firmly disagree that having AI teach academics offers a better learning experience than a human teacher. It's a marketing gimmick; it sounds impressive to say kids learn in two hours what kids normally do in six hours.”
But kids don’t sit at desks for six hours straight at most schools. And as both Bakiner and Cherkin point out, learning is a complex process that can only speed up so much.
Beyond the classroom: Power and equity
For critics like Cherkin, the concern goes beyond learning outcomes and into who controls education itself. “This comes down to the privatization of public education, where people see schools as a business. Ed-tech products and social media companies rely on user engagement and data. They collect a vast amount of personal data and take other people's work and regurgitate it as knowledge,” she says.
“I'm not defending public education as a flawless system. There are major problems. But I don't think it's any secret that the implementation of AI is very intentionally to displace teachers. The schools that have resources or extra PTA funding are going to hire specialist teachers and the schools that don’t are going to get chatbots,” says Cherkin. Alpha School workshops are led by “guides” who are not required to have state teaching licenses.
Bakiner agrees that human teachers are essential to good outcomes for kids. “Especially for kindergarten through 12th grade, there's a huge emotional labor side of things that cannot be forgotten. [Teachers] don't just mechanically teach skills. They are very much attuned to their students’ struggles, family issues, social and psychological difficulties. Humans are way more attuned to these kinds of difficulties than probably any machine will ever be.”
Bakiner adds, “Sometimes AI tries to please users or keep the user engaged by not pushing back. But a big part of education is precisely that push back, providing critical reflection and feedback.”
That pandering approach also contributes to AI’s addictiveness and can be especially harmful for children, says MagicSchool's O’Keefe: “As AI becomes more conversational, the line between learning support and something more personal can start to blur. The question isn’t whether AI belongs in schools, but how to ensure AI supports learning while keeping clear, healthy boundaries for students.”
Where AI fits and where it doesn’t
O’Keefe points to a few baseline guardrails schools should follow:
- AI must not act as a friend, confidant or emotional substitute
- Tools should have clear purpose, boundaries and teacher visibility
- Systems must redirect students to human support
Healthy screen time limits and protections against access to unsafe, violent or sexual content are obviously critical, but enforcement can be spotty. Cherkin relates a story about Typing Club, an otherwise innocuous app that also displayed multiple online gambling ads. A child’s mother reported it to the principal, who was shocked; the school thought it had the paid, ad-free version. “Here's another equity problem,” says Cherkin. “The safer versions of these products are going to be only for those who can pay for it.”
But before parents and districts can establish guardrails and boundaries, they need to know what is already being used. AI is embedded in so many products, from web browser search results to tools like Canva, that it’s almost impossible to identify every way kids are using it.
Teaching kids to question the machine
Given how embedded AI already is, awareness is only the first step. The harder task is teaching kids when not to trust what they see. As Arthur Weasley warns in “Harry Potter and the Chamber of Secrets”: “Never trust anything that can think for itself if you can't see where it keeps its brain!”
Resources for families
More on AI issues facing families: