Unless you’ve been meditating in a cave since spring, you are probably aware that the internet is freaking out about the impending robot uprising. Until recently the stuff of science fiction, artificial intelligence (AI) has suddenly become a middle school homework issue. In May, my child’s English teacher sent out an email that read:
“ChatGPT is not an acceptable way to complete assignments for Language Arts. It has pros and cons in other contexts, but in my class, I hope you’ll understand that when I find students using ChatGPT, they’ll see significant negative consequences in their grades.”
Parents must gain a better understanding of those pros and cons if we’re going to set healthy boundaries around artificial intelligence. Because while parents are trying to understand what ChatGPT even is, kids are already using it.
What is ChatGPT?
ChatGPT is an artificial intelligence chatbot developed by the company OpenAI. Think of it as Alexa or Siri with an advanced degree (although the technically minded are quick to point out that it works differently under the hood). Released in November 2022, ChatGPT is technically still in the testing phase and currently free to use, but a paid subscription option is available and may be required in the future. Although ChatGPT gets most of the press, it’s not the only generative AI out there. There are at least a half dozen would-be competitors in different stages of evolution, including ChatSonic and Google’s Bard. There is even one marketed to kids, PinwheelGPT, which comes with parental controls. All of them are designed to answer questions and follow instructions to complete tasks such as writing emails, articles, code — and homework assignments.
In one “interview,” ChatGPT explained itself:
“AI is like having a super-smart computer friend. It learns from information and makes smart decisions. The more it learns, the better it gets at solving problems. But just like people, it can make mistakes and show biases.… AI is a tool that requires careful development, responsible use, and ongoing monitoring.... It is crucial to consider ethical implications, address biases, and ensure transparency and accountability in AI systems to harness its potential for the benefit of society.”
When asked how it would affect teenagers’ lives in the future, ChatGPT replied:
“AI will bring both opportunities and challenges for teenagers. By embracing AI, developing complementary skills, and understanding its implications, teenagers can position themselves to thrive in a future where AI plays an increasingly prominent role in various aspects of life.”
Despite the hyperbole about the revolutionary impacts of AI, for parents, the areas of concern remain the same as for any other internet-related technology: data safety, inappropriate content and digital citizenship.
ChatGPT collects a lot of data from its users. Every single interaction with ChatGPT is recorded. It also collects geolocation data, IP addresses, transaction histories and all the cookies. Even if OpenAI only uses that data to train its AI, your information may not be secure. Users have found ways to query ChatGPT and get it to reveal other users’ input. Personal data risks are not unique to ChatGPT (think of all those online personality quizzes), so parents need to teach kids not to share personal information — such as their full name and address — online.
Another parental concern with AI is the risk of exposure to inappropriate content. If you supervise your child’s use of AI, you can reduce the risk of inappropriate content by specifying kid-friendly answers in the prompts. Without such guidelines in the prompts, kids’ innocent queries can generate results that range from vaguely disturbing to wildly inappropriate. Aside from discovering violent imagery and hate speech, it’s almost inevitable that tweens and teens will use AI to generate dirty jokes, ask questions about sex they are embarrassed to ask an adult or look for porn.
That leads to a new kind of inappropriate content risk: misinformation and falsified images. In her podcast Brave Writer, Julie Bogart warns against “the volume of misinformation that can be created because ChatGPT does not vet what it offers you. It just acts as a crawling tool to collect whatever’s out there. So if there’s a lot of misinformation on a topic, it’s just going to gather it and put it in nice paragraphs for you.”
As Bogart points out, generative AI is really just very advanced predictive text, but sometimes it does generate something new — and wholly inaccurate. Known as hallucinations, AI-generated falsehoods have even included professional-looking citations of references that don’t exist.
Although it would be possible to watermark AI-generated content, there doesn’t seem to be much interest from AI companies in doing so. Companies like OpenAI maintain that the guardrails built into their systems, which block hate speech and requests for criminal advice, should be sufficient. But tests have shown that these guardrails are weakening: The newest version of ChatGPT spreads more misinformation than the last one. ChatGPT may be able to teach your kids how the water cycle works, but it can also “explain” how the lunar landing and 2020 election results were faked. Teaching kids media literacy skills to recognize when algorithms are feeding them false information should be a top priority both at home and at school.
Bullying is an age-old problem that has pushed itself into the digital world, and bullies are already finding ways to use ChatGPT to send abusive messages and spread rumors. AI can generate and distribute nearly untraceable anonymous messages en masse — and worse, messages made to look as though they came from someone else. AI art tools can be used to create embarrassing deepfake images of classmates almost as easily as funny memes.
When it comes to homework, ChatGPT raises both educational and ethical concerns. Seattle Public Schools eighth-grade English teacher Nichole Lau explained in her email to parents, “An AI-generated paper is basically equivalent to cutting and pasting text off the internet, or to not doing the assignment at all. We’re learning by doing. Students who aren’t writing aren’t learning.”
The technological context may make it harder for kids to realize that turning in homework they didn’t produce themselves is cheating. But AI-generated homework raises a second ethical issue: It could also be considered stealing. A classmate who slips you the answers for a test is complicit in your cheating. But when AI recombines information from the internet to generate new art or history reports, the original authors have no say in it.
In her podcast notes about ChatGPT, Bogart writes, “There are many unsettling questions about what this means for creators and writers. Where is it getting its information? Who could you unknowingly be plagiarizing by using it? As a structure, it’s appropriative.”
It’s a point on which reasonable people can disagree. Some people feel that this is not essentially different from human creativity; that there is nothing new under the sun and most of our so-called original ideas are simply iterations of things that others created before us. It is incredibly valuable to have a discussion about originality and plagiarism with your child before letting them use AI. Even very young children can explore this question when you ask them, “Where did you get the idea for this drawing?” and “How would it feel if someone copied your story but changed some parts?”
It’s always tempting to avoid complex and problematic topics. Some local schools have already tried to ban ChatGPT. But just as comprehensive sex ed is a better approach than silence, it’s better to teach kids about technology than to avoid it. Kids need these lessons because technology is going to be a significant part of their adult lives. And generative AI does provide opportunities for learning. Whether you’re looking for recipes or DIY instructions, ChatGPT can be much more efficient — and less distracting — than sorting out practical guidelines from personal stories on blogs and YouTube channels.
Used wisely, AI can be helpful in doing homework, too. Apparently, ChatGPT is better than search engines when you’re looking for answers to a specific question, and it’s great for finding books when you don't remember the title. Kids have used it as a “proofreader” to identify bugs in their coding projects. There are even appropriate ways to use AI in writing some of those language arts essays. A student working on an essay could ask ChatGPT, “What would someone who disagrees with my thesis say?” and then address those arguments in their paper. From creative writing prompts to personalized bedtime stories, there are myriad ways that AI can be used mindfully for learning. The New York Times presented a list of 35 creative possibilities.
A healthy approach
Fortunately, you don’t really need a deep technological understanding of artificial intelligence to parent around it. For parents, the principles and issues raised by AI programs like ChatGPT are the same ones we’re already dealing with. General parental best practices for the internet should be extended to ChatGPT and other AI tools. That means AI should be accessible to younger kids only through a parent’s account and with direct supervision that tapers off with age and practice. If you haven’t already established clear boundaries and guidelines for your kids’ cell phone and internet use, now is the time to do so.
As ChatGPT itself said, young people need to develop the “complementary skills” to responsibly use AI. Chief among these is critical thinking. Not only will critical thinking skills help young people evaluate the ethical questions that arise from the use of AI, but they can also help them identify and resist the misinformation that AI seems poised to spread.
Kids who have learned empathy, good citizenship and “upstander” power in person will be less likely to engage in cyberbullying and more likely to recognize it and call it out when they see it. On the flip side of empathy, small children anthropomorphize everything, and since AI is specifically designed to sound like a person, it’s easy for even older users to start thinking of it as someone, rather than something. But kids need to understand that chatbots are not human or even sentient, and it is not a good idea to ascribe human intent or values to them. As Mr. Weasley reminded Ginny in “The Chamber of Secrets,” you should never trust anything that can think for itself if you can't see where it keeps its brain. Talk to your kids about reliable versus unreliable sources, and how the content they see online is subject to the biases of its creators and to algorithms that often favor the most extreme views.
Kids also need to understand that what happens in ChatGPT does not stay in ChatGPT. Nobody knows where their OpenAI data goes or who might get hold of it. So, like everything else on the internet, kids should assume that it could show up anywhere. It should also be among a family’s internet rules that parents have access to ChatGPT logs.
It’s natural for some people to have knee-jerk reactions against new technologies, while others are unquestioningly enthusiastic about them. But technology is just a tool, and we need to remember who is in charge. Tools can be dangerous, but approached mindfully with appropriate preparation, families can use artificial intelligence to promote their kids’ education and even have fun with it. And if you’re really not a fan of ChatGPT, maybe you can offer CatGPT (the GPT-Meow version of ChatGPT) as a distraction your kids will prefer over the original.
Resources
Parents’ Guide to Critical Thinking – Broken down by age group, the Reboot Foundation’s guide gives parents a better understanding of critical thinking and information on how to help their children develop critical thinking skills.
PinwheelGPT – Free kids’ version of an AI chat app with parental controls.
ChatGPT in School – An online training program for teachers that prepares them to lead “Day of AI” activities in their classrooms.