AI Is Still an Unknown Country — and Teens Are Its Pioneers

New study suggests adolescents worry about using AI ethically, but they don’t know what the rules are.

By Maggie Hicks | Jun 13, 2025

When artificial intelligence tools like ChatGPT were first introduced for public use in 2022, people set up rules around AI without a good understanding of what it truly was or how it would be used, recalls Gillian Hayes, vice provost for academic personnel at the University of California, Irvine.

The moment felt akin to the industrial or agricultural revolutions, Hayes says.

“People were just trying to make decisions with whatever they could get their hands on.”

Seeing a need for more and clearer data, Hayes and her colleague Candice L. Odgers, a professor of psychological science and informatics at UC Irvine, launched a national survey to investigate the use of AI among teens, parents and educators. Their goal was to collect a broad set of data that could be used to track how AI use and attitudes toward the technology shift over time.

The researchers partnered with foundry10, an education research organization, to survey 1,510 adolescents between the ages of 9 and 17 as well as 2,826 parents of K-12 students in the United States. They then ran a series of focus groups with parents, students and educators to gain a better understanding of what participants knew about AI, what concerned them and how it affected their daily lives. The researchers finished collecting data in the fall of 2024 and released some of their findings earlier this year.

The results came as a surprise to Hayes and her team. They found that many of the teens in the study were aware of the concerns and dangers surrounding AI, yet didn’t have guidelines to use it appropriately. Without this guidance, AI can be confusing and complex, the researchers say, and can prevent both adolescents and adults from using the technology ethically and productively.

Moral Compasses

Hayes was especially surprised by how little the adolescents in the survey used AI, and by the way they used it. Only about 7 percent of them used AI daily, and the majority used it through search engines rather than chatbots.

Many teens in the survey also had a “strong moral compass,” Hayes says, and were confronting the ethical dilemmas that come with using AI, especially in the classroom.

Hayes recalls one teen participant who self-published a book that used an AI-generated image on the cover. The book also included some AI-generated content, but was mainly original work. Afterward, the participant’s mom, who helped them publish the book, discussed the use of AI with the student. It was OK to use AI in this scenario, the mom said, but they shouldn’t use it for writing school assignments.

Young people often aren’t trying to cheat; they just don’t necessarily know what cheating with AI looks like, Hayes says. For instance, some wondered why they were allowed to have a classmate review their paper but couldn’t use Grammarly, an AI tool that reviews essays for grammatical errors.

“For the vast majority of [adolescents], they know cheating is bad,” Hayes says. “They don’t want to be bad, they’re not trying to get away with something, but what is cheating is very unclear and what is the source and what isn’t. I think a lot of the teachers and parents don’t know, either.”

Teens in the survey were also concerned about how using AI might affect their ability to develop critical thinking skills, says Jennifer Rubin, a senior researcher at foundry10 who helped lead the study. They recognized that AI was a technology they’d likely need throughout their lives, but also that using it irresponsibly could hinder their education and careers, she says.

“It’s a major concern that generative AI will impact school development at a really developmentally critical time for young people,” Rubin adds. “And they themselves also recognize this.”

Equity a Nice Surprise

The survey results did not reveal any equity gaps among AI users, which came as another surprise to Hayes and her team.

Experts often hope that new technology will close achievement gaps and improve access for students in rural communities, those from lower-income families and those in other marginalized groups, Hayes says. Typically, though, it does the opposite.

But in this study, there seemed to be few social disparities. While it’s hard to tell whether this was unique to the participants who completed the survey, Hayes suspects it may have to do with the novelty of AI.

Usually, it’s wealthier or college-educated parents who teach their children about new technology and how to use it, Hayes says. With AI, though, no one yet fully understands how it works, so parents can’t pass that knowledge down.

“In a gen-AI world, it may be that no one can scaffold yet so we don’t think there’s any reason to believe that your average higher-income or higher-education person has the skills to really scaffold their kid in this space,” Hayes says. “It may be that everyone is working at a reduced capacity.”

Throughout the study, some parents didn’t seem to fully grasp AI’s capabilities, Rubin adds. A few believed it was simply a search engine, while others didn’t realize it could produce false output.

Parents’ opinions also differed on how to discuss AI with their children. Some wanted to fully embrace the technology, while others favored proceeding with caution. Some thought young people should avoid AI altogether.

“Parents are not [all] coming in with a similar mindset,” Rubin says. “It really just depended on their own personal experience with AI and how they see ethics and responsibility regarding abuse [of the technology].”

Establishing Rules

Most of the parents in the study agreed that school districts should set clear policies on the appropriate use of AI, Rubin says. While this can be difficult, it’s one of the best ways for students to understand how the technology can be used safely, she says.

Rubin pointed to districts that have begun implementing a color-coded system for AI use. A green use might be working with AI to brainstorm or develop ideas for an essay. Yellow uses fall into more of a gray area, such as asking for a step-by-step guide to solving a math problem. A red use would be inappropriate or unethical, such as asking ChatGPT to write an essay based on an assigned prompt.

Many districts have also facilitated listening sessions with parents and families to help them navigate discussing AI with their children.

“It’s a fairly new technology; there are a lot of mysteries and questions around it for families who don’t use the tool very much,” Rubin says. “They just want a way where they can follow some guidance provided by educators.”

Karl Rectanus, chair of the EDSAFE AI Industry Council, which promotes the safe use of AI, encourages educators and education organizations to use the SAFE framework when approaching questions about AI. The framework asks whether the use is Safe, Accountable, Fair and Effective, Rectanus says, and can be adopted both by large organizations and teachers in individual classrooms.

Teachers have many responsibilities so “asking them to also be experts in a technology that, quite frankly, even the developers do not understand fully is probably a bridge too far,” Rectanus says. Providing straightforward questions to consider can “help people proceed when they don’t know what to do.”

Rather than banning AI, educators need to find ways to teach students safe and effective ways to use it, Hayes says. Otherwise students won’t be prepared for it when they eventually enter the workforce.

At UC Irvine, for example, one faculty member assigns oral exams to computer science students. Students turn in code they’ve written and take five minutes to explain how it works. The students can still use AI to write the code — as professional software developers often do — but they must understand what the technology produced and how the code works, Hayes says.

“I want all of us old folks to be adaptable and to really think ‘what truly is my learning outcome here and how can I teach it and assess it, even in a world in which there’s generative AI everywhere?’” Hayes says, “because I don’t think it’s going anywhere.”
