Espelage joins APA advisory panel on report calling for guardrails, education to protect adolescent AI users

Portrait of Dorothy Espelage

Leading anti-bullying and school safety researcher Dorothy Espelage contributes to a report that cites benefits and dangers of new technology.

According to a report released June 3 by the American Psychological Association, the effects of artificial intelligence on adolescents are nuanced and complex, and developers must prioritize features that protect young people from exploitation, manipulation, and the erosion of real-world relationships. Dorothy Espelage, Ph.D., William C. Friday Distinguished Professor of Education and a global leader in bullying prevention and child well-being research, served on the expert advisory panel that wrote the report.

“AI offers new efficiencies and opportunities, yet its deeper integration into daily life requires careful consideration to ensure that AI tools are safe, especially for adolescents,” according to the report, entitled “Artificial Intelligence and Adolescent Well-being: An APA Health Advisory.” “We urge all stakeholders to ensure youth safety is considered relatively early in the evolution of AI. It is critical that we do not repeat the same harmful mistakes made with social media.” 

The report follows two other APA reports that provide recommendations on adolescent social media use and healthy video content. Espelage also served on the advisory panel that produced the social media recommendations.

The AI report notes that adolescence – which it defines as ages 10-25 – is a long developmental period and that age is “not a foolproof marker for maturity or psychological competence.” It is also a time of critical brain development, which argues for special safeguards aimed at younger users.

“Like social media, AI is neither inherently good nor bad,” said APA Chief of Psychology Mitch Prinstein, Ph.D., a professor in the UNC Department of Psychology who spearheaded the report’s development. “But we have already seen instances where adolescents developed unhealthy and even dangerous ‘relationships’ with chatbots, for example. Some adolescents may not even know they are interacting with AI, which is why it is crucial that developers put guardrails in place now.” 

The report makes a number of recommendations to make certain that adolescents can use AI safely. These include: 

  • Ensuring there are healthy boundaries with simulated human relationships. Adolescents are less likely than adults to question the accuracy and intent of information when it comes from a bot rather than a human. 
  • Creating age-appropriate defaults in privacy settings, interaction limits, and content. This will involve transparency, human oversight and support, and rigorous testing, according to the report. 
  • Encouraging uses of AI that can promote healthy development. AI can assist in brainstorming, creating, summarizing and synthesizing information – all of which can make it easier for students to understand and retain key concepts, the report notes. But it is critical for students to be aware of AI’s limitations. 
  • Limiting access to and engagement with harmful and inaccurate content. AI developers should build in protections to prevent adolescents’ exposure to harmful content. 
  • Protecting adolescents’ data privacy and likenesses. This includes limiting the use of adolescents’ data for targeted advertising and the sale of their data to third parties. 

The report also calls for comprehensive AI literacy education, including integrating it into core curricula and developing national and state guidelines for literacy instruction.

“Many of these changes can be made immediately, by parents, educators, and adolescents themselves,” Prinstein said. “Others will require more substantial changes by developers, policymakers, and other technology professionals.” 

This story was adapted from an American Psychological Association (APA) health advisory. 