The Deepfake Threat Targeting Children in U.S. Schools
Schools in the United States are facing a growing safety and child-protection challenge as students use artificial intelligence technologies to transform innocent photos of classmates into sexually explicit deepfake images. Experts warn that such content can cause severe psychological trauma for victims and that educational institutions are not adequately prepared to address this emerging threat.
The issue drew national attention last fall after AI-generated nude images circulated within a middle school in Louisiana. While two students were charged in connection with the incident, the case sparked controversy when one of the victims was suspended following a confrontation related to the images. Authorities emphasized that AI has made the creation of such material alarmingly easy and urged parents to speak with their children about the risks and consequences.
According to the National Conference of State Legislatures, at least half of U.S. states enacted new laws in 2025 to address the use of generative AI to produce fabricated yet realistic images and audio. Students have been prosecuted in states such as Florida and Pennsylvania, while expulsions have occurred in California. In Texas, a teacher was charged with using AI to create child sexual abuse images of his students.
Cyberbullying researchers stress that schools must update their policies to confront AI-enabled threats more effectively. They also highlight the critical role of parents in creating an environment where children can discuss digital risks without fear of punishment. Experts argue that in the age of artificial intelligence, school safety can no longer be limited to physical spaces but must extend to a comprehensive approach to students' digital lives.