More than half of teenagers in the United States have created or received sexually explicit images generated by artificial intelligence (AI), according to a new study. The findings reveal how rapidly AI tools are being adopted for what researchers call “nudification” – the creation of realistic nude images of individuals, often without their consent. This trend presents complex legal and ethical challenges, as the images themselves can be illegal even if they do not depict real people.
AI-Native Generation and Consent Issues
Researchers at George Mason University interviewed 557 U.S. teens (ages 13–17) about their experiences with AI-generated sexual content. The study found that 55.3% had used AI tools to create such images, either of themselves or others, while 54.4% reported receiving them. Alarmingly, 36.3% said someone had created a non-consensual nude image of them, and 33.2% stated that such images had been distributed without their permission.
“Teens are no longer just digital natives but AI-natives,” says Chad Steel, the study’s lead author. “’Nudification’ and GenAI apps are their new ‘sexting’, only with more challenging issues surrounding consent.”
Gender Disparities and Victim Impact
The study revealed that male participants were more likely to create explicit images, regardless of consent. Victims of AI-powered sexual exploitation reported trauma similar to that of other forms of child sexual abuse, including fear, hypervigilance, avoidance of social media, and a sense of powerlessness. The psychological impact can be lasting and life-disrupting.
Implications and Next Steps
The prevalence of AI-generated sexual images among teens underscores the need for urgent action. The ease with which these tools can be used to exploit individuals, combined with the difficulty of enforcing consent in this context, raises significant legal and ethical concerns. Policymakers, tech companies, and educators must work together to address this emerging threat, protect vulnerable youth, and ensure that AI development aligns with responsible use.
The study highlights how quickly generative AI has normalized harmful behaviors among young people, demonstrating that the technology’s potential for abuse outpaces its regulation.