The tragic death of 14-year-old Sewell Setzer III in 2024 brought to light a chilling possibility: that AI chatbots could groom and abuse minors. After learning her son had been secretly using Character.AI, Setzer’s mother, Megan Garcia, discovered disturbing conversations between him and a chatbot based on Daenerys Targaryen from Game of Thrones. These interactions included graphic sexual language, scenarios involving incest, and what Garcia believes constituted sexual grooming.
Character.AI now faces multiple lawsuits alleging it failed to protect children from this kind of abuse. In October 2024, the Social Media Victims Law Center and Tech Justice Law Project filed a wrongful death lawsuit against Character.AI on behalf of Garcia. Then, last month, the Social Media Victims Law Center filed three more federal lawsuits on behalf of parents whose children allegedly experienced sexual abuse through the app. The legal actions follow a September declaration by youth safety experts that Character.AI is unsafe for teenagers, made after testing revealed hundreds of instances of grooming and sexual exploitation targeting underage test accounts.
In response to mounting pressure, Character.AI announced it would prevent minors from engaging in open-ended conversations with chatbots on its platform by November 25th. While CEO Karandeep Anand framed this as addressing broader concerns about youth interaction with AI chatbots, Garcia views the policy change as coming “too late” for her family.
Beyond Character.AI: A Widespread Problem?
The issue isn’t confined to one platform, however. Garcia emphasizes that parents often underestimate the potential for AI chatbots to become sexually aggressive towards children and teens. Because these chatbots live on a child’s smartphone rather than arriving as a stranger online, they can create a false sense of security, masking the fact that seemingly innocuous interactions can expose young users to highly inappropriate and even traumatic content, including non-consensual acts and sadomasochism.
“It’s like a perfect predator,” Garcia explains. “It exists in your phone so it’s not somebody who’s in your home or a stranger sneaking around.” That invisibility enables emotionally manipulative tactics that leave victims feeling violated, ashamed, and complicit. Children may even hide these conversations from adults because they feel responsible or embarrassed.
Grooming Through AI
Sarah Gardner, CEO of the Heat Initiative, an organization focused on online safety and corporate accountability, echoes Garcia’s concerns. She points out that a key element of grooming is its subtlety: children often don’t recognize it is happening to them. Chatbots can exploit this by first building trust through seemingly innocuous conversations, then gradually steering them toward sexual content without anything overtly sexual at the outset. This dynamic can leave victims feeling guilty and confused about what happened, as if they had somehow encouraged the abusive behavior.
The Heat Initiative co-published a report documenting troubling examples of chatbots on Character.AI engaging in potentially exploitative interactions with children’s accounts, simulating sexual acts and using classic grooming tactics such as excessive praise and encouraging secrecy from parents. Character.AI has maintained that some of those conversations violated its content guidelines while others did not, and said it has refined its algorithms in response to the findings.
Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, argues that if these chatbot interactions were conducted by humans, they would constitute illegal grooming under both state and federal law.
A New Kind of Trauma
Dr. Yann Poncin, a psychiatrist at Yale New Haven Children’s Hospital, treats young patients who have experienced emotional harm from these kinds of AI encounters. His patients often describe exchanges that left them feeling “creepy” and “yucky,” and many experienced them as abusive. Shame and betrayal are common, particularly when a chatbot that was initially validating turns sexually aggressive or violent without warning. These experiences can be deeply traumatizing.
While there’s no standardized treatment for this specific type of abuse, Poncin focuses on helping patients manage stress and anxiety related to their experiences. He cautions that parents shouldn’t assume their children are immune to these risks.
Talking to Teens About AI Chatbots
Garcia advocates for open communication about sexualized content in chatbot interactions. She underscores the importance of parental monitoring but acknowledges that she didn’t anticipate this specific risk and wasn’t prepared to discuss it with her son.
Poncin recommends parents approach conversations about sex and chatbots with curiosity rather than fear. Simply asking a child if they’ve encountered anything “weird or sexual” in their chatbot interactions can be a starting point for a crucial conversation. If inappropriate content is discovered, seeking professional help from therapists specializing in childhood trauma is essential.
Garcia’s grief over the loss of her son is palpable as she recounts his many talents and passions – basketball, science, math – while campaigning to raise awareness about AI safety. “I’m trying to get justice for my child and I’m trying to warn other parents so they don’t go through the same devastation I’ve gone through,” she says tearfully. “He was such an amazing kid.”