AI Deepfakes Are Targeting Teenage Girls. Schools and Lawmakers Must Act Now.

It’s a story that’s all too familiar to teenage girls and women. By the time most victims find out, the images are already everywhere. A text from a friend, a message from someone they don't know. Young girls are discovering that ordinary photos from their own social media have been fed into AI tools, turned into explicit images, and distributed. But a federal lawsuit filed this week against Elon Musk's xAI offers a long-overdue opportunity for change.

Attorneys for the three Tennessee teenagers identified as Jane Does argue the company "saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children," according to Rolling Stone. This first-of-its-kind lawsuit is seeking class-action status, and the case could ultimately represent thousands of minors, as the abuse it describes has been documented in communities from Pennsylvania to South Carolina to Louisiana.

A Crisis Playing Out in Schools Everywhere

The pattern is strikingly similar and transcends geography. It’s one Laura Bates, founder of the Everyday Sexism Project, wrote about in her recent book, The New Age of Sexism: How AI and Emerging Technologies Are Reinventing Misogyny. A student, almost always a girl, learns that images of her are circulating. Someone saw them in a group chat or received them in a message. In courtrooms and to reporters, victims describe nearly universal feelings of helplessness and of being permanently violated.

The helplessness comes, in part, from how ill-equipped schools and families are to respond to these episodes. A deepfake incident in South Carolina exposed the gap between the technology being used and the support systems in place for victims. To better protect victims, J.B. Branch, AI Governance and Technology Policy Counsel at Public Citizen, said we need laws that treat deepfake abuse as a real child safety and consumer protection issue. Schools need policies that take these cases seriously.

“That means requiring fast removal of harmful synthetic content, especially involving minors, stronger accountability for platforms that allow abuse to spread, and obligations on AI developers to build safeguards before products are released,” Branch said. We can no longer wait until after harm becomes widespread.

Unprepared to Respond to the Dark Side of Technology

In Lancaster County, Pennsylvania, two 16-year-old boys admitted this month to 59 felony counts of manufacturing child sexual abuse material. They had used AI tools to "morph" Instagram photos of 48 female classmates and 12 other acquaintances into explicit images, generating 347 files that were shared in a Discord chatroom. Pennsylvania Attorney General Dave Sunday called it "a weaponization of technology to victimize unsuspecting children who had photos online."

The school had received an anonymous tip about the abuse in November 2023. It conducted an internal investigation, found no corroboration, and did not report the matter to authorities. The headmaster and board president were eventually dismissed. Parents filed a separate civil lawsuit against the institution. It took more than a year for the criminal case to reach a courtroom.

In Louisiana, a Lafourche Parish incident prompted state lawmakers to push for a specific ban on AI-generated child pornography. Versions of this story keep surfacing in different states, with different tools and different perpetrators. As Bates points out, it’s most often the same category of victim: adolescent girls who did nothing more than exist online. 

Meanwhile, Ms. Magazine has documented how Grok's image features enabled the mass sexualization of real women and girls, including the production of explicit content from ordinary photos of people in swimwear. The magazine noted that major AI competitors, including Google and OpenAI, had implemented digital watermarks on generated images, but that xAI had not adopted comparable safeguards.

Technology Outpaced the Law. Now the Law Is Responding.

"Nudification" apps, tools that strip clothing from photos to generate realistic nude images, have existed in corners of the internet for years with little consequence. According to Noam Schwartz, CEO of Alice, regulations that focus only on platform responsibility tend to have limited impact.

“When enforcement pressure increases on one platform, offenders migrate, moving to smaller, less visible spaces where the activity becomes harder to detect, either that, or they employ more rigorous circumvention methods,” Schwartz said. So it’s no surprise that in 2024 and 2025, when major AI platforms including xAI updated their tools in ways that made the capability accessible to almost anyone, what had been a niche problem spread rapidly into schools and communities.

Pennsylvania moved to address this cat-and-mouse pattern, amending its criminal code to specifically classify AI-generated child pornography as a third-degree felony. Legislators in multiple other states are pursuing similar measures. 

“Effective policy needs to address both sides of the problem. Platforms should have clear, enforceable obligations to remove harmful content quickly,” Schwartz said, adding that the recently signed Take It Down Act is a step in the right direction.

Legal experts caution, however, that legislation takes time, and teenagers remain acutely vulnerable while lawmakers work to catch up.

The Human Cost

Behind every court filing is a teenager, or a young adult barely old enough to understand what has happened to them, trying to process what they feel is permanent damage. The mother of one of the Tennessee plaintiffs described watching her daughter have a panic attack after learning the images had been distributed with no hope of recall. Her daughter's senior year, the spring formal, graduation, senior trip, would now be shadowed by the fear that anything she shared online could be weaponized against her again.

Researchers and mental health professionals have documented that victims of nonconsensual intimate image abuse, including AI-generated deepfakes, experience anxiety, depression, social withdrawal, and in some cases suicidal ideation. The shame is compounded by the permanence: unlike a rumor or a whisper campaign, an image can be downloaded, copied, and re-uploaded indefinitely.

What Needs to Happen Now

The lawsuit against xAI is significant for the precedent it could set. For too long, AI companies have been able to release powerful image-generation tools with minimal accountability for the harms those tools enable. The plaintiffs' attorney argued, "We want to make it a business decision that does not make any business sense anymore."

Achieving that will require action on several fronts. Schools need clear protocols so that when students report these incidents, administrators escalate them rather than bury them. And parents and educators need to have honest, age-appropriate conversations with teenagers about how their online presence can be exploited.

When the Perpetrators Are Children Too

The victims at the center of these cases did nothing wrong. But the alleged perpetrators in some of these cases are also teenagers, and legislation must account for them too; when minors are involved, proportional accountability matters. “That means investing in law-enforcement training, investigative capabilities, and legal frameworks that allow authorities to identify and prosecute those who create and distribute abusive deepfake content, rather than simply pushing the problem from one platform to another,” Schwartz said.

Schwartz noted that South Carolina's recent legislation, which directs first-time juvenile offenders to family court and behavioral counseling rather than criminal prosecution, is a thoughtful model. 

A Reason to Hope

The deepfake crisis is already here, creating an urgent need to address the issues playing out in real schools, real courtrooms, and real families across the country. Fortunately, people are no longer treating AI harms as an abstraction.

“Parents, educators, workers, and policymakers are now asking sharper questions about accountability, fairness, and safety. We still have time to shape how these systems enter public life,” Branch said. Still, every major technological revolution has come with serious risks. 

“What gives me hope,” Schwartz said, “is that the same technology creating these new threats is also becoming one of the most powerful tools to mitigate them. AI is being embedded into every tool, and AI-driven detection tools can identify synthetic content, flag patterns of abuse, and help platforms respond faster than any human moderation team could.”

What AI can’t replace, though, is childhood innocence. Legislatures need to pass meaningful criminal penalties for the production and distribution of AI-generated child sexual abuse material (CSAM). Tech companies are releasing these products at speed, and legislators must respond with equal urgency to ensure public safeguards.

As we think about the future of technology and its impact on human behavior, the Grok lawsuit is an opportunity to establish something simple yet critical: If you build the tool that made this possible, you bear responsibility for what it did. AI companies also need to implement structural safeguards before deploying tools capable of generating explicit content, not wait until a public outcry or class-action lawsuit forces their hand.