What Years of Teaching Taught Me About Vulnerability in the Digital Age

The lessons I’m still learning about how to live authentically in a world of deepfakes and artificial intelligence.

I stood in front of my classroom for nearly two decades. During the first several years of my teaching, I’d watch nervous freshmen file into my English classroom, armed with the same defense mechanisms I’d seen for years. They’d grab a seat in the back of the room, avoid eye contact, and slouch in their chairs–the posture that said don’t call on me. They were terrified to be vulnerable, and they just needed a little help finding some courage.

I’d spend the first months on the short story unit, intentionally designed to coax them out of those shells, nudging them to take small risks: to raise a hand even when they might be wrong, to admit they didn’t understand The Masque of the Red Death.

By October, we’d usually get there. They’d start to think out loud, to ask questions, to risk being wrong, to try. Then came the digital age, and everything changed.

We Built This Ourselves

I remember the exact moment I questioned whether we had crossed a line. A student had come to tell me that she’d received a crude picture from a boy. He asked her to send one in kind. It was the early 2000s and sexting was exploding. Adults tried to respond with threats of legal consequences, warning that exchanging nude pictures was tantamount to having child pornography on your phone. The fear of being arrested didn’t stop them.

And innovation marched forward at rapid speed. In 2011, my former superintendent wanted to implement a 1:1 program to establish himself as a “technology leader.” In the fall, every student in the district would receive an iPad. Every teacher had to spend the summer figuring out how to use it. We had no professional development. There was no discussion of which apps were safe, what data they were collecting, or how we would protect students’ personally identifiable information. No one ever uttered the word ‘cybersecurity.’

Meanwhile, kids were growing tired of trying to hide inappropriate text messages from the peering eyes of their parents. Then Snapchat hit the scene. To curb the risk of getting caught sharing images, Snapchat allowed users to share pictures that would then disappear. The ephemeral nature of the images (and eventually videos and even chat messages) catapulted the company to widespread success in just two years, and by 2015, Snapchat had approximately 75 million users, according to news from The Street.

When Technology Became a Weapon

While I can’t confirm the truth of the rumor, I remember a horrifying story that still haunts me today about a middle school girl who was caught on camera in the locker room after school hours. She was allegedly performing oral sex on a peer, who decided he’d get a snapshot of her in the act. But he didn’t keep the picture to himself. He shared it. Others shared it. And somewhere along the line, someone decided to print it out. They made photocopies. As the story goes, they hung the pictures up around town, even on the girl’s front door. Technology was being weaponized.

What we know to be true is that in 2012, an Ontario substitute teacher was investigated after a group of fifth-grade students claimed she had abused them. The accusations spread among parents, resulting in a formal investigation. After months of having her life upended and enduring the agony of deep scrutiny, she was cleared. The students had made up the entire story. But this case wasn’t an anomaly. “What’s really sad in this is we’ve got adolescents making terrible accusations against teachers and getting off scot-free with absolutely no penalty whatsoever,” said then McGill University professor Jon Bradley, who knew not only of this case but of another that ended with the falsely accused teacher taking his own life.

As technology has evolved and access to social media platforms has become more widespread, these personal attacks on teachers have only escalated. In July 2024, NPR reported that “a number of middle school students in Pennsylvania” had “created 22 fake TikTok accounts impersonating teachers at Great Valley Middle School.” According to The New York Times, this was “the first known group TikTok attack of its kind by middle schoolers on their teachers in the United States.”

Approximately 20 teachers were targeted in what these students thought was a funny prank. They victimized their Spanish teacher Patrice Motz using a photo of her family at the beach. Superimposed text pasted over the image read, “Do you like to touch kids? Answer: Sí.” Other fake accounts showed teachers’ heads superimposed on partially naked bodies, fake sexual relationships between colleagues, and racist and homophobic content. Reputations sullied by mere suggestion without regard for the impact on human life.

While the superintendent reportedly acknowledged that he was disheartened, the Mainline Times reported that “the district had communicated with local law enforcement about the issue and that some of the students had been disciplined for their actions [but] there had been no arrests.”

These are all examples of the weaponization of information, and the culprits of these acts of defamation, harassment, and cruelty are children. We’ve given them these tools with little to no guidance on the impact of their misuse, yet we expect them to understand how to use technology responsibly.

Addressing the Deepfake Problem We Saw Coming

Many readers will recall the deepfake video of President Obama that went viral in 2018. Because it was so clearly a fake, people were dismissive of the threat. Fast forward a few years, and headlines from McAfee read, “The World’s Most Deepfaked Celebrities Revealed.”

It’s no surprise that the use of deepfakes has escalated, and the motivations for creating them have become much more diverse. Some students naively think they’re only pulling a harmless prank, but other creators are far more nefarious. Scorn, revenge, tomfoolery, and, of course, money are often the primary motivations for targeting victims.

In the case of Dazhon Darien, he was a seemingly disgruntled employee who created a deepfake voice of his principal spewing racist and antisemitic hate speech. Months after he released the recording, law enforcement and forensics teams determined that the audio was generated by AI and also learned that Darien had searched for AI tools using the school’s internet. Darien was sentenced to four months in jail. All the while, most people who heard the recording believed what they heard. The principal was judged guilty by society before being afforded his right to be presumed innocent. His ordeal feels far worse than Darien’s punishment, particularly since he committed no crime.

Matthew McConaughey made news last week for trademarking his voice and likeness over deepfake concerns, a move that will likely inspire many others in Hollywood to do the same. Legislation is starting to catch up with the crime, first and foremost by recognizing that it is indeed a crime to create a non-consensual deepfake. In addition to state legislation passed in an effort to quell disinformation campaigns and protect against deepfakes in elections, the US Congress passed the Take It Down Act in May of last year, which mandates that platforms “implement a ‘notice and takedown’ process: within 48 hours of notice, the platform must remove the content and take reasonable efforts to eliminate duplicates.”

The Real Lessons I’m Still Learning

The reality is that legislation can’t protect victims of broken trust, and it can’t repair a reputation damaged by false accusations. Being human means making yourself vulnerable. It’s impossible to engage in any relationship, personal or professional, without accepting the risk that something could go wrong. That’s why trust is so important. Truth, integrity, and honesty matter in the digital age more than ever before. Trusting someone, especially those we know or interact with mostly from behind a screen, is inherently risky.

I don’t want to live in a world where no one risks being honest, where everyone performs a carefully curated version of themselves, where genuine connection is sacrificed for self-protection. That’s not living. It’s conceding. So every day I have to remind myself that good will prevail. And I selectively leverage the platforms that enable me to share that hope and find others who pray for the same.

We can’t hide from technology, but we also don’t need to surrender to it. We have to find the courage to embrace innovation and teach our children how to use it responsibly. We have to be brave enough to tell the truth. In the words of Atticus Finch, real courage is “when you know you’re licked before you begin, but you begin anyway and you see it through no matter what.”

Most importantly, it’s telling the truth even when truth has become impossibly complicated. Even when the truth is risky. Even when no one believes you.
