Protecting Trust in the Age of Deepfakes and Disinformation

By T.J. Winick

Artificial intelligence has made it frighteningly easy to create convincing fake videos, audio clips, and images of anyone without their knowledge or consent. For U.S. schools, this isn’t some distant theoretical threat. A single piece of AI-generated disinformation can spread quickly, sow confusion, and damage hard-won trust with students, parents, faculty, alumni, and the broader community.

It’s easy to feel overwhelmed by that possibility. After all, you can’t control what a bad actor might fabricate or post online. But there is something you can control completely: your school’s internal and public response. And as Rebecca Emery of Seacoast AI and I shared recently with a group of independent school leaders, it’s often not the crisis itself that people remember most—it’s how and when the school chose to respond.

The Public Response Matters

There’s a principle borrowed from medicine that applies perfectly here: “Do no harm.” When disinformation hits, a clumsy or defensive response can make things worse. Silence can look like confusion or indifference. Overreaction can pour fuel on the fire. From a reputation standpoint, the school’s goal is to respond in a way that protects students, faculty, and staff, maintains trust and transparency, and keeps the school community moving forward with as little disruption as possible.

The stakes couldn’t be clearer. Even if a deepfake or synthetic audio clip is later proven false, the initial shock may have already caused real damage. Students and parents may feel unsafe or uninformed. Faculty may worry about leadership’s readiness. Alumni and donors may quietly back away. Disinformation often doesn’t invent problems out of thin air—it shines a harsh light on existing tensions and past shortcomings.

Hindsight Is 20/20

We can learn from real-world failures. One cautionary tale is Lancaster Country Day School in Pennsylvania. In late 2023, the school was tipped off to AI-generated nude images of female students. However, its internal investigation appeared to go nowhere—it wasn’t clear whether the school involved law enforcement or state authorities at the time of the incident. Several months later, after the images had been repeatedly shared among students, parents discovered them circulating online, prompting a police investigation that revealed more than 60 victims, many of whom were minors. Leadership resigned, classes were canceled, and students walked out in protest. The harm wasn’t just in the crime itself but in how the school mishandled it in the months that followed: slow, silent, confusing, and dismissive when the community most needed clarity and care. In the end, two of the school’s students were formally charged in juvenile court with creating and distributing the images.

That kind of crisis teaches a hard lesson: speed and transparency protect credibility. Misinformation and disinformation are rewarded by algorithms and spread faster online than ever. Even a few hours of silence can let rumors harden into accepted truths, and even poorly made deepfakes still have the power to seriously harm reputations.

Remember: you don’t have to have all the answers immediately. This is where bridge statements are so valuable: a clear early acknowledgment that you’re aware of the issue, that you’re investigating, and that you’ll keep people informed, e.g., “We don’t have all the facts yet, but here’s what we’re doing to gather information and respond.”

Such statements show leadership without overpromising. They buy time, steady the community, and help the school control the narrative while hopefully blocking speculation and disinformation from taking root.

Preparation is Essential

Your school likely has a crisis plan in place. Has it been updated to include scenarios for digital deception and disinformation? Are there draft templates ready for common scenarios—fake videos of staff, doctored voicemails targeting students, deepfakes of students created by students from another school, even manipulated images or advertisements representing your brand? You’ll never use the templates word for word, but in a crisis, they give you a helpful head start and help ensure consistent, thoughtful messaging.

Another important consideration: not every stakeholder group needs the same message. Parents want reassurance and next steps. Students need empathy and safety-focused language. Faculty want to feel supported. Alumni and donors will expect transparency and evidence of strong leadership. Media will look for clear facts and credible sources. Each channel requires its own tone and level of detail, whether that’s a short social post or a formal press release.

Social media monitoring is crucial during these moments. It’s not just about tracking keywords or hashtags tied to your school’s name. It’s about spotting spikes in emotion or misinformation early so you can correct the record before rumors spread unchecked. And when you do engage online, not every comment deserves a reply. Ignore jabs by provocateurs. But genuine questions or concerns deserve clear, respectful answers that demonstrate authority and care.

Finally, when the crisis passes, the work isn’t over. Schools should review their response, identify gaps, update policies, and run tabletop exercises to strengthen their readiness. This isn’t about assigning blame. It’s about being even better prepared next time.

In the end, responding well to AI-generated disinformation is not about achieving perfection. It’s about showing your community that, even in the face of uncertainty, your leadership remains in control, transparent, and focused on what matters most: the safety and well-being of students, and the preservation of the trust that holds your school community together.
