Digital Vetting in Public Safety: Preventing Avoidable Hiring Failures

A Missouri officer was fired two days after being sworn in. His racist posts were public all along; digital vetting would have caught them first.
University of Maryland Student Project

What Is Digital Vetting in Public Safety?

In August 2023, the Pleasant Hill Police Department in Missouri was forced to fire police officer Jacob Smith just two days after he was sworn in, following the discovery of a racist and homophobic social media post he had made.

The incident occurred because the department failed to conduct a thorough social media background check before hiring him. Once citizens uncovered the discriminatory post and brought it forward, an internal investigation began. Police Chief Tommy Wright described the post as "offensive" and not reflective of the department's values.

Smith was placed on paid leave and later terminated. The City of Pleasant Hill publicly acknowledged the failure in its hiring process and has since added safeguards to prevent similar mistakes.

Why Public Safety Roles Need Stronger Digital Screening

This situation raises important questions about whether traditional background checks are enough, and whether breakdowns like this stem from human error, technical gaps, or both. It also prompts a larger conversation about the validity and security of digital vetting, something every responsible technology user should consider.

A similar reminder of how quickly public behavior can undermine trust appeared in Montgomery County, where two firefighters went viral for flooding a baseball field out of frustration. Incidents like this show how easily online and offline conduct can damage confidence in public-safety roles.

At the same time, agencies face a difficult balance. They must improve screening without crossing into mass surveillance or overreach. Headlines often focus on extremes, making it easy to assume all screening technologies work the same way.

In reality, digital vetting falls along a spectrum:

• When misused, social media analysis can violate privacy, suppress free speech, and amplify bias

• When used ethically, digital behavior screening helps agencies make informed decisions

• Responsible tools focus on public behavior, not personal beliefs

• Clear guardrails are especially critical in public-trust roles like law enforcement, fire/EMS, and federal agencies such as the FBI, DHS, and FDA

Clearly, something must bridge the gap between doing nothing and doing too much.

Where Ferretly Fits In

Ferretly isn't built to watch everyone, and it doesn't try to predict the future. It's a targeted vetting tool that reviews only public digital content and only when a person has been selected for screening. There's no constant monitoring, no sweeping data collection, and no hidden tracking. Ferretly focuses on what someone does, not what they believe.

A review only occurs when an agency has a clear, documented reason to look. Everything in Ferretly's system operates with purpose and permission, meaning every search is traceable and transparent.
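Ferretly's internal implementation isn't public, so the following is only a minimal sketch of what "purpose and permission" could look like in software. Every name here (ScreeningRequest, open_screening, AUDIT_LOG) is invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch only; not Ferretly's actual system.
# The idea: a review cannot start without a documented reason
# and an accountable requester, and every request is logged.

@dataclass
class ScreeningRequest:
    candidate_name: str
    requested_by: str       # the agency employee accountable for the search
    documented_reason: str  # e.g., "conditional offer of employment"
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

AUDIT_LOG: list[ScreeningRequest] = []

def open_screening(candidate_name: str, requested_by: str, reason: str) -> ScreeningRequest:
    """Refuse to start a review unless purpose and permission are on record."""
    if not reason.strip():
        raise ValueError("A documented reason is required before any review begins.")
    request = ScreeningRequest(candidate_name, requested_by, reason)
    AUDIT_LOG.append(request)  # every search stays traceable after the fact
    return request
```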

How Digital Vetting Works in Practice

Imagine a new officer being hired at your local police department. Rather than assuming a traditional background check will catch every concerning behavior, the department runs a one-time digital screening of the candidate's public social media. If something inappropriate appears, it's addressed before the person is given the job. Now picture a federal agency screening an applicant for a high-level clearance. They aren't watching this person daily; they're checking for public risk signals that could put others in danger.

Ferretly also emphasizes behavior over beliefs. Every flagged item comes with a link, timestamp, and full context so agencies know exactly where the information came from. If the content involves threats or criminal behavior, it is escalated to a human reviewer, not a black-box algorithm. Agencies can also adjust their own risk settings to determine what matters and what doesn't.
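To make that concrete, here is a hypothetical sketch of how a flagged item and agency risk settings might be represented. The field names, categories, and the needs_human_review helper are assumptions for illustration, not Ferretly's actual data model:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch only; fields and categories are illustrative.

@dataclass
class FlaggedItem:
    url: str             # link back to the original public post
    posted_at: datetime  # timestamp of the post
    excerpt: str         # full surrounding context, not an isolated phrase
    category: str        # e.g., "threat", "criminal", "harassment"
    severity: int        # 1 (low) through 5 (high)

# Agencies tune what matters: items below their chosen severity
# threshold are surfaced in the report but not escalated.
AGENCY_RISK_SETTINGS = {"threat": 1, "criminal": 1, "harassment": 3}

def needs_human_review(item: FlaggedItem) -> bool:
    """Content at or above the agency's threshold reaches a person,
    never an automated pass/fail decision."""
    threshold = AGENCY_RISK_SETTINGS.get(item.category, 5)
    return item.severity >= threshold
```

The design point the sketch is meant to capture: the record carries its own provenance (link, timestamp, context), and the thresholds belong to the agency, not the vendor.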

In short, Ferretly fills the gap between doing nothing and doing too much. It gives agencies a balanced way to catch real risks without crossing into surveillance.

If a system like this had been used in Pleasant Hill, Jacob Smith's public posts would have surfaced long before he was sworn in, and the department wouldn't have been blindsided or had its integrity questioned. Digital vetting isn't about punishing applicants; it's about preventing avoidable failures before they become headlines.

###

About This Article

This piece was developed as part of a University of Maryland writing practicum exploring AI ethics, responsible AI-assisted content creation, and advanced prompting techniques. The course was led by Adam Lloyd, Ph.D., with industry mentorship provided by Ferretly to ground coursework in real-world application and ethical AI use.

Student Author: Rhea Mammen
rmammen@terpmail.umd.edu · LinkedIn

Course Faculty & Mentorship
Adam Lloyd, Ph.D. · Senior Lecturer, University of Maryland
Adam teaches business and technical writing with a focus on real-world application; his courses partner with companies to create actual workplace deliverables. He co-created UMD's "Digital Rhetoric at the Dawn of Extra-Human Discourse," exploring AI's role in academic, creative, and professional writing. A former journalist, startup founder, and award-winning educator, he holds advanced degrees in English, philosophy, and national security studies.
lloyda@umd.edu · LinkedIn

Nicole Young · VP, Growth Marketing
Nicole provides industry mentorship for this course, bringing deep experience in growth marketing, advertising strategy, and AI-integrated content systems. Her work focuses on building ethical, scalable marketing programs at the intersection of technology, trust, and brand performance. She welcomes collaboration with academic programs seeking practitioner partnerships.
nicole@ferretly.com · LinkedIn

Want to see a sample social media report?

Schedule a free demo