Screen With Fairness: Can AI Make for a Fairer Jury?

AI helps attorneys identify juror biases through passive social media review—while staying within court ethical guidelines.
University of Maryland Student Project

One of the most difficult aspects of a court case is seating the right jury. Attorneys want to pick jurors who will be receptive to their side, and a great deal of work goes into analyzing potential jurors to uncover the biases and traits that might influence their verdict.

Now, AI-assisted social media screening can make the jury selection process more efficient. The question is whether doing so raises ethical concerns.

Important Note: Ferretly conducts screening solely through passive review of publicly available accounts and never makes any direct or indirect contact with those it is reviewing.

How Does AI Screening Work?

AI searches through an individual’s public social media presence and identifies posts with certain signals that point towards unwanted behaviors or biases. The AI identifies these signals through both textual and image analysis. Then, a human reviews what the AI has found to verify its results. Those results are then sent to the client.
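To make the two-stage process above concrete, here is a minimal sketch of an automated-flag-then-human-verify pipeline. All names, signal keywords, and posts are hypothetical illustrations; this is not Ferretly's actual implementation, which also includes image analysis.

```python
from dataclasses import dataclass

# Hypothetical signal definitions: each label maps to keywords the
# automated pass scans for in the text of public posts.
SIGNALS = {
    "substance_use": ["edibles", "420"],
    "law_enforcement": ["defund", "acab"],
}

@dataclass
class Flag:
    post_text: str
    signal: str
    human_verified: bool = False  # stays False until a reviewer confirms

def automated_pass(posts):
    """Stage 1: the AI-style pass flags posts matching any configured signal."""
    flags = []
    for post in posts:
        lowered = post.lower()
        for signal, keywords in SIGNALS.items():
            if any(k in lowered for k in keywords):
                flags.append(Flag(post_text=post, signal=signal))
    return flags

def human_review(flags, approve):
    """Stage 2: a person confirms or discards each automated flag."""
    verified = []
    for flag in flags:
        if approve(flag):
            flag.human_verified = True
            verified.append(flag)
    return verified

# Example run on two hypothetical public posts.
posts = ["Loved the edibles at the party", "Great hike today"]
flags = automated_pass(posts)
report = human_review(flags, approve=lambda f: True)
```

The key design point mirrored here is that nothing reaches the client report without passing through the `human_review` stage.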

Human interpretation of AI output is crucial to maintaining a balance between machine efficiency and human judgment.

Ferretly follows this same process, combining both textual and image analysis with human oversight to ensure accuracy and appropriateness.

How to Use AI for Jury Screening Responsibly

How to Identify Biases

Court cases often address politically charged controversies, such as substance abuse, views on law enforcement, and the class divide. With Ferretly, a client can select which keywords or signals they want the AI to search for. Jury selection teams can use this process to choose the biases that apply to the case.

For instance, a prosecutor on a drug-related charge would want to know a potential juror’s thoughts about drug use. The prosecutor could prioritize that specific signal so the AI searches for posts in which the potential juror promotes drug use.

Teams must carefully select their target signals and verify the AI’s results to ensure that the process follows the standards set by the courts. Even AI can make mistakes, so everything should be double-checked.

Who to Keep on the Jury

Using these verified insights, legal professionals can compile juror-specific profiles to make the final decision.

Below is a fictional example of what a juror profile based on AI social media screening may look like.

Juror #10 appears to be in their early 30s and works in a customer-facing role in the tech sector. Their public social media activity is mostly centered around family, hobbies, and political ideas.

• Substance Use: Posts with photos of illicit substances indicate a permissive view on substance use.

• Law Enforcement: Posts denouncing the local police indicate a negative sentiment towards law enforcement.

• Corporate Views: Posts sharing articles bashing large corporations indicate a negative sentiment towards corporations.

This example reflects human-verified interpretations of signals identified by AI. Having the AI assemble the profiles itself would overreach its proper role in the jury selection process. The more humans are involved, the less likely courts or opposing parties are to object to the process.

AI is just a tool to empower attorneys who make the final decision.
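A profile like Juror #10’s can be thought of as a simple structured record of human-verified findings. The sketch below is purely illustrative; the field names and the `JurorProfile` structure are hypothetical, not a Ferretly data format.

```python
from dataclasses import dataclass

@dataclass
class SignalFinding:
    signal: str          # e.g. "substance_use"
    sentiment: str       # the human reviewer's interpretation
    evidence: str        # short description of the verified post(s)
    human_verified: bool # must be True before inclusion in a profile

@dataclass
class JurorProfile:
    juror_id: int
    summary: str
    findings: list

# The fictional Juror #10 from the article, expressed as a record.
juror_10 = JurorProfile(
    juror_id=10,
    summary="Early 30s, customer-facing tech role; public activity "
            "centers on family, hobbies, and political ideas.",
    findings=[
        SignalFinding("substance_use", "permissive",
                      "photos of illicit substances", True),
        SignalFinding("law_enforcement", "negative",
                      "posts denouncing local police", True),
        SignalFinding("corporate_views", "negative",
                      "shares articles critical of large corporations", True),
    ],
)

# Guardrail from the article: only human-verified findings belong
# in a profile that reaches the attorney making the final call.
assert all(f.human_verified for f in juror_10.findings)
```

Keeping the profile a plain, human-authored record reinforces the point above: the AI surfaces signals, but people assemble and own the interpretation.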

How to Appeal to Jurors

Jury selection teams can use the profiles they created beforehand to adapt their arguments in ways that best appeal to individual jurors. The arguments used in court should be tailored to the results of the AI social media screening process.

Is Using AI for Jury Selection Fair?

Can AI Eliminate Biases?

When people do all of the social media screening themselves, there is a high likelihood that their internal biases will affect the outcome of the research. AI is not immune to this problem either: it can absorb biases present in the datasets used to train it.

However, Ferretly’s design focuses on eliminating any biases present. We safeguard against this problem by staying transparent about signal definitions, using continuously reviewed, unbiased datasets, and always including human oversight of AI activity.

Does AI Fit the Court’s Limits on Juror Research?

In jury selection, the fairness of the jury comes from both sides competing to pick jurors who would favor their side over the other. The rules the courts set up ensure that both sides have equal access to information and that potential jurors’ rights are not violated.

One of the rules the courts have imposed is strict regulation of research into potential jurors’ social media. The core principle is that the research must be passively conducted on public accounts.

Passive observation means that nobody on the jury selection team can reach out to the jurors through their social media. The research can only be conducted through viewing their accounts with no direct or indirect contact.

The research must be conducted solely on jurors’ public accounts, so no research can be done on any private accounts that jurors may have. This ties into the passive-observation rule, because accessing a private account requires contacting its owner for permission.

At Ferretly, our AI follows this core principle. The AI makes no contact with the people whose social media it is analyzing and the AI only looks at publicly available accounts.

As AI continues to evolve and grow in popularity, the courts might create specific rules for how it can be used. Ferretly can serve as a good model for how AI can follow ethical standards while still supplying efficient and useful insights for jury selection teams.

 ###

About This Article

This piece was developed as part of a University of Maryland writing practicum exploring AI ethics, responsible AI-assisted content creation, and advanced prompting techniques. The course was led by Adam Lloyd, Ph.D., with industry mentorship provided by Ferretly to ground coursework in real-world application and ethical AI use.

Student Author: Nina Mills
gmills1@terpmail.umd.edu · LinkedIn

Course Faculty & Mentorship
Adam Lloyd, Ph.D. ·
Lecturer, University of Maryland
Adam teaches business and technical writing with a focus on real-world application—his courses partner with companies to create actual workplace deliverables. He co-created UMD's "Digital Rhetoric at the Dawn of Extra-Human Discourse," exploring AI's role in academic, creative, and professional writing. A former journalist, startup founder, and award-honored educator, he holds advanced degrees in English, philosophy, and national security studies.
lloyda@umd.edu · LinkedIn

Nicole Young · VP, Growth Marketing
Nicole provides industry mentorship for this course, bringing deep experience in growth marketing, advertising strategy, and AI-integrated content systems. Her work focuses on building ethical, scalable marketing programs at the intersection of technology, trust, and brand performance. She welcomes collaboration with academic programs seeking practitioner partnerships.
nicole@ferretly.com · LinkedIn

Want to see a sample social media report?

Schedule a free demo