Screen With Fairness: Can AI Make for a Fairer Jury?

AI helps attorneys identify juror biases through passive social media review—while staying within court ethical guidelines.
University of Maryland Student Project

One of the most difficult aspects of a court case is finding the right jurors. Attorneys want to seat jurors who will be receptive to their side, and a great deal of work goes into analyzing potential jurors to uncover the biases and traits that might influence their verdict.

Now, with AI-assisted social media screening, jury selection can be far more efficient. The question is whether it can remain within the courts' ethical boundaries.

Important Note: Ferretly conducts screening solely through passive review of publicly available accounts and never makes any direct or indirect contact with those it is reviewing.

How Does AI Screening Work?

AI searches through an individual's public social media presence and identifies posts with signals that point toward unwanted behaviors or biases. The AI identifies these signals through both textual and image analysis. A human then reviews what the AI has found to verify its results, and those verified results are sent to the client.

Human interpretation of AI output is crucial: it keeps people, not machines, responsible for the judgments that matter.

Ferretly follows this same process, combining textual and image analysis with human oversight to ensure accuracy and appropriateness.
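The workflow above can be sketched in code. This is a minimal illustration only: the signal names, data shapes, and keyword-matching approach are assumptions for the example, not Ferretly's actual implementation (a real system would also include image analysis and far more sophisticated models).

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """One AI-generated finding awaiting human review."""
    post_id: str
    signal: str          # e.g. "substance_use" (illustrative name)
    confidence: float    # model confidence, 0.0-1.0
    human_verified: bool = False

def screen_posts(posts, signal_keywords, threshold=0.5):
    """Flag public posts whose text matches a watched signal.

    Simple keyword matching stands in for real text/image analysis.
    """
    flags = []
    for post in posts:
        text = post["text"].lower()
        for signal, keywords in signal_keywords.items():
            hits = [kw for kw in keywords if kw in text]
            if hits:
                confidence = min(1.0, len(hits) / len(keywords) + 0.5)
                if confidence >= threshold:
                    flags.append(Flag(post["id"], signal, confidence))
    return flags

def human_review(flags, approve):
    """Every AI flag passes through a human reviewer before it
    reaches the client; unapproved flags are dropped."""
    verified = []
    for flag in flags:
        if approve(flag):
            flag.human_verified = True
            verified.append(flag)
    return verified
```

A short usage example: `screen_posts` produces candidate flags, and only what a human reviewer approves in `human_review` ever reaches the client report.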

How to Use AI for Jury Screening Responsibly

How to Identify Biases

Court cases often touch on politically charged topics, such as substance abuse, attitudes toward law enforcement, and class divides. With Ferretly, a client can select which keywords or signals the AI should search for, so jury selection teams can target the biases that are relevant to the case.

For instance, a prosecutor on a drug-related case would want to know a potential juror's views on drug use. The prosecutor could prioritize that specific signal so the AI surfaces any public post in which the potential juror promotes drug use.

Teams must carefully select their target signals and verify the AI's results to ensure the process follows the standards set by the courts. AI can make mistakes, so everything should be double-checked.
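Case-specific signal selection, as in the prosecutor example above, might look like the following sketch. The signal names and numeric weights are purely illustrative assumptions, not a real Ferretly configuration.

```python
# Default watch list: every signal starts with equal weight.
DEFAULT_SIGNALS = {
    "substance_use": 1.0,
    "law_enforcement": 1.0,
    "corporate_views": 1.0,
}

def prioritize(signals, boosts):
    """Return a copy of the signal config with case-relevant
    signals weighted more heavily. Rejects unknown signal names
    so a typo cannot silently create an unreviewed category."""
    config = dict(signals)
    for name, weight in boosts.items():
        if name not in config:
            raise KeyError(f"unknown signal: {name}")
        config[name] = weight
    return config

# A prosecutor on a drug-related case boosts the substance-use signal:
case_config = prioritize(DEFAULT_SIGNALS, {"substance_use": 3.0})
```

Working on a copy leaves the default configuration untouched, so each case starts from the same verified baseline.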

Who to Keep on the Jury

Using these verified insights, legal professionals can compile juror-specific profiles to make the final decision.

Below is a fictional example of what a juror profile based on AI social media screening might look like.

Juror #10 appears to be in their early 30s and works in a customer-facing role in the tech sector. Their public social media activity is mostly centered on family, hobbies, and politics.

• Substance Use: Posts with photos of illicit substances indicate a permissive view on substance use.

• Law Enforcement: Posts denouncing the local police indicate a negative sentiment toward law enforcement.

• Corporate Views: Posts sharing articles critical of large corporations indicate a negative sentiment toward corporations.

This example reflects human-verified interpretations of signals identified by AI. Having the AI assemble the profiles itself would overreach its proper role in jury selection. The more humans are involved, the less likely courts or opposing parties are to object to the process.

AI is just a tool that empowers attorneys, who make the final decision.
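The human-in-the-loop rule above can be made concrete with a small sketch of profile assembly. The field names and finding structure are hypothetical; the point is the filter, which drops anything a human reviewer has not confirmed, so the AI never authors a profile on its own.

```python
def build_profile(juror_id, findings):
    """Group verified findings by signal for the final report.

    Each finding is a dict with "signal", "summary", and
    "human_verified" keys (illustrative schema). Unverified
    AI output is excluded entirely.
    """
    profile = {"juror": juror_id, "findings": {}}
    for f in findings:
        if not f.get("human_verified"):
            continue  # the human gate: unreviewed flags never ship
        profile["findings"].setdefault(f["signal"], []).append(f["summary"])
    return profile

findings = [
    {"signal": "law_enforcement",
     "summary": "Posts denouncing local police",
     "human_verified": True},
    {"signal": "substance_use",
     "summary": "Unreviewed AI flag",
     "human_verified": False},
]
profile = build_profile("Juror #10", findings)
# Only the human-verified finding appears in the profile.
```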

How to Appeal to Jurors

Jury selection teams can use the profiles they built beforehand to adapt their arguments so they best appeal to individual jurors, tailoring their courtroom presentation to the results of the AI social media screening.

Is Using AI for Jury Selection Fair?

Can AI Eliminate Biases?

When people conduct all of the social media screening, there is a high likelihood that their internal biases will affect the outcome of the research. AI does not eliminate that risk entirely: a model can absorb biases present in the datasets used to train it.

However, Ferretly's design focuses on eliminating any biases present. We safeguard against this problem by staying transparent about signal definitions, using continuously reviewed, unbiased datasets, and always including human oversight of AI activities.

Does AI Fit the Courts' Limits on Juror Research?

In jury selection, fairness comes from both sides competing to seat jurors who would favor their side over the other. The rules the courts set up ensure that both sides have equal access to information and that potential jurors' rights are not violated.

One of the rules the courts have imposed is strict regulation of research into potential jurors' social media. The core principle is that the research must be conducted passively and only on public accounts.

Passive observation means that nobody on the jury selection team may reach out to jurors through their social media. Research can be conducted only by viewing their accounts, with no direct or indirect contact.

The research must be limited to jurors' public accounts; no research may be done on any private accounts jurors hold. This ties into the passive-observation rule, because accessing a private account requires contacting the account's owner for permission.

At Ferretly, our AI follows this core principle. The AI makes no contact with the people whose social media it analyzes, and it only looks at publicly available accounts.

As AI continues to evolve and grow in popularity, the courts may create specific rules for how it can be used. Ferretly can serve as a model for how AI can meet ethical standards while still supplying efficient, useful insights for jury selection teams.

 ###

About This Article

This piece was developed as part of a University of Maryland writing practicum exploring AI ethics, responsible AI-assisted content creation, and advanced prompting techniques. The course was led by Adam Lloyd, Ph.D., with industry mentorship provided by Ferretly to ground coursework in real-world application and ethical AI use.

Student Author: Nina Mills
gmills1@terpmail.umd.edu · LinkedIn

Course Faculty & Mentorship
Adam Lloyd, Ph.D. ·
Lecturer, University of Maryland
Adam teaches business and technical writing with a focus on real-world application—his courses partner with companies to create actual workplace deliverables. He co-created UMD's "Digital Rhetoric at the Dawn of Extra-Human Discourse," exploring AI's role in academic, creative, and professional writing. A former journalist, startup founder, and award-honored educator, he holds advanced degrees in English, philosophy, and national security studies.
lloyda@umd.edu · LinkedIn

Nicole Young · VP, Growth Marketing
Nicole provides industry mentorship for this course, bringing deep experience in growth marketing, advertising strategy, and AI-integrated content systems. Her work focuses on building ethical, scalable marketing programs at the intersection of technology, trust, and brand performance. She welcomes collaboration with academic programs seeking practitioner partnerships.
nicole@ferretly.com · LinkedIn

Want to see a sample social media report?

Schedule a free demonstration