
How Large Language Models Work And What That Means for Courts

Event Details

November 13 | 12:00 pm

AI is no longer a distant issue for the judiciary—it is already in the courtroom. Lawyers are filing AI-generated briefs, vendors are marketing “AI for judges,” and fabricated case citations have appeared in both filings and opinions. This CLE cuts through the hype to explain how today’s large language models (LLMs) actually work, why they sometimes make things up, and how courts can harness them responsibly.

Drawing on current examples—including Mata v. Avianca (S.D.N.Y. 2023) and the remarkable story of Cassandra White, a pro se tenant who overturned her own eviction using ChatGPT—the program explores both the dangers and opportunities AI presents to judicial administration. Participants will learn what makes LLMs powerful, where they fail, and what architectural safeguards are required before any AI system should ever assist a court.

Shlomo Klapper, developer of the Learned Hand judicial AI platform, offers an insider’s, non-promotional perspective on what it takes to build technology that upholds—rather than undermines—the judiciary’s core mission of justice, fairness, and efficiency. The session introduces a practical five-question framework for evaluating AI proposals, explains the concept of “hallucination,” and demonstrates the four essential design constraints of trustworthy judicial AI: Evidence-Linking, Procedural Grounding, Auditability, and Multi-Agent Cross-Check.

Speaker: Shlomo Klapper, CEO/Founder @ Learned Hand; J.D., Yale Law School

Moderator: Oliver Roberts, Co-Director, WashU Law AI Collaborative

Hosted by:
Washington University School of Law — AI Collaborative “AI Policy Series”

Eligible for FREE CLE in Missouri, courtesy of WashU Law.

Not eligible for CLE credit in other jurisdictions, but all are welcome to attend free of charge.