
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
In this Decoder episode, host Nilay Patel interviews Sean Fitzpatrick, CEO of LexisNexis, exploring how one of the legal profession's most foundational companies is transforming from a legal research database into an AI-powered drafting and analysis platform. (01:37) The conversation traces LexisNexis's evolution from simply being "the library" where lawyers looked up case law to what Sean describes as "an AI-powered provider of information and analytics and drafting solutions." (05:37)
Sean Fitzpatrick serves as CEO of LexisNexis for North America, the UK, and Ireland, reporting to Mike Walsh, who leads the Legal and Professional division of parent company RELX. Under his leadership, LexisNexis has transformed from a traditional legal research database into an AI-powered platform offering drafting solutions and analytics. He oversees Protégé, the company's fastest-growing product ever, an AI tool built on the Lexis+ AI platform that launched in 2023.
Nilay Patel is the Editor-in-Chief of The Verge and host of the Decoder podcast. A self-described "failed lawyer" who attended law school in the early 2000s, Patel brings a unique perspective to technology conversations, particularly around legal tech and AI. His background in both law and technology journalism allows him to probe the intersection of these fields with particular insight into how AI might reshape fundamental legal practices.
Sean emphasizes that consumer-grade AI tools like ChatGPT are fundamentally inadequate for legal work because they can't meet the evidentiary standards required in courtrooms. (12:21) LexisNexis addresses this by grounding its AI in a curated corpus of 160 billion legal documents and by implementing a "citator agent" that verifies cited cases actually exist and remain good law. The key insight here is that legal AI requires authoritative content, constant updates, transparency, and strict privacy protections that consumer tools simply cannot provide. This highlights how professional AI applications demand entirely different architectures from those of general-purpose models.
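As a rough sketch of what such a citation-verification pass might look like, consider the Python below. This is not LexisNexis's actual citator implementation; the `KNOWN_CASES` table and `verify_citations` helper are invented for illustration, standing in for queries against a live, curated index.

```python
from dataclasses import dataclass

# Hypothetical stand-in for a curated citator database; a real system would
# query a continuously updated index of case law rather than a dict.
KNOWN_CASES = {
    "Miranda v. Arizona, 384 U.S. 436 (1966)": "good_law",
    "Plessy v. Ferguson, 163 U.S. 537 (1896)": "overruled",
}

@dataclass
class CitationCheck:
    citation: str
    exists: bool          # does the case appear in the curated corpus?
    status: str | None    # citator signal, if the case is known

def verify_citations(draft_citations: list[str]) -> list[CitationCheck]:
    """Flag citations that are hallucinated or no longer good law."""
    results = []
    for cite in draft_citations:
        status = KNOWN_CASES.get(cite)
        results.append(CitationCheck(cite, exists=status is not None, status=status))
    return results

if __name__ == "__main__":
    checks = verify_citations([
        "Miranda v. Arizona, 384 U.S. 436 (1966)",
        "Plessy v. Ferguson, 163 U.S. 537 (1896)",
        "Smith v. Imaginary Corp., 999 U.S. 1 (2030)",  # hallucinated citation
    ])
    for c in checks:
        if not c.exists:
            print(f"REJECT (not found): {c.citation}")
        elif c.status != "good_law":
            print(f"WARN ({c.status}): {c.citation}")
        else:
            print(f"OK: {c.citation}")
```

The point of the pattern is that a draft never reaches the user until every citation has cleared both checks: existence in the curated corpus, and a current "good law" signal.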
The conversation reveals a concerning trend where AI automation is eliminating the foundational learning experiences that traditionally trained junior lawyers. (17:42) Sean acknowledges this challenge, noting how associates historically learned by doing detailed research and document review - tasks now being automated away. One example shared was an associate who became the firm's expert on asset securitization across all 50 states through hands-on work, a learning path that may no longer exist. This represents a broader challenge across knowledge work: how do we maintain expertise when AI automates the entry-level work that builds that expertise?
LexisNexis discovered they needed to hire significantly more attorneys than expected to review AI outputs, with Sean stating this was one of the most surprising aspects of their AI development. (46:15) These attorney reviewers are matched to specific practice areas - M&A specialists review M&A documents, ensuring domain expertise informs the AI's training. This approach reveals that successful AI deployment in professional contexts requires substantial human expertise investment, not just technical development. The "secret sauce" isn't just the technology, but the army of professionals ensuring quality and accuracy.
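A toy version of that expertise-matching step might route each AI output to a reviewer pool keyed by practice area. The reviewer pools and round-robin assignment below are invented for illustration; a real pipeline would draw on workflow and staffing systems.

```python
from collections import defaultdict

# Hypothetical reviewer pools keyed by practice area.
REVIEWERS = {
    "mergers_acquisitions": ["attorney_ma_1", "attorney_ma_2"],
    "securities": ["attorney_sec_1"],
    "litigation": ["attorney_lit_1", "attorney_lit_2"],
}

def assign_reviews(ai_outputs: list[dict]) -> dict[str, list[str]]:
    """Round-robin each AI-drafted document to a reviewer in its practice area,
    so domain experts always review domain-specific outputs."""
    queues: dict[str, list[str]] = defaultdict(list)
    counters: dict[str, int] = defaultdict(int)
    for doc in ai_outputs:
        area = doc["practice_area"]
        pool = REVIEWERS[area]
        reviewer = pool[counters[area] % len(pool)]  # simple round-robin
        counters[area] += 1
        queues[reviewer].append(doc["doc_id"])
    return queues

if __name__ == "__main__":
    outputs = [
        {"doc_id": "draft-001", "practice_area": "mergers_acquisitions"},
        {"doc_id": "draft-002", "practice_area": "litigation"},
        {"doc_id": "draft-003", "practice_area": "mergers_acquisitions"},
    ]
    print(assign_reviews(outputs))
```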
LexisNexis employs an agentic AI approach where different models handle different aspects of legal work - OpenAI's o3 for deep research, Claude 3 Opus for drafting documents. (44:01) A planning agent receives queries and allocates tasks to the most appropriate specialized agents and models. This architecture allows the platform to leverage each model's strengths while providing a seamless user experience. The practical implication is that the future of professional AI likely involves orchestrated systems rather than single, monolithic models.
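A minimal sketch of that orchestration pattern appears below, with a hypothetical routing table and stub model clients; none of this is LexisNexis's actual code, and in practice the stubs would be API clients for the underlying providers.

```python
from typing import Callable

# Stub model clients; real implementations would call the providers' APIs.
def deep_research_model(task: str) -> str:
    return f"[research memo for: {task}]"

def drafting_model(task: str) -> str:
    return f"[drafted document for: {task}]"

# Planning agent's routing table: task type -> best-suited specialized model.
ROUTES: dict[str, Callable[[str], str]] = {
    "research": deep_research_model,
    "draft": drafting_model,
}

def plan_and_execute(query: str) -> list[str]:
    """Decompose a query into typed subtasks, then dispatch each subtask to
    the model suited to it. The decomposition here is hard-coded for
    illustration; a real planning agent would itself be a model call."""
    subtasks = [
        ("research", f"controlling case law for: {query}"),
        ("draft", f"motion incorporating research on: {query}"),
    ]
    return [ROUTES[task_type](task) for task_type, task in subtasks]

if __name__ == "__main__":
    for step in plan_and_execute("non-compete enforceability in California"):
        print(step)
```

The design choice the sketch illustrates is separation of planning from execution: the router stays cheap and swappable, so a stronger drafting or research model can be slotted in without touching the rest of the system.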
Unlike consumer AI tools that operate as "black boxes," LexisNexis opens up their system's logic, allowing attorneys to see and modify the reasoning process. (59:54) This transparency enables lawyers to understand how conclusions were reached and make adjustments when the AI gets something wrong. Sean emphasizes this as a core principle of their responsible AI development, alongside human oversight and bias prevention. For professional AI applications, explainability isn't just nice-to-have - it's essential for maintaining professional standards and accountability.
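One way to picture "opening up the logic" is to represent the agent's reasoning as plain data that an attorney can inspect and revise before the plan is re-run. The `AgentPlan` and `ReasoningStep` classes below are an invented illustration of that pattern, not the platform's API.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningStep:
    description: str
    approved: bool = True  # the attorney can veto or rewrite any step

@dataclass
class AgentPlan:
    steps: list[ReasoningStep] = field(default_factory=list)

    def show(self) -> None:
        """Expose the full reasoning chain instead of hiding it."""
        for i, step in enumerate(self.steps):
            flag = "x" if step.approved else " "
            print(f"[{flag}] {i}: {step.description}")

    def revise(self, index: int, new_description: str) -> None:
        """Attorney overrides a step before the plan is re-executed."""
        self.steps[index] = ReasoningStep(new_description)

if __name__ == "__main__":
    plan = AgentPlan([
        ReasoningStep("Search curated corpus for precedent on trade-secret claims"),
        ReasoningStep("Limit results to the 9th Circuit"),  # perhaps too narrow
        ReasoningStep("Draft argument section from surviving authorities"),
    ])
    plan.show()
    plan.revise(1, "Limit results to the 9th Circuit plus California state courts")
    plan.show()
```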