
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
This episode of This Week in Startups features two fascinating conversations about the intersection of AI and business. Host Alex Wilhelm first interviews Gabe Pereyra, co-founder and president of Harvey AI, exploring why the legal profession has become a hotspot for AI innovation. (02:38) Harvey has achieved remarkable growth, scaling from $50 million to $100 million ARR in just eight months while serving over 500 law firms. (18:37) The second half features Alex Atallah, co-founder and CEO of OpenRouter, discussing how his company provides unified API access to hundreds of AI models from dozens of providers. (30:41)
Co-founder and president of Harvey AI, one of the fastest-growing legal AI companies with an $8 billion valuation. Harvey recently raised $150 million from Andreessen Horowitz and has achieved remarkable scale, reaching $100 million ARR while serving over 500 law firms globally. Despite the company's massive success, Pereyra maintains a humble approach, still sleeping on a mattress on the floor.
Co-founder and CEO of OpenRouter, a unified API platform that provides access to hundreds of AI models from dozens of providers. OpenRouter has raised $40 million across seed and Series A rounds, with Andreessen Horowitz leading the seed and Menlo leading the Series A. The platform processes over 5.7 trillion tokens weekly and serves as a crucial distribution layer for AI model discovery and usage.
Gabe Pereyra explains that legal work shares striking similarities with coding - both require understanding specialized language and working with vast corpora of information. (06:54) Legal professionals, particularly junior associates, spend enormous amounts of time synthesizing information across different domains. They might need to understand pharmaceutical supply chains while researching historical litigation, then connect that to discovery documents and case law. This complex, multi-source workflow is what makes AI so valuable: legal professionals can now combine web search, legal databases like Lexis, and document analysis tools in a single integrated workflow.
A fascinating challenge emerges when AI works too quickly for traditional legal billing. (21:44) Pereyra notes that while there's inherent tension in the billable hour model when efficiency increases, smart law firms are finding ways to win. Some firms are moving toward fixed-fee arrangements or taking efficiency gains to serve more clients at scale. The key insight is that AI will likely enable law firms to "decouple revenue from headcount" and achieve software-like margins, potentially creating law firms 10 times larger than current ones.
Harvey's approach to enterprise clients reveals critical requirements for AI adoption in regulated industries. (15:16) The company must be completely "eyes off" regarding client data, never training on any client information. Law firms face additional complexity because their data actually belongs to multiple different clients, requiring sophisticated partitioning and governance systems. This creates opportunities for specialized training only when both the law firm and specific client consent to share data for model improvement.
Alex Atallah emphasizes that new AI models are released every 2-3 days, each with unique strengths for different tasks. (32:58) Specialized models trained for "resourcefulness" rather than deep world knowledge are emerging for long-running background tasks. OpenRouter's data shows that different models excel in different areas - for example, GrokCodeFast became popular because it's cheap and fast, while other models excel at tool calling or calendar management. This diversity means companies need infrastructure to easily test and switch between models rather than locking into a single provider.
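The "easily test and switch between models" idea above can be sketched in a few lines. OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so swapping models is just a string change in the request payload; the helper function and the specific model identifiers below are illustrative assumptions, not part of any official SDK.

```python
# Minimal sketch of provider-agnostic model switching through a unified,
# OpenAI-compatible endpoint such as OpenRouter's. The helper name and
# model identifiers are illustrative; no network call is made here.

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Build a chat-completion request; only the model string varies."""
    return {
        "url": OPENROUTER_URL,
        "headers": {"Authorization": "Bearer <YOUR_API_KEY>"},  # placeholder key
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# The same request shape works for any hosted model, so A/B testing a
# cheap, fast coding model against another provider's model means
# changing only the identifier:
req_a = build_request("x-ai/grok-code-fast-1", "Refactor this function...")
req_b = build_request("anthropic/claude-sonnet-4", "Refactor this function...")
```

Because every model sits behind the same request shape, routing a workload to a new release becomes a config change rather than an integration project - which is the lock-in avoidance Atallah describes.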
Traditional static benchmarks become worthless quickly, according to Atallah. (52:23) OpenRouter is building dynamic benchmarks that constantly update based on real user preferences and performance data. They track model accuracy in tool calling, user preferences, and actual usage patterns to create "high signal" recommendations. This approach treats benchmarking as a product itself, helping users discover which models work best for their specific use cases rather than relying on outdated static tests.
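One way to picture a benchmark that "constantly updates" rather than going stale is an exponentially weighted moving average over live outcomes. This is purely an illustrative sketch of the concept, not OpenRouter's actual scoring system.

```python
# Illustrative sketch (not OpenRouter's real implementation) of a dynamic
# benchmark: each model's score is an exponentially weighted moving average
# of recent observed outcomes, so fresh results continuously displace old ones.

def update_score(scores: dict, model: str, outcome: float, alpha: float = 0.1) -> None:
    """Blend one new observation (e.g. 1.0 = successful tool call,
    0.0 = failure) into the model's running score; alpha controls
    how quickly stale data decays."""
    prev = scores.get(model, outcome)  # seed with the first observation
    scores[model] = (1 - alpha) * prev + alpha * outcome

# Streaming in tool-calling results for a hypothetical model:
scores: dict = {}
for outcome in [1.0, 1.0, 0.0, 1.0]:
    update_score(scores, "model-a", outcome)
```

A static benchmark freezes one test set; a decaying average like this reflects what users see today, which is the "high signal" property Atallah is after.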