
Anthropic Product Head Mike Krieger joins the Big Technology Podcast to discuss how AI model development is accelerating and what to expect as the pace continues to intensify. (00:48) Krieger, who co-founded Instagram before joining Anthropic, explains how the company released Claude Sonnet 4.5 just months after the Claude 4 series, marking a significant acceleration in model releases. (00:54) He attributes this speed to improved customer feedback loops, streamlined operational processes, and enhanced engineering capabilities at scale. (02:02)
Mike Krieger is the Product Head at Anthropic and co-founder of Instagram. He helped grow Instagram from a 13-person team to its billion-dollar acquisition by Facebook, a showcase of scaling a product with a small team. (25:44) At Anthropic, he leads product development for the Claude AI models and has been instrumental in the company's rapid model iteration and enterprise adoption strategies.
Alex Kantrowitz is the host of Big Technology Podcast and runs a newsletter and website focused on nuanced conversations about the tech world. He has extensive experience in social media reporting, having worked at BuzzFeed covering the emergence of new social platforms and applications in the 2010s.
Anthropic's faster model releases stem from working more closely with end users and customers, creating rapid feedback cycles that identify specific areas for improvement. (02:02) Krieger explains that customers push models in interesting ways, revealing problems to tackle in next versions. For example, Claude 4 was good at writing code but got sidetracked over longer time horizons, leading to a major emphasis on extended task execution in Sonnet 4.5. (02:43) This approach transforms model development from purely research-driven to customer-problem-driven, creating urgency around fixing what Krieger calls "almost like bugs" in model capabilities.
Streamlining model release processes has been crucial to Anthropic's acceleration, with significant improvements in early access feedback, customer communication, and rollout execution. (03:13) Krieger notes that a customer praised their latest rollout as "the smoothest I've seen" among AI lab model releases. This operational up-leveling means each release no longer feels like a "very bespoke, very difficult process" but follows a predictable, smooth framework that research teams can rely on.
Rather than just scaling up data centers, Anthropic's gains come from combining algorithmic improvements with enhanced engineering capabilities to maximize compute utilization. (05:21) Krieger explains that running large training runs reliably at scale requires solving complex engineering and machine learning problems. The improvements between Sonnet 4 and 4.5 came largely from engineering advances that enabled scaling up post-training work, demonstrating how algorithmic work and compute scaling are deeply interconnected.
Anthropic has evolved from using Claude as autocomplete to deploying it as proactive agents that participate directly in workplace operations. (07:47) The company built "Claude On Call" using their Agent SDK, where Claude shows up first in incident channels, analyzes potential problems, and answers questions while engineers work on other tasks. (08:31) This represents a shift from reactive AI assistance to proactive collaboration, where Claude acts more like a coworker than a tool.
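The episode does not describe how "Claude On Call" is implemented internally, but a first-responder bot in this spirit can be sketched with the public `anthropic` Python SDK. Everything below is an assumption for illustration: the function names (`build_triage_prompt`, `triage`), the prompt wording, and the model string are all hypothetical, not Anthropic's actual system.

```python
# Hypothetical sketch of an incident-triage bot in the spirit of "Claude On Call".
# Not Anthropic's implementation; uses the public `anthropic` Python SDK only as
# an illustration of the pattern: Claude reads the incident context first and
# posts an initial analysis while engineers work on other tasks.

def build_triage_prompt(incident_title: str, recent_logs: list[str]) -> str:
    """Assemble the first-responder prompt for the incident channel."""
    log_excerpt = "\n".join(recent_logs[-20:])  # keep the prompt to recent lines
    return (
        "You are the first responder in an incident channel.\n"
        f"Incident: {incident_title}\n"
        f"Recent logs:\n{log_excerpt}\n"
        "Summarize likely root causes and suggest next diagnostic steps."
    )

def triage(incident_title: str, recent_logs: list[str]) -> str:
    """Send the triage prompt to Claude and return its analysis."""
    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-5",  # model identifier is an assumption
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": build_triage_prompt(incident_title, recent_logs),
            }
        ],
    )
    return response.content[0].text

if __name__ == "__main__":
    # Prompt construction can be inspected without an API key:
    print(build_triage_prompt(
        "API latency spike",
        ["timeout in us-east-1", "retry storm detected"],
    ))
```

A production version would be wired to the incident channel (e.g., via a chat webhook) and, as described in the episode, run as an agent loop rather than a single request.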
Anthropic trained memory capabilities directly into Claude rather than building them as external systems, allowing the model to understand and manage its own memory. (27:19) This means Claude can update its memory about users, retrieve relevant past interactions, and learn task preferences over time. (27:30) Krieger envisions Claude becoming like "a very competent new hire" that improves through use, remembering user preferences like podcast description formats and applying them automatically in future interactions. (29:22)