
Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the full episode for context.
In this episode, Alexander Embiricos, Product Lead for Codex at OpenAI, shares insights into building one of the most successful AI coding agents. (15:00) He discusses OpenAI's product development philosophy of shipping quickly and iterating on user feedback, and traces Codex's explosive growth, 20x since August, from serving millions to trillions of tokens weekly. The conversation also covers the remarkable 18-day development timeline for the Sora Android app, which became the #1 app in the App Store, and a vision of AI agents as proactive teammates rather than reactive tools.
Alexander leads product on Codex, OpenAI's powerful coding agent, which has grown 20x since August and now serves trillions of tokens weekly. Before joining OpenAI, he spent five years building a pair programming product for engineers and previously worked as a product manager at Dropbox. He now works at the frontier of AI-led software development, building what he describes as a software engineering teammate—an AI agent designed to participate across the entire development lifecycle.
Lenny is the host of Lenny's Podcast and author of Lenny's Newsletter, one of the most popular product management newsletters in the industry. He previously worked as a product manager at Airbnb and has become a leading voice in product management and startup growth strategies.
Unlike other coding tools where you might start with simple tasks, Codex is designed for professional use on complex problems. (64:17) Alexander recommends giving Codex your hardest coding challenges, such as debugging gnarly bugs or implementing complex features in large codebases. This approach helps users quickly understand Codex's true capabilities and builds trust through solving meaningful problems. The tool excels when you treat it like a smart intern: one that needs context but can handle sophisticated tasks once properly guided.
The most effective way to work with Codex mirrors how you'd onboard a new teammate. (67:55) Start by having it understand your codebase, then collaborate on formulating plans, and gradually build up to more complex tasks. This trust-building process helps users learn effective prompting techniques while establishing boundaries for what the AI can and cannot do reliably. The key is treating it as a collaborative partner rather than a magic solution.
The biggest bottleneck to AGI-level productivity isn't model capability; it's human typing speed and review processes. (71:19) Alexander points out that while AI can generate code quickly, humans become the constraint in validating and reviewing that work. The most impactful improvements come from optimizing these human-AI feedback loops, making code review more efficient, and enabling AI to validate its own work before human review.
As coding agents become more capable, the role of engineers is evolving from code writers to code reviewers and system architects. (34:22) While writing code is often the fun part of engineering, reviewing AI-generated code can be less engaging. Smart teams are building systems where AI can validate its own work and provide confidence indicators to humans, making the review process more efficient and enjoyable.
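To make that idea concrete, here is a rough sketch of what such a self-validation gate could look like: an agent's changes are run through tests, a linter, and a type checker, and the results are summarized as a simple confidence signal before a human reviews the diff. The specific tools (pytest, ruff, mypy), the scoring, and the report format are illustrative assumptions, not a workflow described in the episode.

```python
"""Illustrative sketch only: a pre-review gate for agent-generated changes.

Assumes pytest, ruff, and mypy are installed in the project environment;
the checks, scoring, and report format are hypothetical examples.
"""
import json
import subprocess


def run_check(name: str, cmd: list[str]) -> dict:
    """Run one validation command and record whether it passed."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return {
        "check": name,
        "passed": result.returncode == 0,
        # Keep only the tail of the output so the reviewer sees the failure summary.
        "output_tail": (result.stdout + result.stderr)[-2000:],
    }


def validate_change() -> dict:
    """Run the validation suite and summarize it for the human reviewer."""
    checks = [
        run_check("unit tests", ["pytest", "-q"]),
        run_check("lint", ["ruff", "check", "."]),
        run_check("types", ["mypy", "."]),
    ]
    passed = sum(c["passed"] for c in checks)
    return {
        "checks": checks,
        # Crude confidence signal: the fraction of checks that passed.
        "confidence": passed / len(checks),
        "ready_for_human_review": passed == len(checks),
    }


if __name__ == "__main__":
    print(json.dumps(validate_change(), indent=2))
```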
Coding is becoming a core competency for any AI agent, as writing code is the most effective way for models to use computers. (29:07) This means coding skills will be valuable across many non-engineering roles. PMs, designers, and other functions are already using Codex for prototyping, data analysis, and building tools. The future workplace will likely require basic coding literacy as AI makes programming more accessible to everyone.