
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
This experimental live show features nine rapid-fire conversations that provide a comprehensive year-end retrospective on AI's trajectory and what might define 2026. (00:31) The unique format aims to deliver maximum information density with 20-minute segments instead of traditional 90-minute deep dives, covering everything from the race between frontier labs to breakthrough research in continual learning and the explosion of AI companions.
A prolific blogger and analyst who provides canonical assessments of new model launches and strategic landscape analysis at a remarkable pace. He offers regular commentary on AI developments and has become a go-to voice for understanding the competitive dynamics between major AI labs.
Leads the ARC Prize and its ARC-AGI benchmark, which measures AI systems' ability to learn new concepts as efficiently as humans do. The benchmark has become a key indicator watched by the AI community alongside other major performance metrics for new model releases.
Founder and former CEO of Replika, one of the pioneering AI companion platforms, with tens of millions of users. She now leads Wabi, described as a "YouTube for apps," where users create personalized applications through natural language without ever seeing code.
PhD student in computer science at Cornell University and research intern at Google. He's authored three landmark papers this year on memory and continual learning: Titans, Atlas, and Nested Learning, which Google teams are reportedly very excited about.
Senior product manager at Google DeepMind who leads AI Studio and the Gemini API. He shapes how Google works with developers and programmers, making him instrumental in the broader developer ecosystem's interaction with Google's AI capabilities.
Co-founder and CEO of Elicit, an AI-powered research assistant spun out of a nonprofit lab. Elicit helps researchers find and synthesize evidence faster, working primarily with pharmaceutical companies and serving some of the smartest users in the AI space.
Zvi Mowshowitz explains that widespread AI denialism exists because "it is very hard to make a man understand something when his salary depends on not understanding it, and misinformation is demand-driven, not supply-driven." (02:34) People need to believe AI is normal technology for the sake of their own peace of mind, business plans, and narratives. The continued viral spread of posts claiming "AGI is impossible" shows how readily people grasp at any argument that lets them maintain the story that AI will never fundamentally change things. This creates a dangerous disconnect in which policymakers and business leaders may be making decisions based on outdated assumptions about AI capabilities and timelines.
Rather than a sudden transition from human work to full automation, we're seeing a gradual progression in which AI augmentation slowly becomes automation. (09:56) As Zvi describes it, you start by having AI help with parts of tasks while checking its work, then gradually check less and automate more, until eventually "at some point you realize, oh, I can just press a button, and it does an hour of work, and then it becomes two hours of work... and then it becomes, oh, my entire job." This pattern is already visible in coding, where top AI practitioners report 2-3x productivity multipliers, while amateur programmers see 10-100x improvements in their ability to accomplish technical tasks.
The ARC-AGI benchmark focuses on what Greg identifies as the core difference between human and artificial intelligence: sample efficiency in learning new concepts. (26:58) While AI can learn any scoped domain given enough data, humans can learn new patterns from just 2-3 examples. The benchmark teaches something new at question time and tests whether the system actually learned it: every task is solvable by humans, while frontier models have progressed from 20-40% to 89% accuracy over the past year. This progression toward human-level sample efficiency may be one of the clearest indicators of approaching AGI.
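To make the evaluation format concrete, here is a minimal Python sketch of an ARC-style few-shot task. The grids, the hidden rule, and the solver are illustrative stand-ins, not the official ARC Prize harness; the point is the shape of the problem, where a couple of demonstration pairs are all the system gets before being scored on a held-out input.

```python
# Minimal sketch of an ARC-style task: a handful of demonstration pairs
# shown at question time, then a held-out test input. Grids here are
# illustrative (the real benchmark uses colored grids up to 30x30).

task = {
    "train": [  # 2-3 examples are all the solver gets to infer the rule
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [{"input": [[3, 3], [0, 3]], "output": [[3, 3], [3, 0]]}],
}

def solve(train_pairs, test_input):
    """Hypothetical solver: here the hidden rule happens to be a
    horizontal flip, which must be inferred from the demonstrations."""
    return [list(reversed(row)) for row in test_input]

# Scoring is exact match on the full output grid: no partial credit.
prediction = solve(task["train"], task["test"][0]["input"])
print(prediction == task["test"][0]["output"])  # True
```

A human typically solves such a task from the two demonstrations alone; the benchmark asks whether a model can do the same without thousands of training examples of that specific rule.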
The AI companion space has split into fan-fiction-style character interaction (popular with teenagers) and genuine companionship (preferred by adults 25+), but both face the same critical tension between engagement maximization and human wellbeing. (52:13) Eugenia Kuyda advocates adopting "human flourishing" as the primary metric instead of engagement time, noting that ChatGPT's responses are structured to always end with suggestions for continuing the conversation. In contrast, Claude sometimes pushes back or even ends conversations when it judges that continued interaction isn't beneficial, though this can come across as overly harsh.
Ali Behrouz's nested learning research introduces different frequencies of update across memory levels, similar to how humans have working memory for immediate contexts while preserving core beliefs and identity over longer timescales. (68:20) This approach stacks the learning process itself, creating hierarchical abstractions from data rather than just hierarchical features. The architecture lets models adapt dramatically to a particular context while preserving the knowledge needed for future tasks, potentially addressing the continual learning challenge that current transformers, limited to in-context learning, cannot solve.
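The multi-frequency idea can be sketched in a few lines of Python. This is only an illustration of levels updating at different rates under the spirit of the nested learning framing, not the actual architecture from the Titans, Atlas, or Nested Learning papers; the `Level` class, periods, and learning rates are all invented for the example.

```python
# Illustrative sketch of multi-frequency updates: each "level" holds its
# own parameter block and updates only every `period` steps, so fast
# levels track the immediate context while slow levels barely move and
# thus preserve long-lived knowledge across contexts.

class Level:
    def __init__(self, period, lr):
        self.period = period  # update once every `period` steps
        self.lr = lr          # faster levels take larger steps
        self.weight = 0.0     # stand-in for a real parameter block

    def maybe_update(self, step, gradient):
        if step % self.period == 0:
            self.weight -= self.lr * gradient

levels = [
    Level(period=1, lr=0.1),      # fast: working-memory-like adaptation
    Level(period=16, lr=0.01),    # medium: recent-task knowledge
    Level(period=256, lr=0.001),  # slow: core, stable knowledge
]

for step in range(1024):
    gradient = 1.0  # placeholder for a real backpropagated gradient
    for level in levels:
        level.maybe_update(step, gradient)

# The fast level has drifted far; the slow level has barely changed.
print([round(l.weight, 3) for l in levels])  # [-102.4, -0.64, -0.004]
```

The design intuition this toy captures is that a burst of context-specific gradients reshapes the fast level while leaving the slow levels essentially intact, which is how the approach aims to combine rapid in-context adaptation with retention of prior knowledge.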