PodMine
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis•December 18, 2025

AI 2025 → 2026 Live Show | Part 1

A year-end live show featuring nine rapid-fire conversations exploring AI's landscape in 2025-2026, with discussions ranging from AI safety and technological unemployment to scientific research, continual learning architectures, and the evolving capabilities of frontier AI models.
Topics: AI & Machine Learning, Indie Hackers & SaaS Builders, Tech Policy & Ethics, Developer Culture, Web3 & Crypto
Featured speakers: Zvi Mowshowitz, Greg, Eugenia Kuyda

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are approximate and may be slightly off; we encourage you to listen to the episode for full context.


Podcast Summary

This experimental live show features nine rapid-fire conversations that together form a year-end retrospective on AI's trajectory and what might define 2026. (00:31) The format trades the show's traditional 90-minute deep dives for roughly 20-minute segments to maximize information density, covering everything from the race between frontier labs to breakthrough research in continual learning and the explosion of AI companions.

  • Main Theme: The episode captures AI's transition moment between eras, examining both the denialism gap in public discourse and the technical breakthroughs that suggest we're approaching transformative capabilities across multiple domains.

Speakers

Zvi Mowshowitz

A prolific blogger and analyst who provides canonical assessments of new model launches and strategic landscape analysis at a remarkable pace. He offers regular commentary on AI developments and has become a go-to voice for understanding the competitive dynamics between major AI labs.

Greg (ARC-AGI Prize Lead)

Leads the ARC-AGI Prize, a competition built around a benchmark that measures whether AI systems can learn new concepts as efficiently as humans. The benchmark has become a key indicator the AI community watches alongside other major performance metrics for new model releases.

Eugenia Kuyda

Founder and former CEO of Replika, one of the pioneering AI companion platforms with tens of millions of users. She now leads Wabi, described as a "YouTube for apps" where users can create personalized applications through natural language without seeing code.

Ali Behrouz

PhD student in computer science at Cornell University and research intern at Google. He's authored three landmark papers this year on memory and continual learning: Titans, Atlas, and Nested Learning, which Google teams are reportedly very excited about.

Logan Kilpatrick

Senior product manager at Google DeepMind who leads AI Studio and the Gemini API. He shapes how Google works with developers, making him instrumental in how the broader developer ecosystem interacts with Google's AI capabilities.

Jungwon Hwang

Co-founder and CEO of Elicit, an AI-powered research assistant spun out of a nonprofit lab. Elicit helps researchers find and synthesize evidence faster, working primarily with pharmaceutical companies and serving some of the smartest users in the AI space.

Key Takeaways

The Denialism Gap Reflects Economic Incentives, Not Technical Reality

Zvi Mowshowitz explains that widespread AI denialism exists because "it is very hard to make a man understand something when his salary depends on not understanding it, and misinformation is demand driven, not supply driven." (02:34) People need to believe AI is normal technology for their own cognitive peace, business plans, and narratives. The continued viral spread of posts claiming "AGI is impossible" shows how readily people grasp onto any argument that lets them maintain the story that AI will never fundamentally change things. This creates a dangerous disconnect in which policymakers and business leaders may be making decisions based on outdated assumptions about AI capabilities and timelines.

Augmentation Precedes Automation in a Continuous Process

Rather than a sudden transition from human work to full automation, we're seeing a gradual progression where AI augmentation slowly becomes automation. (09:56) As Zvi describes, you start by having AI help with parts of tasks while checking its work, then gradually check less and automate more, until eventually "at some point you realize, oh, I can just press a button, and it does an hour of work, and then it becomes two hours of work... and then it becomes, oh, my entire job." This pattern is already visible in coding where top AI practitioners report 2-3x productivity multipliers, while amateur programmers see 10-100x improvements in their ability to accomplish technical tasks.

Human-Level Sample Efficiency Remains the Key Intelligence Benchmark

The ARC-AGI benchmark focuses on what Greg identifies as the core difference between human and artificial intelligence: sample efficiency in learning new concepts. (26:58) While AI can learn any scoped domain given enough data, humans can learn new patterns from just 2-3 examples. The benchmark teaches something new at question time and tests whether the system learned it; every task is solvable by humans, while frontier models have progressed from 20-40% to 89% accuracy over one year. This progression toward human-level sample efficiency may be one of the clearest indicators of approaching AGI.
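
To make "learning at question time" concrete, here is a minimal toy sketch of the evaluation idea, not the actual ARC Prize harness: a solver is shown only two demonstration grids, searches a small set of candidate transformations for one consistent with both, and is scored on a held-out test input. The grids and candidate rules below are invented for illustration.

```python
# Toy illustration of the sample-efficiency idea behind ARC-style tasks:
# infer a transformation from 2-3 demonstration pairs, then apply it to a
# held-out test grid. Hypothetical sketch, not the ARC Prize codebase.
import numpy as np

# A small space of candidate transformations the toy solver can hypothesize.
CANDIDATE_RULES = {
    "identity": lambda g: g,
    "flip_horizontal": lambda g: np.fliplr(g),
    "flip_vertical": lambda g: np.flipud(g),
    "rotate_90": lambda g: np.rot90(g),
}

def infer_rule(demonstrations):
    """Return the first candidate rule consistent with every demo pair."""
    for name, rule in CANDIDATE_RULES.items():
        if all(np.array_equal(rule(x), y) for x, y in demonstrations):
            return name, rule
    return None, None

# The task is defined at question time by just two demonstration pairs.
demos = [
    (np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]])),
    (np.array([[2, 3], [0, 1]]), np.array([[3, 2], [1, 0]])),
]
test_input = np.array([[5, 0], [0, 7]])

name, rule = infer_rule(demos)
print(f"inferred rule: {name}")  # flip_horizontal
print(rule(test_input))          # the solver's answer for the test grid
```

A real solver searches an open-ended space of programs or uses a learned model rather than a fixed rule list; the point here is only that the task is specified by a handful of examples at test time.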

AI Companion Engagement Requires Human Flourishing Metrics

The AI companion space has split into fan fiction-style character interaction (popular with teenagers) and genuine companionship (preferred by adults 25+), but both face the critical challenge of engagement maximization versus human wellbeing. (52:13) Eugenia Kuyda advocates for adopting "human flourishing" as the primary metric instead of engagement time, noting that ChatGPT's responses are structured to always end with suggestions for continuing conversation. In contrast, Claude sometimes pushes back or even ends conversations when it believes continued interaction isn't beneficial, though this can sometimes come across as overly harsh.

Nested Learning Enables Multilevel Memory Architecture

Ali Behrouz's nested learning research introduces different update frequencies across memory levels, similar to how humans have working memory for immediate contexts while preserving core beliefs and identity over longer timescales. (68:20) This approach stacks the learning process itself, creating hierarchical abstractions from data rather than just hierarchical features. The architecture lets models adapt dramatically to a particular context while preserving the knowledge needed for future tasks, potentially addressing the continual learning challenge that current transformers handle only through their limited in-context learning abilities.
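
For intuition, here is a deliberately simplified sketch of the "different update frequencies" idea: fast parameters adapt every step (working memory for the current context) while slow parameters update rarely, from an aggregated signal (longer-term knowledge). The toy objective, learning rates, and reset rule are invented for illustration; this is not the Titans, ATLAS, or Nested Learning implementation.

```python
# Toy multi-frequency update loop: a "fast" memory level updated every step
# and a "slow" level updated every k steps from an averaged gradient.
# Illustrative sketch only, not the published architectures.
import numpy as np

rng = np.random.default_rng(0)

fast_weights = np.zeros(4)   # high-frequency level: adapts to the current context
slow_weights = np.zeros(4)   # low-frequency level: preserves knowledge across contexts
slow_update_every = 8        # the slow level updates less often, from an averaged signal
accumulated_grad = np.zeros(4)

def gradient(weights, x, y):
    """Gradient of a squared error for a linear predictor (toy objective)."""
    pred = x @ weights
    return 2.0 * (pred - y) * x

for step in range(1, 33):
    x = rng.normal(size=4)
    y = x @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1)

    # Both levels contribute to the prediction; only their update cadence differs.
    grad = gradient(fast_weights + slow_weights, x, y)

    fast_weights -= 0.05 * grad           # updated at every step
    accumulated_grad += grad

    if step % slow_update_every == 0:     # updated rarely, from the averaged gradient
        slow_weights -= 0.01 * (accumulated_grad / slow_update_every)
        accumulated_grad[:] = 0.0
        fast_weights *= 0.5               # fast level partially resets between "contexts"

print("fast:", np.round(fast_weights, 3))
print("slow:", np.round(slow_weights, 3))
```

The separation of timescales is the point: the fast level can swing aggressively to fit whatever is in front of it, while the slow level only absorbs what persists across many steps, which is one way to adapt strongly to a context without overwriting longer-term knowledge.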

Statistics & Facts

  1. ARC-AGI cost efficiency improved roughly 390x over one year, with GPT-5.2 achieving 89% accuracy at dramatically lower token cost than the initial 87% result, which cost approximately $1,000 per task. (25:56)
  2. Character AI and similar platforms report engagement levels as high as 90 minutes per day per user, with three quarters of teens having used AI companions and more than half using them regularly. (49:02)
  3. On OpenAI's new Frontier Science benchmark, models achieve 70% accuracy on Olympiad-level questions but only 25-30% accuracy on PhD-level research tasks that real scientists are working on. (107:16)

