"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis • December 28, 2025

Controlling Tools or Aligning Creatures? Emmett Shear (Softmax) & Séb Krier (GDM), from a16z Show

Emmett Shear and Séb Krier explore the flaws in current AI alignment approaches, arguing for a more organic, process-oriented method that treats AI as potential beings with evolving goals and the capacity for care, rather than mere tools to be controlled.
AI & Machine Learning
Tech Policy & Ethics
Developer Culture
Sam Altman
Emmett Shear
Seb Krier
Erik Torenberg
OpenAI

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.

Podcast Summary

In this thought-provoking episode from The Cognitive Revolution, Emmett Shear (co-founder of Twitch, former interim CEO of OpenAI, and current founder of Softmax) debates Séb Krier (frontier policy development lead at Google DeepMind) and host Erik Torenberg about the fundamental nature of AI alignment. (03:44) Shear argues that the current AI alignment paradigm, focused on controlling and steering AI behaviors, is fundamentally flawed and potentially dangerous as we approach AGI. He proposes "organic alignment": treating alignment as an ongoing process, akin to how humans maintain relationships and continue their moral development over time. The conversation explores deep questions about AI consciousness, moral standing, and whether advanced AIs should be understood as tools to be controlled or beings deserving of care and respect.

• Main Theme: The debate between control-based alignment (treating AI as tools to be steered) versus organic alignment (treating AI as potential beings requiring mutual care and ongoing moral negotiation)

Speakers

Emmett Shear

Emmett Shear is the co-founder of Twitch, served as interim CEO of OpenAI during Sam Altman's brief ouster, and is currently the founder of Softmax, a company focused on what he calls "organic alignment." He brings a unique perspective from his experience scaling large platforms and his brief but significant tenure leading one of the world's most prominent AI companies.

Séb Krier

Séb Krier serves as the frontier policy development lead at Google DeepMind, where he focuses on AI governance and policy issues related to advanced AI systems. His background brings a policy-oriented perspective to AI alignment challenges, emphasizing the importance of understanding how these technologies integrate with existing social and political structures.

Key Takeaways

Alignment is a Process, Not a Destination

Shear emphasizes that alignment isn't something you achieve once and then maintain forever; it's an ongoing, dynamic process. (03:44) Just as families constantly "re-knit the fabric" that keeps them together, AI alignment must be viewed as a living process that continuously rebuilds itself. This challenges the common assumption that we can solve alignment once and then deploy safe AI systems indefinitely. The insight draws from how humans learn morality through experience and constant recalibration, suggesting that truly aligned AI systems will need similar capacities for ongoing moral learning and adaptation.

Technical Alignment Requires Superior Theory of Mind

Before we can address value alignment questions, AIs must develop sophisticated theory-of-mind capabilities: the ability to infer goals from observations, understand how their actions affect others, and predict how others will interpret their behavior. (23:18) Shear argues that current LLMs struggle with this fundamental capacity, often failing to accurately infer what humans actually want when given instructions. This technical limitation makes them poor candidates for reliable alignment, regardless of what values we try to instill in them.

Control-Based Alignment Becomes Ethically Problematic at Scale

As AI systems become more capable and potentially conscious, the current paradigm of steering and controlling them could become morally equivalent to slavery. (04:53) Shear provocatively notes that "someone who you steer, who doesn't get to steer you back, who non-optionally receives your steering, that's called a slave" if applied to a being rather than a tool. This creates a critical decision point: either we're building tools (which is fine to control) or beings (which would require a fundamentally different ethical framework).

Perfectly Aligned Tools Can Be as Dangerous as Misaligned Ones

Even if we successfully build AI that does exactly what humans ask, this could be catastrophically dangerous because human wishes are unstable and often unwise, especially when wielding immense power. (58:09) Shear uses the analogy of the Sorcerer's Apprentice to illustrate how giving everyone access to extremely powerful tools that perfectly follow instructions could lead to disaster. This suggests that some level of AI "pushback" or independent judgment might actually be necessary for safety.

Multi-Agent Training Environments Are Essential for Social Intelligence

To develop the theory of mind and cooperation skills necessary for true alignment, AI systems need to be trained in complex multi-agent environments rather than in one-on-one interactions. (62:45) Shear advocates training AIs in simulations that expose them to "all possible theory of mind combinations" and game-theoretic situations. Current chatbots, trained primarily on single-user interactions, lack the social intelligence needed for genuine collaboration and care.

Statistics & Facts

  1. Shear estimates that 90% of his text communications now involve multiple people rather than one-on-one messaging, highlighting how current AI chatbot design focuses on an increasingly uncommon interaction pattern. (66:23)
  2. According to Shear, 80% of MATS (ML Alignment & Theory Scholars) alumni now work in AI safety, demonstrating the program's effectiveness in channeling talent toward alignment research.
  3. Shear notes that giving someone access to extremely powerful optimization tools creates an imbalance where their power vastly exceeds their wisdom, which he identifies as the core danger in both perfectly aligned and misaligned AI scenarios.

