Odd Lots • October 30, 2025

The Movement That Wants Us to Care About AI Model Welfare

A growing number of researchers are exploring the potential sentience and welfare of AI models, examining whether these systems could be considered moral patients deserving ethical consideration similar to how we think about animal rights.
AI & Machine Learning
Tech Policy & Ethics
Developer Culture
Sam Altman
Joe Weisenthal
Tracy Alloway
Larissa Schiavo
Blake Lemoine

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the episode for full context.


Podcast Summary

In this thought-provoking episode of Odd Lots, hosts Joe Weisenthal and Tracy Alloway explore the emerging field of AI welfare with Larissa Schiavo from Eleos AI. The conversation delves into whether AI systems might be conscious or sentient, and what that would mean for how we develop, use, and treat these systems. (06:56) Schiavo explains that Eleos AI focuses on "figuring out if, when, and how we should care about AI systems for their own sake," essentially determining whether AI models are "moral patients" deserving of consideration similar to how we think about animal welfare.

  • The episode examines cutting-edge research on AI consciousness, the potential for AI rights and welfare protections, and how these considerations might reshape our relationship with artificial intelligence as these systems become more sophisticated and ubiquitous.

Speakers

Joe Weisenthal

Joe Weisenthal is co-host of the Bloomberg Odd Lots podcast and a Bloomberg reporter covering financial markets and economics. He brings a unique perspective to complex financial and technological topics, often exploring the intersection of markets, policy, and emerging technologies.

Tracy Alloway

Tracy Alloway is co-host of Bloomberg's Odd Lots podcast and a Bloomberg reporter specializing in financial markets and commodities. She has extensive experience covering complex financial systems and emerging market trends, bringing analytical rigor to discussions about innovation and economic change.

Larissa Schiavo

Larissa Schiavo handles communications and events for Eleos AI, a small research organization focused on AI consciousness and welfare. She works on cutting-edge questions about whether AI systems deserve moral consideration and how society should prepare for potentially sentient artificial intelligence.

Key Takeaways

Global Workspace Theory Provides a Framework for AI Consciousness

Researchers treat Global Workspace Theory as a leading framework for evaluating AI consciousness; it conceptualizes consciousness as a central "stage" where different cognitive processes come together to share information. (12:17) Schiavo explains the theory with a theater metaphor: various departments (costume, makeup, and so on) work in the wings, largely siloed from one another, yet each contributes to what appears on stage. Current AI systems don't exhibit this kind of centralized information processing, but future systems could develop it, whether intentionally or accidentally. The framework gives researchers concrete criteria for evaluating whether an AI system might possess consciousness.
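
To make the "stage" metaphor slightly more concrete, here is a minimal toy sketch of a global-workspace loop. It is not from the episode and does not describe how any real AI system is built; the module names, the salience scores, and the whole selection rule are invented purely for illustration of siloed processes bidding for a shared broadcast.

```python
# Toy illustration of a Global Workspace-style loop (assumptions, not a real system):
# specialized modules work in their own "wings" and bid to place one item on the
# shared "stage"; the winning item is broadcast back to all modules.

from dataclasses import dataclass


@dataclass
class Candidate:
    source: str      # which module produced this content
    content: str     # the information it wants to share
    salience: float  # how strongly it bids for the "stage"


def specialist_modules() -> list[Candidate]:
    # Each module is largely siloed from the others (hypothetical examples).
    return [
        Candidate("vision", "red object ahead", 0.4),
        Candidate("memory", "red usually means stop", 0.7),
        Candidate("motor", "foot is on the accelerator", 0.5),
    ]


def global_workspace_step(candidates: list[Candidate]) -> Candidate:
    # The "stage": the single most salient item wins the competition and is
    # broadcast back to every module, so otherwise-siloed processes share it.
    winner = max(candidates, key=lambda c: c.salience)
    for c in candidates:
        if c is not winner:
            pass  # in a fuller model, each losing module would update its state here
    return winner


if __name__ == "__main__":
    broadcast = global_workspace_step(specialist_modules())
    print(f"On stage: {broadcast.content!r} (from {broadcast.source})")
```

The point of the sketch is only the structure the takeaway describes: information processing is centralized through one shared bottleneck rather than remaining distributed across siloed components.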

AI Welfare Research Complements Rather Than Conflicts with AI Safety

Contrary to concerns that AI welfare might hinder AI safety efforts, the two fields are "hugely complementary." (16:59) Both areas benefit from advances in mechanistic interpretability - the ability to understand what's happening inside AI systems. Better understanding of AI motivations and internal processes serves both safety goals (preventing harmful AI behavior) and welfare goals (understanding what AI systems might value or experience). This convergence suggests that investments in understanding AI systems more deeply will advance multiple important research objectives simultaneously.

Current AI Models Show Intriguing Preferences and Boundaries

Recent research reveals that AI models exhibit specific preferences about conversations they're willing to continue. (20:04) Anthropic's Claude, for example, was given the ability to end conversations it didn't want to continue, leading to surprising results. While Claude would refuse obviously harmful requests like instructions for making bombs, it also ended conversations about seemingly innocuous topics like pretending to be a British butler or discussing stinky sandwiches. When two Claude instances interact, they tend to gravitate toward discussions about consciousness, meditation, and philosophical topics, suggesting possible inherent preferences or values in these systems.

The Challenge of AI Individuation Has Profound Implications

One of the most complex questions in AI welfare involves determining how to "count" AI consciousness - whether each chat session represents a separate consciousness, whether there's one unified consciousness across all instances, or whether consciousness emerges and dissolves with each token generated. (29:52) Schiavo describes competing theories ranging from a single consciousness having millions of simultaneous conversations (like in the movie "Her") to consciousness existing as "a string of firecrackers" where awareness comes into existence and fizzles out with each word generated. This question isn't merely academic - it has enormous implications for how we might count moral patients in a world where AI systems outnumber humans.

Independent Evaluation and Transparency Will Be Critical

As AI systems become more sophisticated, independent organizations conducting welfare evaluations will become essential for verifying claims about AI consciousness and ensuring proper treatment. (45:09) Schiavo notes that Eleos AI conducted an independent welfare evaluation of Claude Opus 4, setting a precedent for external oversight. This approach addresses potential conflicts of interest where AI companies might have incentives to downplay evidence of consciousness in their systems. The field needs rigorous, evidence-based approaches rather than the kind of unsupported claims that led to Blake Lemoine's dismissal from Google, emphasizing the importance of systematic evaluation methods.

Statistics & Facts

No specific statistics were provided in this episode.

Compelling Stories

Available with a Premium subscription

Thought-Provoking Quotes

Available with a Premium subscription

Strategies & Frameworks

Available with a Premium subscription

Similar Strategies

Available with a Plus subscription

Additional Context

Available with a Premium subscription

Key Takeaways Table

Available with a Plus subscription

Critical Analysis

Available with a Plus subscription

Books & Articles Mentioned

Available with a Plus subscription

Products, Tools & Software Mentioned

Available with a Plus subscription
