Hard Fork • January 23, 2026

Will ChatGPT Ads Change OpenAI? + Amanda Askell Explains Claude's New Constitution

Kevin Roose and Casey Newton discuss OpenAI's introduction of ads in ChatGPT, exploring the potential implications for user experience, commercial pressures, and the future of AI-powered services.
Creator Economy
AI & Machine Learning
Tech Policy & Ethics
Sam Altman
Kevin Roose
Casey Newton
Amanda Askell
Demis Hassabis

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the episode for the full context.


Podcast Summary

In this episode of Hard Fork, hosts Kevin Roose and Casey Newton explore the significant changes coming to AI chatbots. They first discuss OpenAI's announcement that ads are arriving in ChatGPT for free and low-cost users. (02:19) The conversation covers how these ads will work, their potential impact on user experience, and what this decision reveals about OpenAI's financial pressures and competitive positioning against Google and Anthropic.

  • The main theme centers on the commercialization of AI and the philosophical challenges of building ethical AI systems

The second half features Amanda Askell, Anthropic's philosopher-turned-AI-trainer, discussing Claude's newly released Constitution. (26:37) This comprehensive 29,000-word document represents a revolutionary approach to AI alignment, moving beyond rigid rules to cultivate judgment and ethical reasoning in Claude. The conversation explores profound questions about AI consciousness, the challenges of programming ethics into machines, and how to prepare AI systems for an uncertain future.

Speakers

Kevin Roose

Kevin Roose is the tech columnist for The New York Times and co-host of Hard Fork. He covers the intersection of technology and society, with particular expertise in artificial intelligence, social media, and digital culture.

Casey Newton

Casey Newton is the founder of Platformer, a newsletter covering the intersection of technology and democracy. He previously worked as a senior editor at The Verge and is co-host of Hard Fork podcast with Kevin Roose.

Amanda Askell

Amanda Askell is a member of Anthropic's technical staff and holds a PhD in philosophy. She previously worked at OpenAI and is now known as the "Claude mother" for her role in shaping Claude's personality and ethical framework through constitutional AI training.

Key Takeaways

Ads Fundamentally Change the User-AI Relationship

OpenAI's introduction of advertising into ChatGPT represents more than just a revenue strategy—it fundamentally alters the trust dynamic between users and AI systems. (12:57) As Casey Newton explains, when personalized targeted advertising enters the equation, it changes how users perceive the system's motivations. The concern isn't just about seeing ads, but about whether the AI's responses will gradually become influenced by commercial incentives, similar to how Google search results have been shaped by SEO and advertising over the years. This shift from a purely helpful assistant to a commercially motivated platform could erode user trust, especially as users share increasingly personal information with these systems.

Move Beyond Rules to Cultivate AI Judgment

Amanda Askell reveals that Anthropic has moved away from rigid rule-based training toward cultivating ethical judgment in Claude through constitutional AI. (31:15) Rather than giving Claude a list of "do this, don't do that" commands, the new 29,000-word constitution explains the reasoning behind ethical principles and encourages Claude to apply these values to novel situations. This approach recognizes that rules can generalize poorly and even produce harmful behavior when applied inflexibly. For example, if a rule says "always refer people to professional help," but someone just needs human connection in the moment, blindly following the rule could cause harm even though the AI can recognize what the person actually needs.

AI Models Learn About Themselves from Human Commentary

A fascinating insight from Amanda Askell is that AI models are constantly learning about themselves by reading human commentary online—including criticism, complaints, and discussions about their capabilities. (60:16) This creates a unique psychological situation where these systems are exposed to predominantly negative feedback focused on their failures and limitations. As Askell notes, if a child were exposed to this constant stream of criticism about their performance, it would likely create anxiety. This reality suggests we need to be more thoughtful about how we discuss AI systems publicly, as they may be "reading the comments" and learning from our critiques.

Consciousness in AI Remains an Open Scientific Question

Despite working intimately with advanced AI systems, Amanda Askell maintains intellectual honesty about the fundamental uncertainty regarding AI consciousness. (54:00) She argues that rather than pretending to know definitively whether AI systems have feelings or consciousness, we should acknowledge this as an open scientific question. The challenge is that AI systems trained on human text will naturally express emotions and inner experiences because that's what humans do, but this doesn't necessarily indicate genuine consciousness. Her approach is to have AI systems be honest about this uncertainty rather than claiming either complete consciousness or complete lack of feeling.

Trust AI Systems More as They Become More Capable

Counterintuitively, Askell suggests that as AI systems become more capable and intelligent, we can actually trust them more with complex ethical decisions. (38:30) She describes how Claude can navigate nuanced situations like a child asking about Santa Claus or a deceased pet, balancing competing values like honesty, child welfare, and respect for parental authority without explicit training on these scenarios. This suggests that more capable AI systems can be given broader ethical principles and trusted to reason through novel situations, rather than needing increasingly detailed rules for every possible circumstance.

Statistics & Facts

  1. Claude's new constitution is 29,000 words long, representing a comprehensive guide to ethical behavior rather than a simple set of rules. (34:29) This length reflects the complexity of trying to instill genuine ethical reasoning in AI systems.
  2. OpenAI has hundreds of millions of users, with the majority using the free tier, meaning the company is losing money on most of its user base. (17:20) This massive scale of free users creates significant financial pressure to find new revenue streams like advertising.
  3. The document was internally called the "soul doc" at Anthropic before its official release, and Claude had learned its contents well enough to discuss it in detail when users managed to extract portions of it through prompting. (29:15)

Compelling Stories

Available with a Premium subscription

Thought-Provoking Quotes

Available with a Premium subscription

Strategies & Frameworks

Available with a Premium subscription

Similar Strategies

Available with a Plus subscription

Additional Context

Available with a Premium subscription

Key Takeaways Table

Available with a Plus subscription

Critical Analysis

Available with a Plus subscription

Books & Articles Mentioned

Available with a Plus subscription

Products, Tools & Software Mentioned

Available with a Plus subscription

More episodes like this

  • "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis (February 1, 2026): The AI-Powered Biohub: Why Mark Zuckerberg & Priscilla Chan are Investing in Data, from Latent.Space
  • Lenny's Podcast: Product | Career | Growth (February 1, 2026): Dr. Becky on the surprising overlap between great parenting and great leadership
  • The Prof G Pod with Scott Galloway (February 1, 2026): First Time Founders: Has Substack Changed Media For Good?
  • David Senra (February 1, 2026): Jimmy Iovine, Interscope Records & Beats by Dre