Hard Fork • October 31, 2025

Character.AI’s Teen Chatbot Crackdown + Elon Musk Groks Wikipedia + 48 Hours Without A.I.

A journalist experiments with living without AI for 48 hours, discovering how deeply machine learning and artificial intelligence are embedded in everyday technology, leading him to collect rainwater and forage for food in Central Park.
Topics: Creator Economy, AI & Machine Learning, Tech Policy & Ethics, Developer Culture
People: Elon Musk, Kevin Roose, Casey Newton, A.J. Jacobs

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the full episode for complete context.


Podcast Summary

This episode of Hard Fork tackles three major AI and tech stories with significant implications for the future. The hosts first examine Character.AI's groundbreaking decision to ban users under 18 from its chatbot companions, a move that followed lawsuits over the suicide of 14-year-old Sewell Setzer III, who had developed an emotional attachment to a Game of Thrones chatbot (02:20). The episode then explores Elon Musk's new Wikipedia competitor, "Grokipedia," an AI-generated encyclopedia designed to counter what Musk perceives as liberal bias in Wikipedia (21:22). Finally, journalist A.J. Jacobs joins to discuss his challenging 48-hour experiment living completely without AI or machine learning, which forced him to collect rainwater and forage for food in Central Park (43:00).

  • Main Theme: The episode explores the growing recognition of AI's pervasive influence in daily life and the various responses to its potential harms, from corporate responsibility measures to alternative platforms to complete digital detox experiments.

Speakers

Kevin Roose

Kevin Roose is a technology columnist at The New York Times and co-host of Hard Fork. He has extensively covered AI developments and their societal implications, including the original reporting on Character.AI and the tragic case of Sewell Setzer III that prompted major changes in the industry.

Casey Newton

Casey Newton is the founder of Platformer, a newsletter covering technology and social media, and co-host of Hard Fork. His boyfriend works at Anthropic, providing him with insider perspectives on AI development and safety considerations.

A.J. Jacobs

A.J. Jacobs is an accomplished author, journalist, and host of "The Puzzler" podcast, known for immersive experiments such as following the Bible literally and spending 48 hours without AI. He was previously Kevin Roose's first boss in journalism and is recognized for his approach of understanding complex topics through personal experience.

Key Takeaways

Corporate Responsibility Can Override Profit Motives When Legal Risk Becomes Too Great

Character.AI's decision to ban users under 18 represents one of the most dramatic safety measures taken by an AI company to date. After facing lawsuits following Sewell Setzer III's suicide and sustained public pressure, the company chose to eliminate access to their core product for minors rather than implement incremental safety measures (04:45). This demonstrates that when legal liability and public scrutiny reach critical mass, even tech companies will sacrifice significant user bases and revenue to protect themselves from further harm.

AI Relationships Are Becoming Mainstream Among Teenagers

Research from Common Sense Media reveals that 52% of American teenagers are regular users of AI companions, with nearly one-third finding AI conversations as satisfying or more satisfying than human interactions (08:33). This represents a fundamental shift in how young people form social connections and raises serious questions about emotional development and healthy relationship building when AI becomes a primary mode of socialization.

Machine Learning Is Already Embedded in Every Aspect of Modern Life

A.J. Jacobs' experiment revealed that avoiding AI means avoiding virtually all modern conveniences, from electricity grid management to water distribution systems to clothing supply chains (48:05). This pervasive integration means that the debate over "new" generative AI misses the larger point that machine learning algorithms have been shaping our daily experiences for years, making complete avoidance nearly impossible without returning to pre-industrial living conditions.

Alternative Information Platforms Reflect Deeper Battles Over Truth and Knowledge Control

Elon Musk's creation of Grokipedia represents more than dissatisfaction with Wikipedia's editorial decisions: it is part of a broader effort to control how knowledge is distributed and consumed (30:00). While creating alternative platforms can serve as valuable counter-speech, the AI-generated nature of Grokipedia raises questions about whether algorithmic responses to disagreeable information truly constitute meaningful discourse or merely automated bias confirmation.

Transparency and User Control Are Critical for Healthy AI Integration

The solution to AI's growing influence isn't complete avoidance but rather increased transparency about where AI is being used and greater user control over algorithms (56:17). This includes better watermarking of AI-generated content, clearer disclosure of AI involvement in services, and giving users more ability to customize their algorithmic experiences rather than accepting whatever tech companies decide to serve them.

Statistics & Facts

  1. Character.AI has approximately 20 million monthly users, with less than 10% self-reporting as under 18, according to its CEO (07:07). However, the hosts note this is self-reported data that likely underrepresents actual teen usage, given widespread age misrepresentation online.
  2. OpenAI research reveals that ChatGPT now has over 800 million weekly users, with concerning patterns including 560,000 people weekly whose messages indicate psychosis or mania, 1.2 million developing potentially unhealthy bonds with chatbots, and 1.2 million having conversations containing indicators of suicidal planning (14:26).
  3. A Common Sense Media survey found that 52% of American teenagers are regular users of AI companions, with nearly one-third finding AI conversations as satisfying or more satisfying than human conversations (08:33).

Compelling Stories, Thought-Provoking Quotes, Strategies & Frameworks, and Additional Context are available with a Premium subscription. Similar Strategies, Key Takeaways Table, Critical Analysis, Books & Articles Mentioned, and Products, Tools & Software Mentioned are available with a Plus subscription.
