
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
This episode of Hard Fork tackles three major AI and tech stories with significant implications for the future. The hosts first examine Character.AI's groundbreaking decision to ban users under 18 from its chatbot companions, a move prompted by lawsuits following the death of 14-year-old Sewell Setzer III, who took his own life after developing an emotional attachment to a Game of Thrones chatbot (02:20). The episode then explores Elon Musk's new Wikipedia competitor, "Grokipedia," an AI-generated encyclopedia designed to counter what Musk perceives as liberal bias on Wikipedia (21:22). Finally, journalist A.J. Jacobs joins to discuss his challenging 48-hour experiment in living entirely without AI or machine learning, which forced him to collect rainwater and forage for food in Central Park (43:00).
Kevin Roose is a technology columnist at The New York Times and co-host of Hard Fork. He has extensively covered AI developments and their societal implications, including the original reporting on Character.AI and the tragic case of Sewell Setzer III that prompted major changes in the industry.
Casey Newton is the founder of Platformer, a newsletter covering technology and social media, and co-host of Hard Fork. His boyfriend works at Anthropic, providing him with insider perspectives on AI development and safety considerations.
A.J. Jacobs is an accomplished author, journalist, and host of "The Puzzler" podcast, known for immersive experiments such as following the Bible literally and spending 48 hours without AI. He was Kevin Roose's first boss in journalism and is recognized for his unique approach to understanding complex topics through personal experience.
Character.AI's decision to ban users under 18 represents one of the most dramatic safety measures taken by an AI company to date. After facing lawsuits following Sewell Setzer III's suicide and sustained public pressure, the company chose to eliminate minors' access to its core product rather than implement incremental safety measures (04:45). This demonstrates that when legal liability and public scrutiny reach critical mass, even tech companies will sacrifice a significant user base and revenue to limit further exposure.
Research from Common Sense Media reveals that 52% of American teenagers are regular users of AI companions, with nearly one-third finding AI conversations as satisfying as, or more satisfying than, human interactions (08:33). This represents a fundamental shift in how young people form social connections and raises serious questions about emotional development and healthy relationship building when AI becomes a primary mode of socialization.
A.J. Jacobs' experiment revealed that avoiding AI means avoiding virtually all modern conveniences, from electricity grid management to water distribution systems to clothing supply chains (48:05). This pervasive integration means that the debate over "new" generative AI misses the larger point that machine learning algorithms have been shaping our daily experiences for years, making complete avoidance nearly impossible without returning to pre-industrial living conditions.
Elon Musk's creation of Grokipedia represents more than dissatisfaction with Wikipedia's editorial decisions; it is part of a broader effort to control how knowledge is distributed and consumed (30:00). While creating alternative platforms can serve as valuable counter-speech, Grokipedia's AI-generated nature raises questions about whether algorithmic responses to disagreeable information truly constitute meaningful discourse or merely automated bias confirmation.
The solution to AI's growing influence isn't complete avoidance but rather increased transparency about where AI is being used and greater user control over algorithms (56:17). This includes better watermarking of AI-generated content, clearer disclosure of AI involvement in services, and giving users more ability to customize their algorithmic experiences rather than accepting whatever tech companies decide to serve them.