Hard Fork • October 17, 2025

California Regulates A.I. Companions + OpenAI Investigates Its Critics + The Hard Fork Review of Slop

California passes new AI and tech regulations, including bills on AI companion safety, deepfake protections, and social media warning labels for minors.


Timestamps are approximate and may be slightly off; we encourage you to listen to the episode for full context.


Podcast Summary

In this episode, Kevin Roose and Casey Newton explore California's groundbreaking tech regulation efforts, particularly focusing on SB 243, which requires AI companion developers to identify and address situations where users express thoughts of self-harm. (03:03) The hosts discuss how these state-level regulations are filling the void left by federal inaction, with California setting potential national standards for AI safety. The episode also features a compelling interview with Nathan Calvin, an AI policy advocate who was served a subpoena by OpenAI at his home, raising questions about corporate intimidation tactics against critics. (26:56) The show concludes with the debut of "Hard Fork Review of Slop," a new segment examining AI-generated content across various platforms.

  • California passed multiple AI and social media regulation bills, with particular focus on mental health protections for chatbot users

Speakers

Kevin Roose

Tech columnist for The New York Times who covers artificial intelligence, automation, and the future of work. He is the author of "Futureproof: 9 Rules for Humans in the Age of Automation" and has been recognized as one of the leading voices in technology journalism.

Casey Newton

Founder and editor of Platformer, a newsletter focused on the intersection of technology and democracy. He previously served as senior editor at The Verge, where he covered social media platforms and their impact on society for over seven years.

Nathan Calvin

Vice president of state affairs and general counsel at ENCODE, an AI policy nonprofit organization focused on AI safety and children's protection online. He has extensive experience in tech policy advocacy and was exposed early to litigation against tobacco companies through his mother's work at the American Academy of Pediatrics.

Key Takeaways

State-Level Regulation Is Becoming the New Federal Standard

With federal lawmakers largely paralyzed on tech regulation, California's approach is becoming the de facto national template. (02:53) Kevin emphasizes that California's laws "tend to sort of ripple out to the rest of the country and the rest of the world" and become "de facto national standards." This is particularly significant given that major AI companies are headquartered in California, making compliance unavoidable. The practical implication is that professionals working in tech policy or corporate compliance should monitor California's regulatory developments closely, as they'll likely influence national practices. For example, SB 243's mental health protocols for chatbots will likely become industry standard even for companies operating outside California.

AI Companion Safety Requires Proactive Monitoring Systems

SB 243 mandates that AI companion developers create protocols to identify users expressing self-harm thoughts and direct them to resources. (03:45) This represents a shift from reactive to proactive safety measures in AI development. The law requires companies to share their protocols with California's Department of Public Health and publish statistics about user interventions starting in 2027. This creates accountability through transparency and establishes mental health considerations as core business requirements rather than optional features. Professionals developing AI systems should integrate safety monitoring from the design phase rather than treating it as an afterthought.
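
To make the design-phase point concrete, here is a minimal Python sketch of what such a proactive safety layer might look like: a check that runs before the companion model responds and counts interventions for later reporting. The keyword list, resource message, and class names are all illustrative assumptions, not anything SB 243 actually specifies.

```python
from dataclasses import dataclass

# Hypothetical crisis-resource message; a real deployment would localize
# this and use vetted clinical guidance.
CRISIS_RESOURCES = (
    "If you are having thoughts of self-harm, you can call or text 988 "
    "(Suicide & Crisis Lifeline, US) to reach a trained counselor."
)

# Simplified keyword signals; a production system would use a trained
# classifier, not substring matching.
SELF_HARM_SIGNALS = ("hurt myself", "end my life", "kill myself", "self-harm")


def companion_model_reply(user_message: str) -> str:
    # Stand-in for the actual companion model call.
    return "(model response)"


@dataclass
class SafetyLayer:
    # Counts interventions, mirroring the statistics SB 243 asks
    # developers to publish starting in 2027.
    interventions: int = 0

    def check(self, user_message: str) -> str | None:
        text = user_message.lower()
        if any(signal in text for signal in SELF_HARM_SIGNALS):
            self.interventions += 1
            return CRISIS_RESOURCES
        return None  # no intervention needed

def respond(layer: SafetyLayer, user_message: str) -> str:
    # The safety check runs before the model, by design.
    intervention = layer.check(user_message)
    if intervention is not None:
        return intervention
    return companion_model_reply(user_message)
```

The design choice worth noting: the intervention counter lives in the safety layer itself, so the reporting data the law requires is produced as a side effect of normal operation rather than reconstructed after the fact.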

Corporate Legal Intimidation Can Backfire Spectacularly

OpenAI's decision to serve Nathan Calvin with a subpoena at his home created significant internal backlash and public criticism. (27:08) Even OpenAI employees like Joshua Achiam publicly criticized the company's tactics, suggesting the approach damaged internal morale and external reputation. Nathan notes this caused "consternation and soul searching among people at OpenAI" similar to previous controversies. (45:13) The lesson for professionals is that aggressive legal tactics against critics can amplify the very criticism they're meant to suppress, while also creating internal organizational conflicts that may be more damaging than the original opposition.

Age Verification Should Prioritize Privacy Protection

California's approach to age verification through AB 1043 requires parents to input their child's age during device setup, which then passes that information to app stores and developers. (15:35) Casey praises this as "the most privacy protecting of all of the age assurance protocols" compared to systems requiring driver's license uploads or third-party verification. This method reduces data breach risks while maintaining effectiveness. For professionals in product development or privacy compliance, this demonstrates how regulatory requirements can be met through privacy-by-design principles rather than intrusive data collection methods.
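
As an illustration of the privacy-by-design point, here is a hedged Python sketch of a device-level age signal: the parent's input stays on the device, and apps receive only a coarse age bracket rather than identity documents. The bracket boundaries, type names, and flow are assumptions for illustration; neither AB 1043's actual categories nor any platform API is specified in the episode.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class AgeBracket(Enum):
    # Illustrative brackets; the statute's actual categories may differ.
    UNDER_13 = "under_13"
    TEEN_13_15 = "13_15"
    TEEN_16_17 = "16_17"
    ADULT = "18_plus"


@dataclass(frozen=True)
class AgeSignal:
    """What the OS shares with app stores and developers: a bracket only --
    no birth date, no ID document, no name."""
    bracket: AgeBracket


def bracket_from_birth_year(birth_year: int, today: date | None = None) -> AgeSignal:
    today = today or date.today()
    age = today.year - birth_year
    if age < 13:
        bracket = AgeBracket.UNDER_13
    elif age < 16:
        bracket = AgeBracket.TEEN_13_15
    elif age < 18:
        bracket = AgeBracket.TEEN_16_17
    else:
        bracket = AgeBracket.ADULT
    return AgeSignal(bracket=bracket)


# The privacy-by-design point: the raw birth year stays on the device;
# only the derived bracket crosses the API boundary.
print(bracket_from_birth_year(2014))
```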

Transparency Requirements Must Balance Disclosure with Operational Reality

The Frontier AI Transparency Act (SB 53) requires large AI companies to publish safety standards and report critical incidents, but represents a watered-down version of previous proposals. (17:05) Kevin notes this "feels pretty toothless" and essentially codifies what companies already do voluntarily. However, it establishes legal frameworks for whistleblower protections and formal reporting mechanisms. The takeaway for professionals is that effective transparency regulations must strike a balance between meaningful disclosure and practical implementation, avoiding both toothless requirements and impossible compliance burdens.

Statistics & Facts

  1. California will require platforms to display a 30-second non-bypassable warning covering at least 75% of the screen after three hours of social media use by minors, with additional warnings every hour thereafter (see the timing sketch after this list). (13:21)
  2. A study of 6,000 13-year-old children found that increased daily social media use was associated with decreased reading abilities. (14:26)
  3. Starting in 2027, California's Department of Public Health will publish data about how often AI companion developers direct users to mental health resources. (04:05)
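
As a worked example of the timing rule in item 1, here is a small sketch that decides how many warnings are due: one at the three-hour mark, then one every additional hour. The thresholds track the rule as described in the episode; the function name and units are illustrative.

```python
DAILY_THRESHOLD_MIN = 180  # first warning after 3 hours of use
REPEAT_INTERVAL_MIN = 60   # then one warning every additional hour


def warnings_due(minutes_used: int) -> int:
    """Total number of warnings a minor should have seen after
    `minutes_used` minutes of social media use in a day."""
    if minutes_used < DAILY_THRESHOLD_MIN:
        return 0
    return 1 + (minutes_used - DAILY_THRESHOLD_MIN) // REPEAT_INTERVAL_MIN


assert warnings_due(179) == 0
assert warnings_due(180) == 1  # 3-hour mark
assert warnings_due(240) == 2  # 4 hours
assert warnings_due(300) == 3  # 5 hours
```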
