
Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the episode for full context.
In this episode, Kevin Roose and Casey Newton explore California's groundbreaking tech regulation efforts, particularly focusing on SB 243, which requires AI companion developers to identify and address situations where users express thoughts of self-harm. (03:03) The hosts discuss how these state-level regulations are filling the void left by federal inaction, with California setting potential national standards for AI safety. The episode also features a compelling interview with Nathan Calvin, an AI policy advocate who was served a subpoena by OpenAI at his home, raising questions about corporate intimidation tactics against critics. (26:56) The show concludes with the debut of "Hard Fork Review of Slop," a new segment examining AI-generated content across various platforms.
Tech columnist for The New York Times who covers artificial intelligence, automation, and the future of work. He is the author of "Futureproof: 9 Rules for Humans in the Age of Automation" and has been recognized as one of the leading voices in technology journalism.
Founder and editor of Platformer, a newsletter focused on the intersection of technology and democracy. He previously served as senior editor at The Verge, where he covered social media platforms and their impact on society for over seven years.
Vice president of state affairs and general counsel at Encode, an AI policy nonprofit focused on AI safety and children's protection online. He has extensive experience in tech policy advocacy, and he traces part of his interest in public-interest litigation to his mother's work on tobacco cases at the American Academy of Pediatrics.
With federal lawmakers largely paralyzed on tech regulation, California's approach is becoming the de facto national template. (02:53) Kevin emphasizes that California's laws "tend to sort of ripple out to the rest of the country and the rest of the world" and become "de facto national standards." This is particularly significant given that major AI companies are headquartered in California, making compliance unavoidable. The practical implication is that professionals working in tech policy or corporate compliance should monitor California's regulatory developments closely, as they'll likely influence national practices. For example, SB 243's mental health protocols for chatbots will likely become industry standard even for companies operating outside California.
SB 243 mandates that AI companion developers create protocols to identify users expressing thoughts of self-harm and direct them to crisis resources. (03:45) This represents a shift from reactive to proactive safety measures in AI development. The law requires companies to share their protocols with California's Department of Public Health and to publish statistics about user interventions beginning in 2027. This creates accountability through transparency and establishes mental health considerations as core business requirements rather than optional features. Professionals developing AI systems should integrate safety monitoring from the design phase rather than treating it as an afterthought.
OpenAI's decision to serve Nathan Calvin with a subpoena at his home created significant internal backlash and public criticism. (27:08) Even OpenAI employees like Joshua Achiam publicly criticized the company's tactics, suggesting the approach damaged internal morale and external reputation. Nathan notes this caused "consternation and soul searching among people at OpenAI" similar to previous controversies. (45:13) The lesson for professionals is that aggressive legal tactics against critics can amplify the very criticism they're meant to suppress, while also creating internal organizational conflicts that may be more damaging than the original opposition.
California's approach to age verification through AB 1043 requires parents to input their child's age during device setup, which then passes that information to app stores and developers. (15:35) Casey praises this as "the most privacy protecting of all of the age assurance protocols" compared to systems requiring driver's license uploads or third-party verification. This method reduces data breach risks while maintaining effectiveness. For professionals in product development or privacy compliance, this demonstrates how regulatory requirements can be met through privacy-by-design principles rather than intrusive data collection methods.
The Transparency in Frontier Artificial Intelligence Act (SB 53) requires large AI companies to publish safety standards and report critical incidents, but it represents a watered-down version of previous proposals. (17:05) Kevin notes this "feels pretty toothless" and essentially codifies what companies already do voluntarily. However, it establishes legal frameworks for whistleblower protections and formal reporting mechanisms. The takeaway for professionals is that effective transparency regulations must strike a balance between meaningful disclosure and practical implementation, avoiding both toothless requirements and impossible compliance burdens.