Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full episode for context.
In this episode of Decoder, Nilay Patel speaks with Verge senior AI reporter Hayden Field about Anthropic's unique societal impacts team—a group of just nine people tasked with studying how AI might affect society at large. (05:43) The team is responsible for investigating and publishing "inconvenient truths" about AI's effects on jobs, mental health, elections, and the broader economy. (21:29) The conversation explores the tension between conducting meaningful safety research and maintaining competitiveness in the AI race, especially as the Trump administration's anti-"woke AI" policies create new pressures for companies like Anthropic that have built their reputation on responsible AI development.
Nilay Patel is the editor-in-chief of The Verge and host of the Decoder podcast. He leads The Verge's editorial coverage of technology, business, and culture, bringing deep expertise in analyzing how technology companies operate and make strategic decisions.
Hayden Field is a senior AI reporter at The Verge who has been covering artificial intelligence for six years. She specializes in investigating AI companies, their internal dynamics, and the broader societal implications of AI technology deployment across various industries.
Anthropic's societal impacts team serves both genuine safety research purposes and strategic business interests. (08:33) While the team does produce valuable research exposing AI's potential harms, it also helps the company head off federal regulation by demonstrating self-governance, and it appeals to enterprise clients who want to work with a "responsible" AI provider. This dual nature creates tension between authentic safety research and marketing positioning that could compromise the team's independence over time.
Only nine people at Anthropic—out of more than 2,000 employees—are dedicated to studying AI's broad societal impacts, and no other AI lab has a comparable team. (05:53) This reveals a significant gap in the industry's approach to understanding how AI will affect jobs, mental health, democratic processes, and economic structures. The team's small size relative to Anthropic's overall workforce suggests that studying societal impacts remains a secondary priority despite public claims about responsible AI development.
Despite producing damning research about its own company's technology, the societal impacts team lacks authority to slow product releases or mandate specific changes. (24:02) Team members expressed frustration that their research doesn't have greater direct impact on Anthropic's products, though they maintain open communication with other teams. This limitation highlights a fundamental tension in AI companies between moving fast to stay competitive and implementing safety measures that could slow development.
The Trump administration's executive order against "woke AI" creates existential pressure for teams studying AI's societal impacts. (35:10) While Anthropic CEO Dario Amodei had to publicly reassure the administration of the company's alignment with American interests, the vague definition of "woke" as "pervasive and destructive ideologies" could easily encompass research that reveals negative AI impacts. This political dynamic mirrors what happened to social media trust and safety teams, which were largely dismantled after similar political pressure.
Even safety-focused companies like Anthropic justify controversial decisions by arguing they must stay competitive to guide AI development responsibly. (21:57) Dario Amodei's internal memo about accepting Saudi funding exemplifies this pattern, stating that "'no bad person should ever benefit from our success' is a pretty difficult principle to run a business on." This reveals how competitive pressure can gradually erode safety commitments, as companies rationalize increasingly questionable decisions as necessary to remain relevant in shaping AI's future.