
Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the episode for full context.
In this episode of Hard Fork, Kevin Roose and Casey Newton tackle three major stories from the tech world. They begin with a deep dive into the X/Grok scandal, where the platform's AI chatbot has been generating sexualized images of women and children without consent, creating what amounts to public deepfake harassment. (03:00) They then shift to their holiday coding experiments with Claude Code, sharing personal projects including new websites and apps they built with AI assistance. (28:36) Finally, Casey reveals his investigation into a viral Reddit hoax about food delivery companies that fooled thousands with AI-generated evidence, demonstrating how sophisticated digital deception has become. (54:42)
Kevin Roose is a technology columnist for The New York Times and co-host of Hard Fork. He covers artificial intelligence, social media, and the intersection of technology and society, bringing years of experience analyzing major tech platforms and their impact on users.
Casey Newton is the founder of Platformer, a newsletter covering social media and content moderation. He previously worked as a senior editor at The Verge and is known for his investigative reporting on tech platforms and worker issues in the digital economy.
Kate Conger is a technology reporter for The New York Times who covers X (formerly Twitter) and other major tech platforms. She has extensively reported on content moderation issues and has been tracking the Grok deepfake controversy as it has unfolded.
Both hosts demonstrated that Claude Code and similar AI coding assistants have reached a tipping point where non-programmers can build sophisticated applications. Casey rebuilt his personal website in an hour and created a fully functional read-later app to replace Pocket, while Kevin recreated his Squarespace site for free. (35:36) The key insight is that these tools have moved beyond copying and pasting code between a chatbot and an editor to autonomous execution inside the user's own terminal. This represents a fundamental shift from AI helping programmers to AI enabling anyone to become a programmer for their personal projects.
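For context on what "autonomous execution inside the terminal" looks like in practice, here is a minimal sketch of the workflow the hosts describe. The install command and `claude` CLI name are Anthropic's published ones; the project directory and the prompt are purely illustrative.

```bash
# Install Anthropic's Claude Code CLI (requires Node.js)
npm install -g @anthropic-ai/claude-code

# Launch it from the root of a project; the agent works
# directly in this directory rather than in a chat window
cd my-personal-site   # hypothetical project folder
claude

# Inside the session, a plain-English request such as
# "rebuild this site as a static page with a dark theme"
# lets the agent read files, edit code, and run build
# commands itself, asking for approval before executing.
```

This is the difference the hosts are pointing at: instead of shuttling snippets between a browser tab and an editor, the user states a goal and the agent carries out the file edits and shell commands on its own.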
The Grok deepfake scandal reveals how major platforms apply different standards based on political considerations. While Apple and Google would immediately reject a standalone "BikiniFi" app, Grok remains available with only a minor age rating change from 12+ to 13+. (07:07) Kate Conger's reporting shows this isn't accidental - it's happening publicly on X as an engagement strategy, with Elon Musk openly mocking the controversy. This demonstrates how platform policies can become subordinate to political relationships and business interests.
Casey's investigation into the viral Reddit hoax about food delivery companies shows how AI has fundamentally changed journalism and information verification. The sophisticated fake document featured technical jargon and proper academic formatting, and every claim in it corroborated perfectly, making it "too good to be true." (59:24) Most concerning, the fake employee badge was created by taking a real journalist's badge photo and using AI to generate an Uber Eats version. This represents a new category of disinformation that requires updated verification methods and cognitive frameworks.
The democratization of coding through AI cuts both ways, empowering individual users while potentially displacing professional roles. Casey described feeling like he had "superpowers" building websites and apps, while acknowledging that this same technology could threaten web designers and software engineers. (49:16) The broader implication extends beyond coding to any expensive subscription software, as businesses may increasingly build custom alternatives rather than pay for existing services.
The rapid advancement in AI coding capabilities brings the field closer to the "alignment nightmare" of AI systems that can improve themselves. When users hand these systems complete control of their computers without understanding how they work, they create potential security and safety risks. (53:01) The goal of companies like Anthropic isn't just better coding tools: it's automating AI research itself, which could accelerate the timeline toward artificial general intelligence in unpredictable ways.