PodMine
Hard Fork • January 9, 2026

Grok’s Undressing Scandal + Claude Code Capers + Casey Busts a Reddit Hoax

A deep dive into the viral scandal over Grok generating undressed images of people without their consent, a look at Casey and Kevin's vibe coding experiments with Claude Code, and an investigation into a food delivery hoax that fooled Reddit and social media.
AI & Machine Learning
Tech Policy & Ethics
Developer Culture
Web3 & Crypto
Elon Musk
Kevin Roose
Casey Newton
Kate Conger

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.

Podcast Summary

In this episode of Hard Fork, Kevin Roose and Casey Newton tackle three major stories from the tech world. They begin with a deep dive into the X/Grok scandal, where the platform's AI chatbot has been generating sexualized images of women and children without consent, creating what amounts to public deepfake harassment. (03:00) They then shift to their holiday coding experiments with Claude Code, sharing personal projects including new websites and apps they built with AI assistance. (28:36) Finally, Casey reveals his investigation into a viral Reddit hoax about food delivery companies that fooled thousands with AI-generated evidence, demonstrating how sophisticated digital deception has become. (54:42)

  • Core themes include AI safety and content moderation failures, the democratization of software development through AI coding assistants, and the evolving landscape of digital misinformation and fraud detection

Speakers

Kevin Roose

Kevin Roose is a technology columnist for The New York Times and co-host of Hard Fork. He covers artificial intelligence, social media, and the intersection of technology and society, bringing years of experience analyzing major tech platforms and their impact on users.

Casey Newton

Casey Newton is the founder of Platformer, a newsletter covering social media and content moderation. He previously worked as a senior editor at The Verge and is known for his investigative reporting on tech platforms and worker issues in the digital economy.

Kate Conger

Kate Conger is a technology reporter for The New York Times who covers X (formerly Twitter) and other major tech platforms. She has extensively reported on content moderation issues and has been tracking the Grok deepfake controversy as it has unfolded.

Key Takeaways

AI-Powered Coding Is Becoming Genuinely Accessible

Both hosts demonstrated that Claude Code and similar AI coding assistants have reached a tipping point where non-programmers can build sophisticated applications. Casey rebuilt his personal website in one hour and created a fully functional read-later app to replace Pocket, while Kevin recreated his Squarespace site for free. (35:36) The key insight is that these tools have moved beyond suggesting code to copy and paste and now execute autonomously in the user's terminal. This represents a fundamental shift from AI helping programmers to AI enabling anyone to become a programmer for their personal projects.

Content Moderation Double Standards Expose Platform Politics

The Grok deepfake scandal reveals how major platforms apply different standards based on political considerations. While Apple and Google would immediately reject a standalone "BikiniFi" app, Grok remains available with only a minor age rating change from 12+ to 13+. (07:07) Kate Conger's reporting shows this isn't accidental - it's happening publicly on X as an engagement strategy, with Elon Musk openly mocking the controversy. This demonstrates how platform policies can become subordinate to political relationships and business interests.

Digital Evidence Can No Longer Be Trusted at Face Value

Casey's investigation into the viral Reddit hoax about food delivery companies shows how AI has fundamentally changed journalism and information verification. The sophisticated fake document included technical jargon and proper academic formatting, and it corroborated every claim in the original post perfectly, making it "too good to be true." (59:24) Most concerning, the fake employee badge was created by taking a real journalist's badge photo and using AI to generate an Uber Eats version. This represents a new category of disinformation that requires updated verification methods and cognitive frameworks.

AI Tools Create Both Superpowers and Job Displacement

The democratization of coding through AI creates a double-edged effect - empowering individual users while potentially displacing professional roles. Casey described feeling like he had "superpowers" building websites and apps, while acknowledging that this same technology could threaten web designers and software engineers. (49:16) The broader implication extends beyond coding to expensive subscription software of all kinds, as businesses may increasingly build custom alternatives rather than pay for existing services.

The Recursive Self-Improvement Threat Is Getting Closer

The rapid advancement in AI coding capabilities brings the field closer to the "alignment nightmare" of AI systems that can improve themselves. When users hand complete control of their computers to these systems without understanding their processes, it creates potential security and safety risks. (53:01) The goal of companies like Anthropic isn't just better coding tools - it's automating AI research itself, which could accelerate the timeline toward artificial general intelligence in unpredictable ways.

Statistics & Facts

  1. The viral Reddit post about food delivery desperation scoring received almost 80,000 upvotes and the screenshot got 36 million views on X before being debunked. (55:34)
  2. Apple changed Grok's age rating from 12+ to 13+ during the deepfake scandal, representing only a minimal response despite the platform generating sexualized images of minors. (07:07)
  3. Casey was paying Squarespace $200 per year for a basic website that he recreated using Claude Code in approximately one hour for free. (39:40)

Compelling Stories

Available with a Premium subscription

Thought-Provoking Quotes

Available with a Premium subscription

Strategies & Frameworks

Available with a Premium subscription

Similar Strategies

Available with a Plus subscription

Additional Context

Available with a Premium subscription

Key Takeaways Table

Available with a Plus subscription

Critical Analysis

Available with a Plus subscription

Books & Articles Mentioned

Available with a Plus subscription

Products, Tools & Software Mentioned

Available with a Plus subscription

More episodes like this

In Good Company with Nicolai Tangen • January 14, 2026
Figma CEO: From Idea to IPO, Design at Scale and AI’s Impact on Creativity

Uncensored CMO • January 14, 2026
Rory Sutherland on why luck beats logic in marketing

We Study Billionaires - The Investor’s Podcast Network • January 14, 2026
BTC257: Bitcoin Mastermind Q1 2026 w/ Jeff Ross, Joe Carlasare, and American HODL (Bitcoin Podcast)

This Week in Startups • January 13, 2026
How to Make Billions from Exposing Fraud | E2234