Search for a command to run...

Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
In this episode of Hard Fork, hosts Kevin Roose and Casey Newton dive into OpenAI's latest controversies surrounding their video generation app Sora, which has faced significant backlash from both the estate of Martin Luther King Jr. and Hollywood figures like Bryan Cranston. (02:37) The company has been forced to implement new policies after users created inappropriate deepfake videos of historical figures and used copyrighted intellectual property without permission. (04:43)
Kevin Roose is a technology columnist at The New York Times, where he covers artificial intelligence, social media, and the intersection of technology and society. He is a co-host of Hard Fork and has been reporting on tech's impact on culture and politics for years.
Casey Newton is the founder of Platformer, a newsletter covering social media and democracy. He previously worked at The Verge as a senior editor covering social networks and was named to Fortune's 40 Under 40 list for his influential technology journalism.
Karen Weise is a New York Times technology reporter who has been covering Amazon for nearly a decade. She recently obtained exclusive internal Amazon documents revealing the company's plans for massive warehouse automation and has reported extensively on the retail giant's business strategy and impact on workers.
OpenAI's approach to Sora demonstrates a troubling pattern of releasing products without adequate safeguards and then scrambling to address problems afterward. (04:43) The company initially allowed anyone to create videos of historical figures like MLK, leading to racist and inappropriate content, before eventually blocking such usage after public outcry. This reactive approach to content moderation mirrors Facebook's early mistakes and suggests OpenAI has not learned from the social media industry's failures. The key lesson for professionals is that anticipating potential misuse and building preventative measures is far more effective than damage control after controversy erupts.
Amazon's internal documents reveal concrete plans to automate 75% of warehouse operations and reduce hiring needs by over 500,000 workers. (25:25) This isn't theoretical future disruption - it's happening right now with specific timelines and targets. The company projects saving 30 cents per item through automation, which adds up to massive savings across billions of transactions. Professionals in logistics, warehousing, and similar industries need to start reskilling immediately rather than waiting for automation to "someday" arrive. The transition is already underway.
Amazon's internal strategy documents discuss "controlling the narrative" around automation and using euphemisms like "cobots" (collaborative robots) instead of acknowledging job displacement. (30:10) Similarly, OpenAI calls problematic AI-generated content "unwanted generations" rather than addressing the fundamental issues with their approach. This corporate doublespeak makes it harder for workers, communities, and policymakers to prepare for change. The takeaway is to look beyond company PR statements and seek out concrete data about business strategies and their real-world impacts.
While AI-powered browsers like ChatGPT Atlas, Perplexity Comet, and Dia offer interesting capabilities like webpage summarization and automated tasks, they remain rough around the edges with significant security and privacy concerns. (59:16) Features like "agent mode" that can book flights or fill out forms are impressively slow and often make choices users wouldn't prefer. More concerning are "unseeable prompt injections" where malicious websites can invisibly instruct AI browsers to steal banking information or make unauthorized purchases. These tools may be useful for early adopters willing to accept risks, but they're not ready for mainstream adoption.
AI browsers and integrated services create unprecedented privacy risks by combining browsing history, chat conversations, and behavioral data into comprehensive user profiles. (62:37) When you use ChatGPT Atlas, OpenAI gains access to your complete web browsing patterns in addition to your chat history. This data becomes valuable for advertising, could be subject to legal discovery, and creates attractive targets for hackers. The lesson for professionals is to carefully consider what personal information they're comfortable sharing with AI companies and to understand that convenience often comes at the cost of privacy.