Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
In this episode of This Week in Startups, hosts Jason Calacanis, Alex Wilhelm, and Lon Harris dive into the rapidly evolving landscape of AI, media censorship, and emerging technologies. The discussion opens with South Park's return after a controversial week-long absence, featuring an upcoming episode about prediction markets that highlights how deeply these platforms have penetrated mainstream culture. (00:36)
The hosts examine Alibaba's powerful new WAN 2.2 deepfake model that creates eerily realistic face-swapping videos, raising critical questions about content authenticity and the need for robust verification systems. (07:07) They explore California's controversial SB 771 bill that would hold social media platforms liable for algorithmic content promotion, sparking debate about the balance between free speech and platform responsibility.
The episode also covers YouTube's decision to reinstate creators previously banned for COVID-19 and election misinformation, revealing how the Biden administration allegedly pressured tech companies to remove content that didn't violate their policies. (35:38) Additional topics include the growing problem of AI-generated "work slop" in corporate environments, TikTok sale predictions on betting markets, Tether's massive fundraising round, and Stripe's unprecedented share buyback program.
Angel investor, entrepreneur, and host of This Week in Startups and All-In podcasts. Jason is the founder of Launch, an investment firm that has backed companies like Robinhood, Uber, and Thumbtack. He previously founded Weblogs Inc., which was acquired by AOL, and has been a prominent voice in Silicon Valley for over two decades.
Co-host and senior technology journalist with extensive experience covering startups, venture capital, and public markets. Alex brings deep analytical expertise to discussions about emerging technologies, market trends, and corporate strategy.
Co-host and technology analyst who provides critical perspective on policy issues, platform governance, and the intersection of technology with politics and society.
Jason argues that we're witnessing the commoditization of AI models, similar to what happened with web storage and compute power twenty years ago. (18:53) He believes that for 70% of common queries like making sushi rice or planning family trips, consumers won't be able to distinguish between different AI models from major companies. This commoditization will drive down prices and force companies to compete on specialized features rather than general capabilities. The insight suggests that businesses should focus on building unique applications and user experiences on top of these increasingly similar foundation models, rather than trying to differentiate based on the underlying AI technology alone.
The discussion of Alibaba's WAN 2.2 model reveals that deepfake technology has reached near-professional quality, with 80-85% accuracy that will likely reach 100% within two years. (09:36) Jason emphasizes that the solution isn't to try stopping the technology, but to establish robust verification systems. He advocates for a simple rule: "If it's not from my Twitter handle, my LinkedIn, or a URL I own and control, then it's not me." This creates a massive opportunity for established platforms with verified user histories to become trusted sources of authentic content, while also highlighting the importance of blockchain-based identity verification systems.
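Jason's verification rule is essentially an allowlist check: content is treated as authentic only if it comes from a channel the person demonstrably owns. A minimal sketch of that idea in Python, where the allowlist entries and URLs are purely illustrative:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of channels the person actually owns and controls.
# Entries and URLs below are examples for illustration, not real accounts.
VERIFIED_SOURCES = {
    "twitter.com/example_handle",
    "linkedin.com/in/example_handle",
    "example-owned-domain.com",
}

def is_verified_source(url: str) -> bool:
    """Return True only if the URL maps to a channel on the owner's allowlist."""
    parsed = urlparse(url)
    host = parsed.netloc.lower().removeprefix("www.")
    candidate = host + parsed.path.rstrip("/").lower()
    # Accept an exact channel match, a sub-path of a listed channel,
    # or any page on a domain the owner controls outright.
    return any(
        candidate == source
        or candidate.startswith(source + "/")
        or host == source
        for source in VERIFIED_SOURCES
    )

print(is_verified_source("https://example-owned-domain.com/statement"))  # True
print(is_verified_source("https://random-clips.example/fake-video"))     # False
```

Anything failing the check is treated as unverified by default, which is the posture Jason argues for once deepfakes become indistinguishable from real footage.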
Rather than supporting broad censorship legislation like California's SB 771, Jason proposes a "bring your own algorithm" (BYOA) approach. (23:37) He argues that if platforms provide algorithmic transparency and choice - letting users select different algorithms or turn them off entirely - they should maintain their Section 230 protections. However, platforms that use singular, black-box algorithms should face liability for promoting harmful content. This framework would empower users to make informed choices about their content consumption while holding platforms accountable only when they make opaque editorial decisions through their algorithms.
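The BYOA framework amounts to making the ranking function a user-selected plug-in rather than a hidden default. A minimal sketch, with made-up field names and algorithms chosen only to show the shape of the idea:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical post record; fields are illustrative only.
@dataclass
class Post:
    author: str
    timestamp: int
    engagement: int

def chronological(posts: List[Post]) -> List[Post]:
    # "Algorithm off": newest first, no editorial weighting.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked(posts: List[Post]) -> List[Post]:
    # A typical opaque default, here made explicit and inspectable.
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

# The platform exposes a registry of transparent algorithms;
# the user picks one (or the platform lets them supply their own).
ALGORITHMS: dict[str, Callable[[List[Post]], List[Post]]] = {
    "chronological": chronological,
    "engagement": engagement_ranked,
}

def build_feed(posts: List[Post], choice: str = "chronological") -> List[Post]:
    return ALGORITHMS[choice](posts)

posts = [Post("a", 1, 50), Post("b", 2, 10), Post("c", 3, 99)]
print([p.author for p in build_feed(posts, "chronological")])  # ['c', 'b', 'a']
print([p.author for p in build_feed(posts, "engagement")])     # ['c', 'a', 'b']
```

Under Jason's framework, a platform shipping something like this registry, with the user in control of `choice`, would keep its Section 230 protection; one that hard-wires a single undisclosed ranking would not.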
The Harvard and Stanford research revealing that 40% of workers have encountered "work slop" - AI-generated content that appears professional but lacks substance - highlights a critical workplace challenge. (48:43) Jason shares personal examples of team members submitting AI-generated content that was nonsensical when read aloud. The key insight is that work slop not only wastes time requiring corrections, but also damages professional relationships, with 32% of people saying they're less likely to work with colleagues who submit such content. Organizations need to establish clear guidelines about AI use and emphasize that writing and note-taking by hand helps with retention and understanding.
Jason predicts a massive shift toward stablecoins replacing traditional payment methods like PayPal, Venmo, and credit cards. (54:58) He envisions scenarios where merchants offer discounts for stablecoin payments to avoid 2-4% credit card fees, similar to current cash discounts. This transition would be "massively deflationary" and beneficial for consumers, while threatening credit card companies' revenue models. The insight suggests that businesses should prepare for this payment revolution by understanding stablecoin integration, while investors should consider being long on stablecoin companies and short on traditional payment processors like Visa and Mastercard.
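The merchant-discount scenario is simple arithmetic on the 2-4% card-fee range cited in the episode. A back-of-the-envelope sketch, where the specific prices, fee midpoint, and discount are illustrative assumptions rather than figures from the show:

```python
def merchant_net(price: float, fee_rate: float) -> float:
    """Amount the merchant keeps after payment-processing fees."""
    return price * (1 - fee_rate)

price = 100.00
card_fee = 0.03        # assumed midpoint of the 2-4% range cited
stablecoin_fee = 0.00  # assumed near-zero settlement cost

card_net = merchant_net(price, card_fee)          # merchant keeps ~$97
stable_net = merchant_net(price, stablecoin_fee)  # merchant keeps $100

# The merchant can hand part of that ~$3 spread back to the customer
# (analogous to today's cash discounts) and still come out ahead.
discount = 0.02
discounted_price = price * (1 - discount)
print(merchant_net(discounted_price, stablecoin_fee))  # ~$98, beating ~$97
```

The customer pays less, the merchant nets more, and the spread that used to go to the card network disappears, which is the "massively deflationary" dynamic Jason describes.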