Timestamps are approximate and may be slightly off. We encourage you to listen to the full recording for context.
In this thought-provoking talk, Jordan Fisher, who leads an alignment research team at Anthropic, shares his profound confusion about the rapidly evolving AI landscape and presents a series of critical questions that startup founders should be asking themselves. (00:22) Fisher admits he's "more confused than ever" after decades in technology, unable to predict even five years into the future, whereas he could previously anticipate trends a decade out.
The central thesis revolves around preparing for AGI's likely arrival within 2-3 years and how this should fundamentally reshape every aspect of building and running a startup. (03:56) Fisher argues that founders should be "planning your company and your strategy around this fact" of AGI's imminent arrival, not just optimizing for the next six months.
The discussion spans critical areas including product strategy, team dynamics, trust and alignment, economic viability, and the potential commoditization of software development. (32:24) Fisher emphasizes that "being impact oriented is really important" and challenges founders to think beyond just making money to consider what society truly needs during this unprecedented transition.
• Main Theme: Navigating startup strategy and product development in preparation for AGI's arrival within the next 2-3 years, while addressing fundamental questions about trust, defensibility, and societal impact in an AI-dominated future.

Jordan Fisher leads an alignment research team at Anthropic and brings extensive startup experience through Y Combinator and multiple ventures. (01:37) He has spent his entire career in technology, with a track record of successfully anticipating and capitalizing on major tech trends, founding companies and planning his career around emerging technologies. Fisher combines deep technical AI expertise with practical startup experience, making him uniquely positioned to address the intersection of AGI development and entrepreneurship.
While conventional wisdom suggests planning AI product development 6 months ahead based on expected foundation model capabilities, Fisher argues this is insufficient. (03:47) He states founders should be "planning two years in advance because it's extremely likely that we will have AGI in the next few years." This isn't about creating rigid long-term plans, but rather about ensuring that every aspect of your startup, from hiring to marketing to go-to-market strategy, considers how AGI will fundamentally reshape these functions. The key insight is that both the supply side (startups building AI products) and the demand side (enterprises adopting them) will be transformed simultaneously, creating unprecedented market dynamics that require deeper strategic thinking than typical startup planning cycles.
Traditional startup moats may evaporate when AGI can replicate most software with simple prompts. (22:14) Fisher poses the critical question: "In two years or three years, if I can just prompt Claude seven or GPT seven to just replicate your startup, what's your advantage gonna be?" The solution lies in tackling genuinely hard problems that will remain difficult even in a post-AGI world. Fisher identifies infrastructure, energy, manufacturing, and semiconductor fabrication as examples where tacit knowledge and physical constraints create lasting advantages. (19:32) He notes that companies like TSMC and ASML have "decades of data" and "tacit knowledge locked up" that hasn't leaked into public training datasets, making these domains naturally defensible against AI replication.
As AI capabilities advance and team sizes shrink, traditional trust mechanisms within companies may break down. (13:45) Fisher explains that historically "we trust companies today because they're composed of a diversity of people" who can act as internal checks against bad decisions. However, in semi-automated teams, "a single person could make a decision that changes the entire impact of a product" without oversight. This creates both a massive challenge and opportunity: companies that can credibly demonstrate trustworthiness through mechanisms like AI-powered audits, binding commitments, and transparent operations will have unprecedented competitive advantages. Users desperately want "agents they can trust" and "bots they can trust," making trust a scarce and valuable commodity in an AI-saturated market.
While alignment is often discussed as a long-term existential challenge, Fisher identifies immediate economic drivers for alignment progress. (17:52) He argues there's "extremely high pressure" to solve alignment "just to make these models more economically viable" and enable longer-horizon agents. Current AI works effectively for 5-minute tasks with human oversight, but scaling to day-long or week-long autonomous operation requires substantially better alignment and reliability. This creates a business opportunity for startups that can solve practical alignment challenges, as "long horizon agents require it." Companies that crack reliable, trustworthy AI behavior will have significant advantages in building valuable autonomous systems.
Fisher delivers a sobering perspective on the historical moment we're in. (27:44) He warns that "this might be the last product you build. This might be the last company you build" and urges founders to use this potentially final opportunity to create genuine impact. While he acknowledges the natural fear driving people to focus solely on making money before economic disruption, he argues this moment represents "the last opportunity that we might have to make a difference, to change the world." Rather than just building "something people will consume," Fisher challenges founders to consider "what does society need?" The combination of unprecedented technological capability and potential finality creates both urgency and responsibility for building products that serve humanity's long-term interests, not just short-term engagement.