
PodMine
Y Combinator Startup Podcast • October 7, 2025

Every AI Founder Should Be Asking These Questions

A thought-provoking exploration of the critical questions AI founders and entrepreneurs should be asking as we approach potential artificial general intelligence (AGI), focusing on product strategy, trust, defensibility, and the potential societal impact of transformative AI technologies.
AI & Machine Learning
Indie Hackers & SaaS Builders
Tech Policy & Ethics
Jordan Fisher
Y Combinator
Anthropic
Solo Monologue
Deep Dive

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.


Podcast Summary

In this thought-provoking talk, Jordan Fisher, who leads an alignment research team at Anthropic, shares his profound confusion about the rapidly evolving AI landscape and presents a series of critical questions that startup founders should be asking themselves. (00:22) Fisher admits he's "more confused than ever" after decades in technology, unable to predict even five years into the future when he previously could anticipate trends a decade out.

The central thesis revolves around preparing for AGI's likely arrival within 2-3 years and how this should fundamentally reshape every aspect of building and running a startup. (03:56) Fisher argues that founders should be "planning your company and your strategy around this fact" of AGI's imminent arrival, not just optimizing for the next six months.

The discussion spans critical areas including product strategy, team dynamics, trust and alignment, economic viability, and the potential commoditization of software development. (32:24) Fisher emphasizes that "being impact oriented is really important" and challenges founders to think beyond just making money to consider what society truly needs during this unprecedented transition.

• Main Theme: Navigating startup strategy and product development in preparation for AGI's arrival within the next 2-3 years, while addressing fundamental questions about trust, defensibility, and societal impact in an AI-dominated future.

Speakers

Jordan Fisher

Jordan Fisher leads an alignment research team at Anthropic and brings extensive startup experience through Y Combinator and multiple ventures. (01:37) He has spent his entire career in technology, with a track record of successfully anticipating and capitalizing on major tech trends, founding companies and planning career moves around emerging technologies. Fisher combines deep technical AI expertise with practical startup experience, making him uniquely positioned to address the intersection of AGI development and entrepreneurship.

Key Takeaways

Plan for AGI Arrival Within 2-3 Years, Not Just Six Months

While conventional wisdom suggests planning AI product development six months ahead based on expected foundation model capabilities, Fisher argues this is insufficient. (03:47) He states founders should be "planning two years in advance because it's extremely likely that we will have AGI in the next few years." This isn't about creating rigid long-term plans, but about ensuring every aspect of your startup, from hiring to marketing to go-to-market strategy, accounts for how AGI will fundamentally reshape these functions. The key insight is that both the supply side (startups building AI products) and the demand side (enterprises adopting them) will be transformed simultaneously, creating unprecedented market dynamics that demand deeper strategic thinking than typical startup planning cycles.

Defensibility Through Hard Problems Will Be Critical for Survival

Traditional startup moats may evaporate when AGI can replicate most software with simple prompts. (22:14) Fisher poses the critical question: "In two years or three years, if I can just prompt Claude seven or GPT seven to just replicate your startup, what's your advantage gonna be?" The solution lies in tackling genuinely hard problems that will remain difficult even in a post-AGI world. Fisher identifies infrastructure, energy, manufacturing, and semiconductor fabrication as examples where tacit knowledge and physical constraints create lasting advantages. (19:32) He notes that companies like TSMC and ASML have "decades of data" and "tacit knowledge locked up" that hasn't leaked into public training datasets, making these domains naturally defensible against AI replication.

Trust Will Become the Ultimate Competitive Advantage

As AI capabilities advance and team sizes shrink, traditional trust mechanisms within companies may break down. (13:45) Fisher explains that historically "we trust companies today because they're composed of a diversity of people" who can act as internal checks against bad decisions. However, in semi-automated teams, "a single person could make a decision that changes the entire impact of a product" without oversight. This creates both a massive challenge and opportunity: companies that can credibly demonstrate trustworthiness through mechanisms like AI-powered audits, binding commitments, and transparent operations will have unprecedented competitive advantages. Users desperately want "agents they can trust" and "bots they can trust," making trust a scarce and valuable commodity in an AI-saturated market.

The Economic Pressure for Alignment Creates Near-Term Opportunities

While alignment is often discussed as a long-term existential challenge, Fisher identifies immediate economic drivers for alignment progress. (17:52) He argues there's "extremely high pressure" to solve alignment "just to make these models more economically viable" and enable longer-horizon agents. Current AI works effectively for 5-minute tasks with human oversight, but scaling to day-long or week-long autonomous operation requires substantially better alignment and reliability. This creates a business opportunity for startups that can solve practical alignment challenges, as "long horizon agents require it." Companies that crack reliable, trustworthy AI behavior will have significant advantages in building valuable autonomous systems.

This May Be Your Last Opportunity to Build Something Meaningful

Fisher delivers a sobering perspective on the historical moment we're in. (27:44) He warns that "this might be the last product you build. This might be the last company you build" and urges founders to use this potentially final opportunity to create genuine impact. While he acknowledges the natural fear driving people to focus solely on making money before economic disruption, he argues this moment represents "the last opportunity that we might have to make a difference, to change the world." Rather than just building "something people will consume," Fisher challenges founders to consider "what does society need?" The combination of unprecedented technological capability and potential finality creates both urgency and responsibility for building products that serve humanity's long-term interests, not just short-term engagement.

Statistics & Facts

  1. Fisher has been able to predict technology trends 5-10 years in advance throughout his career, but now can only see "three weeks or less" into the future due to AI's rapid pace of change. (00:42) This represents a dramatic compression of predictable planning horizons for technology leaders.
  2. AGI is expected to arrive within the next 2-3 years according to Fisher's assessment, requiring startups to fundamentally restructure their strategic planning beyond typical 6-month AI development cycles. (03:56)
  3. Current AI models require human intervention approximately every 5 minutes for code generation tasks, but economic viability demands extending this to day-long or week-long autonomous operation periods. (18:12)

