PodMine
Y Combinator Startup Podcast • December 22, 2025

What Surprised Us Most In 2025

YC partners reflect on 2025's AI landscape, highlighting the stabilization of the AI economy, shifting model dominance as Anthropic and Gemini gain ground, and the promise for AI startups in the deployment phase of technological innovation.
Startup Founders
AI & Machine Learning
Developer Culture
B2B SaaS Business
Sundar Pichai
Sergey Brin
Tom Brown
Diana Hu

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the episode for full context.

Podcast Summary

In this episode of the Light Cone podcast, YC partners Diana, Jared, and Harj reflect on the most surprising developments in AI during 2025. The discussion reveals a significant shift in model preferences among YC startups, with Anthropic overtaking OpenAI as the preferred API choice at 52%, while OpenAI's once-dominant share continues to decline. (00:00) The partners explore how the AI economy has stabilized into clear layers (model companies, application companies, and infrastructure providers), creating more predictable playbooks for AI-native startups. (27:32)

  • Core themes include the commoditization of AI models, which is opening up opportunities at the application layer; infrastructure build-out challenges that are driving innovation in space-based data centers and fusion energy; and the maturation of the AI startup ecosystem from chaotic pivoting toward more traditional startup difficulty.

Speakers

Diana Hu

Diana is a Partner at Y Combinator who tracks technology adoption trends and founder behavior patterns across YC batches. She conducts analysis on tech stack selection and model preferences among YC companies, providing insights into startup technology adoption at scale.

Jared Friedman

Jared is a Partner at Y Combinator with extensive experience in evaluating AI startups and infrastructure companies. He provides strategic guidance to founders building in the AI space and has deep insights into market dynamics and investment patterns in the AI ecosystem.

Harj Taggar

Harj is a Partner at Y Combinator who personally uses and evaluates various AI models and tools in his daily work. He brings practical experience with AI applications and has insights into consumer behavior and product development in the AI space.

Key Takeaways

Model Competition Creates Application Layer Opportunities

The fierce competition between AI model providers (OpenAI, Anthropic, Google) is commoditizing the model layer, creating tremendous opportunities for application-layer startups. (08:00) This dynamic mirrors historical technology transitions like Intel vs. AMD, where competition at the infrastructure level benefits companies building on top of the platforms. As Jared explains, this competition means lower costs and higher availability for startups, making it an ideal time to build AI-native applications rather than trying to compete directly with the foundational model companies.

Build Orchestration Layers for Model Flexibility

Successful AI companies are increasingly building abstraction layers that allow them to swap between different models based on performance for specific tasks. (07:07) Diana describes how Series B companies now use Gemini for context engineering while feeding results into OpenAI for execution, switching models as new releases emerge. This approach, grounded in proprietary evaluations specific to their vertical, allows companies to always use the best-performing model for each task rather than being locked into a single provider.
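As a rough illustration of that pattern, the sketch below routes each task type to whichever provider currently scores best on an internal eval table, so models can be swapped as new releases land. The provider helpers, task names, and scores are hypothetical placeholders, not the setup described in the episode.

```python
# Minimal sketch of a model-orchestration layer: route each task to whichever
# provider currently scores best on the company's own evals. Provider names,
# eval scores, and the call_* helpers below are illustrative placeholders.
from typing import Callable, Dict


def call_gemini(prompt: str) -> str:
    raise NotImplementedError("wrap your Gemini API client here")


def call_openai(prompt: str) -> str:
    raise NotImplementedError("wrap your OpenAI API client here")


def call_anthropic(prompt: str) -> str:
    raise NotImplementedError("wrap your Anthropic API client here")


PROVIDERS: Dict[str, Callable[[str], str]] = {
    "gemini": call_gemini,
    "openai": call_openai,
    "anthropic": call_anthropic,
}

# Scores come from a proprietary, vertical-specific eval suite and are
# refreshed whenever a new model version ships (values below are made up).
EVAL_SCORES: Dict[str, Dict[str, float]] = {
    "context_engineering": {"gemini": 0.91, "openai": 0.84, "anthropic": 0.88},
    "execution":           {"gemini": 0.82, "openai": 0.93, "anthropic": 0.90},
}


def route(task: str, prompt: str) -> str:
    """Send the prompt to the best-scoring provider for this task type."""
    scores = EVAL_SCORES[task]
    best = max(scores, key=scores.get)
    return PROVIDERS[best](prompt)


def answer(question: str) -> str:
    """Two-stage pipeline: one model assembles context, another executes."""
    context = route("context_engineering", f"Gather relevant context for: {question}")
    return route("execution", f"Using this context:\n{context}\n\nAnswer: {question}")
```

The point of the abstraction is that swapping providers becomes a one-line change to the eval table rather than a rewrite of the application code.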

Domain-Specific Models Can Outperform General Models

Fine-tuned models trained on specialized datasets are increasingly beating general-purpose models in specific domains. (20:18) Diana shares examples of YC healthcare companies achieving better performance than OpenAI's general-purpose models with only 8 billion parameters, by collecting superior domain-specific datasets and applying fine-tuning with reinforcement learning. This trend suggests significant opportunities exist for startups willing to invest in high-quality, vertical-specific data collection and model training.
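Purely as an illustration of that recipe, the sketch below runs supervised fine-tuning of an open ~8B-parameter base model on a curated domain dataset using Hugging Face transformers. The model name, data file, and hyperparameters are placeholder assumptions, and the reinforcement-learning step the companies layer on top is not shown.

```python
# Sketch of the "small model + proprietary domain data" recipe: supervised
# fine-tuning of an open ~8B base model on curated, vertical-specific text.
# BASE_MODEL, DATA_FILE, and all hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "meta-llama/Meta-Llama-3-8B"   # any open ~8B base model
DATA_FILE = "domain_records.jsonl"          # curated, de-identified domain text

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

dataset = load_dataset("json", data_files=DATA_FILE, split="train")


def tokenize(batch):
    # Each record is assumed to have a "text" field with one training example.
    return tokenizer(batch["text"], truncation=True, max_length=2048)


tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="domain-8b-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=2,
        learning_rate=1e-5,
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    # mlm=False makes the collator build causal-LM labels from the inputs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```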

The "AI Bubble" Benefits Application Developers

The massive infrastructure investment in AI - often criticized as a bubble - actually creates unprecedented opportunities for startup founders. (11:30) Jared draws parallels to the telecom bubble that enabled YouTube's existence through excess bandwidth capacity. The current GPU and data center over-investment means startups get access to powerful AI capabilities at low costs without bearing the infrastructure risk themselves, positioning them perfectly for the "deployment phase" when applications proliferate.

Vibe Coding Evolved from Observation to Major Category

What started as an observed behavior among YC founders - using AI for exploratory, experimental coding - has matured into a significant market category with multiple successful companies. (21:06) The evolution from casual observation to companies like Replit, Emergence, and Google's Antigravity demonstrates how paying attention to founder behaviors can identify emerging market opportunities before they become obvious to the broader market.

Statistics & Facts

  1. Anthropic has overtaken OpenAI as the preferred API choice among YC companies at 52%, while OpenAI's dominance has declined from 90%+ to below 50%. (01:29) This represents a dramatic shift in just the last 3-6 months, with Anthropic showing hockey stick growth from around 25% market share.
  2. Gemini has climbed from single-digit percentage (around 3%) to 23% adoption among YC companies. (03:29) This growth reflects Google's improved model quality and integration with real-time information through their search infrastructure.
  3. Gamma achieved $100 million ARR with only 50 employees, representing a new benchmark for revenue per employee efficiency in AI companies. (29:38) This metric has become a "reverse flex" showing operational efficiency rather than traditional hiring growth.

Compelling Stories

Available with a Premium subscription

Thought-Provoking Quotes

Available with a Premium subscription

Strategies & Frameworks

Available with a Premium subscription

Similar Strategies

Available with a Plus subscription

Additional Context

Available with a Premium subscription

Key Takeaways Table

Available with a Plus subscription

Critical Analysis

Available with a Plus subscription

Books & Articles Mentioned

Available with a Plus subscription

Products, Tools & Software Mentioned

Available with a Plus subscription

More episodes like this

In Good Company with Nicolai Tangen • January 14, 2026
Figma CEO: From Idea to IPO, Design at Scale and AI’s Impact on Creativity

We Study Billionaires - The Investor’s Podcast Network • January 14, 2026
BTC257: Bitcoin Mastermind Q1 2026 w/ Jeff Ross, Joe Carlasare, and American HODL (Bitcoin Podcast)

This Week in Startups • January 13, 2026
How to Make Billions from Exposing Fraud | E2234

Moonshots with Peter Diamandis • January 13, 2026
Tony Robbins on Overcoming Job Loss, Purposelessness & The Coming AI Disruption | 222