
Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the episode for full context.
In this episode of the Light Cone podcast, YC partners Diana, Jared, and Harj reflect on the most surprising developments in AI during 2025. The discussion reveals a significant shift in model preferences among YC startups, with Anthropic overtaking OpenAI as the preferred API choice at 52% of companies while OpenAI's once-dominant share declines. (00:00) The partners explore how the AI economy has stabilized into clear layers - model companies, application companies, and infrastructure providers - creating more predictable playbooks for AI-native startups. (27:32)
Diana is a Partner at Y Combinator who tracks technology adoption trends and founder behavior patterns across YC batches. She analyzes tech stack selection and model preferences among YC companies, providing insight into startup technology adoption at scale.
Jared is a Partner at Y Combinator with extensive experience in evaluating AI startups and infrastructure companies. He provides strategic guidance to founders building in the AI space and has deep insights into market dynamics and investment patterns in the AI ecosystem.
Harj is a Partner at Y Combinator who personally uses and evaluates various AI models and tools in his daily work. He brings practical experience with AI applications and has insights into consumer behavior and product development in the AI space.
The fierce competition between AI model providers (OpenAI, Anthropic, Google) is commoditizing the model layer, creating tremendous opportunities for application-layer startups. (08:00) This dynamic mirrors historical technology transitions like Intel vs. AMD, where competition at the infrastructure level benefits companies building on top of the platforms. As Jared explains, this competition means lower costs and higher availability for startups, making it an ideal time to build AI-native applications rather than trying to compete directly with the foundational model companies.
Successful AI companies are increasingly building abstraction layers that allow them to swap between different models based on performance for specific tasks. (07:07) Diana describes how Series B companies now use Gemini for context engineering while feeding results into OpenAI for execution, switching models as new releases emerge. This approach, grounded in proprietary evaluations specific to their vertical, allows companies to always use the best-performing model for each task rather than being locked into a single provider.
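To make this pattern concrete, here is a minimal sketch of what such an abstraction layer could look like, assuming a Python stack. The task names, eval scores, and stubbed provider calls are illustrative assumptions, not the setup of any particular company discussed in the episode.

```python
# A minimal sketch of a model-routing abstraction: each task is served by whichever
# registered model currently scores best on the company's own vertical-specific evals.
# Provider calls are stubbed with lambdas; in practice they would wrap SDK calls.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ModelChoice:
    name: str                      # label for the model, e.g. a provider/version string
    call: Callable[[str], str]     # wrapper around the provider's API call
    eval_score: float              # score from proprietary, task-specific evals


class ModelRouter:
    """Routes each task to the best-scoring registered model."""

    def __init__(self) -> None:
        self._models: Dict[str, List[ModelChoice]] = {}

    def register(self, task: str, choice: ModelChoice) -> None:
        self._models.setdefault(task, []).append(choice)

    def best(self, task: str) -> ModelChoice:
        # Re-ranked whenever a new model release is re-scored on the evals.
        return max(self._models[task], key=lambda c: c.eval_score)

    def run(self, task: str, prompt: str) -> str:
        return self.best(task).call(prompt)


# Usage with stubbed providers (hypothetical scores and labels):
router = ModelRouter()
router.register("context_engineering",
                ModelChoice("gemini", lambda p: f"[gemini] {p}", eval_score=0.91))
router.register("execution",
                ModelChoice("gpt", lambda p: f"[gpt] {p}", eval_score=0.88))
print(router.run("context_engineering", "Summarize the case history"))
```

The key design choice is that model selection is driven entirely by the company's own evals, so swapping in a newly released model only requires re-scoring it, not rewriting application code.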
Fine-tuned models trained on specialized datasets are increasingly beating general-purpose models in specific domains. (20:18) Diana shares examples of YC healthcare companies outperforming OpenAI's general-purpose models with only 8-billion-parameter models by collecting superior domain-specific datasets and applying fine-tuning with reinforcement learning. This trend suggests significant opportunities for startups willing to invest in high-quality, vertical-specific data collection and model training.
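As a rough illustration of the fine-tuning side of this approach, the sketch below assumes a HuggingFace-style stack (transformers plus peft). The base model, LoRA hyperparameters, and the follow-on RL step are assumptions for illustration, not the recipe of the companies discussed.

```python
# A minimal parameter-efficient fine-tuning sketch for an ~8B open-weights model.
# The model name and LoRA settings are placeholders; training data and the RL stage
# referenced in the episode are intentionally omitted.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-3.1-8B"  # placeholder; any ~8B base model with access works
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains only small adapter matrices, which keeps iteration on a proprietary
# domain dataset cheap relative to full fine-tuning.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Supervised fine-tuning on the curated domain dataset would follow with a standard
# Trainer loop, and a reinforcement-learning step could be layered on afterwards.
```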
The massive infrastructure investment in AI - often criticized as a bubble - actually creates unprecedented opportunities for startup founders. (11:30) Jared draws parallels to the telecom bubble that enabled YouTube's existence through excess bandwidth capacity. The current GPU and data center over-investment means startups get access to powerful AI capabilities at low costs without bearing the infrastructure risk themselves, positioning them perfectly for the "deployment phase" when applications proliferate.
What started as an observed behavior among YC founders - using AI for exploratory, experimental coding - has matured into a significant market category with multiple successful companies. (21:06) The evolution from a casual observation into products like Replit, Emergence, and Google's AntiGravity demonstrates how paying attention to founder behavior can surface emerging market opportunities before they become obvious to the broader market.