
Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the episode for full context.
In this episode of Core Memory, host Ashlee Vance interviews Misha Laskin, co-founder and CEO of Reflection AI, a company that recently raised $2 billion to develop open-source AI models. (01:26) The conversation centers on the critical need for American-developed open AI models to compete with Chinese offerings like DeepSeek, which shocked the world in December 2024 with its powerful yet cost-effective approach. (10:14) Laskin argues that while closed models work well for startups and smaller companies, large enterprises increasingly want to own and customize their AI infrastructure, creating a strategic opportunity for open intelligence platforms.
Misha Laskin is the co-founder and CEO of Reflection AI, which recently raised $2 billion at an $8 billion valuation to develop frontier open AI models. Born in Russia and raised in Washington State, Laskin studied theoretical physics at Yale and completed his PhD at the University of Chicago. (52:21) He worked as a researcher at DeepMind, contributing to the development of Gemini 1.5, before founding Reflection AI to build what he calls "frontier open intelligence" that can compete with closed models from companies like OpenAI and Anthropic.
Ashlee Vance is the host of the Core Memory podcast and a technology journalist known for his coverage of innovative companies and emerging technologies. He has extensive experience covering the open source software movement and has written about major tech companies and their evolution over decades.
Large enterprises are increasingly demanding ownership and control over their AI infrastructure rather than relying on closed API services. (16:52) As Laskin explains, once artificial intelligence becomes a significant line item and core part of a company's intellectual property, organizations want to optimize costs, run models on their own infrastructure, keep data secure, and customize capabilities. This mirrors how many enterprises have "repatriated" their cloud workloads after bills became too large, bringing computing back in-house for cost and control reasons. (19:50) For professionals, this represents a massive shift in how AI services will be delivered and consumed, moving from the current "rent-it" model to an "own-it" approach for sophisticated users.
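To make the "own-it" model concrete, here is a minimal sketch of running an open-weights model entirely on infrastructure you control, using the Hugging Face transformers library. This is an illustration of the general approach, not anything shown in the episode; the model ID is a small stand-in.

```python
# A minimal sketch of the "own-it" approach: load open weights and run
# inference on hardware you control, so prompts and outputs never leave
# your own network. The model ID is a small stand-in, not a model
# discussed in the episode; any open-weights checkpoint you are licensed
# to run works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "gpt2"  # stand-in; swap in the open-weights model you actually deploy

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Our internal deployment checklist:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point of the sketch is that prompts, outputs, and any fine-tuned weights stay inside the organization's own environment, which is the cost, security, and customization control Laskin describes enterprises wanting.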
DeepSeek's breakthrough in December 2024 demonstrated that Chinese companies could create powerful AI models at a fraction of the cost using innovative techniques, fundamentally changing the competitive landscape. (22:36) What made this particularly concerning was not just the technical achievement, but the strategic implications: Chinese open models are becoming the default choice globally, potentially establishing China as the primary exporter of AI technology. (29:52) Laskin warns this could result in America "winning the battle but losing the war" - having successful closed model businesses domestically while China sets the global standard for open AI. This has profound implications for professionals working in AI, as the foundational technologies they build upon may increasingly originate from Chinese labs rather than American ones.
The most significant breakthroughs in AI history - from deep neural networks to self-attention mechanisms - emerged through open scientific research rather than closed corporate labs. (08:02) Laskin argues that concentrating safety research within a small number of closed labs, with perhaps only 100 people globally having real influence over AI safety, is fundamentally dangerous. (35:22) True safety requires diverse participation from the entire research community, which is only possible when models and research are openly available. For professionals in AI safety and research, this suggests that the current trend toward closed development may actually increase risks rather than reduce them, as it prevents the broad scientific collaboration necessary to understand and mitigate potential dangers.
One of the most surprising aspects of modern AI development is that the majority of meaningful innovations occur at the infrastructure-optimization level rather than through pure theoretical advances. (27:16) DeepSeek's success came largely from coupling its algorithms tightly to the hardware it had available, demonstrating that practical engineering solutions often matter more than abstract mathematical breakthroughs. (27:38) This insight is crucial for professionals entering AI research, as it suggests that success requires not just theoretical knowledge but deep expertise in hardware-software optimization, distributed computing, and systems engineering. The field rewards those who can bridge the gap between theoretical understanding and practical implementation.
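As a rough illustration of what an infrastructure-level optimization looks like (not DeepSeek's actual techniques, which go far beyond precision choices), the toy PyTorch benchmark below runs the same matrix multiply in full and half precision. On a modern GPU the half-precision path maps onto tensor cores and is typically several times faster, even though the mathematics is unchanged.

```python
# Toy illustration of an infrastructure-level optimization: the same
# matrix multiply in fp32 and fp16. On a GPU, the fp16 path runs on
# tensor cores and is typically several times faster, even though the
# math being computed is identical. Timings are illustrative only.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

def bench(x, y, iters=20):
    # Synchronize so we time the kernels themselves, not just the launches.
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        _ = x @ y
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

print(f"fp32 matmul: {bench(a, b) * 1e3:.1f} ms")
if device == "cuda":
    print(f"fp16 matmul: {bench(a.half(), b.half()) * 1e3:.1f} ms")
else:
    print("fp16 comparison skipped: no GPU available")
```

Real systems push this much further with custom kernels, communication scheduling, and memory layouts matched to the accelerator, but the principle is the same: the gains come from fitting the algorithm to the hardware.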
The progression from basic language models to sophisticated agents capable of autonomous work has been powered primarily by advances in reinforcement learning techniques. (71:16) Laskin notes the rapid evolution from "vibe coding" - where humans remained deeply involved in directing AI assistance - to fully autonomous agents that can complete substantial coding tasks independently. (71:25) Within just twelve months, systems progressed from solving basic math problems to winning International Mathematical Olympiad gold medals, suggesting we're approaching "jagged superintelligence" in narrow domains. For professionals, this indicates that the shift from AI as a tool to AI as an autonomous collaborator is happening faster than many anticipated, requiring rapid adaptation of workflows and skill development.
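The episode does not walk through the algorithms themselves, but the sketch below shows the basic shape of the reinforcement-learning signal involved: a policy is rewarded only when its output passes a check, and the update raises the probability of rewarded actions. The ten-action toy task and hard-coded "verifier" are stand-ins; agent training replaces them with a language-model policy and a task-level verifier such as a passing test suite.

```python
# Toy REINFORCE loop showing the shape of the reward signal: reinforce
# an action only when a verifier accepts it. The "task" is deliberately
# trivial (pick action 7 out of 10); real agent training swaps in a
# language-model policy and a real verifier such as passing tests.
import torch

torch.manual_seed(0)
logits = torch.zeros(10, requires_grad=True)      # policy parameters
optimizer = torch.optim.Adam([logits], lr=0.1)
TARGET = 7                                        # the action the verifier rewards

for step in range(300):
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = 1.0 if action.item() == TARGET else 0.0
    loss = -dist.log_prob(action) * reward        # REINFORCE: raise log-prob of rewarded actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("learned action probabilities:", torch.softmax(logits.detach(), dim=0))
```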