
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
In this compelling episode, veteran financial analyst Dr. Michael Power presents a groundbreaking analysis that challenges the prevailing narrative around AI dominance. Dr. Power argues that China's AI architecture, built on open-source principles and distributed intelligence, has significant structural advantages over the US model's closed-source, service-monetization approach. (04:00) He predicts that within three years, China's fundamentally different AI philosophy—treating AI as a utility like electricity rather than a monetizable service—will outmaneuver the United States' approach. (02:43)
The discussion reveals that while the US AI ecosystem is valued at approximately $15 trillion across public and private markets, the Chinese approach focuses on creating free, open-weight models that can be widely distributed and customized. (03:16) Dr. Power warns of a technological bubble in US AI, driven by what he calls the "three assassins" of Moore's Law: physics, material science, and economics, which are making chip miniaturization increasingly difficult and expensive.
Dr. Michael Power is a seasoned financial analyst, consultant, and strategist with Kuskase Consulting. A veteran of macro strategy, he has extensive experience analyzing global economic and technological trends. Currently in semi-retirement, Dr. Power has dedicated significant time to understanding AI architectures and their geopolitical implications, producing in-depth research that challenges conventional Wall Street thinking about AI dominance.
Jack Farley is the host of Monetary Matters, a podcast focused on macroeconomic analysis and financial markets. He brings expertise in finance and economics to discussions of complex technological and geopolitical issues affecting global markets.
Dr. Power emphasizes that the essence of China's AI approach is that "it's free," representing a completely different philosophy from the US model. (04:14) China is building AI as a utility like electricity, where value derives from what users do with it rather than from selling it as a premium service. This mirrors successful open-source models such as Android and Linux: Android runs on the majority of smartphones in every country except four (the US, Canada, UK, and Sweden), and Linux powers 100% of the world's top 100 supercomputers. (06:01) This approach gives Chinese AI models a "longer runway" and a cheaper path to scale than US closed-source models, which must generate enough revenue to justify massive capital expenditures.
Dr. Power identifies physics, material science, and economics as the "three assassins" undermining continued chip miniaturization. (40:48) At extremely small scales (approaching 2-3 nanometers), electrons can tunnel, or "slip through," the gate, turning transistors from definitive on/off switches into "maybes" and compromising chip reliability. Materials also begin to degrade under the extreme heat and pressure of further miniaturization. (46:12) Economically, the cost of moving from 3nm to 2nm becomes prohibitive without a proportional increase in usable compute. China responds by "thinking smarter, not smaller," building "cognitive towers" that stack multiple chip and memory layers rather than pursuing ever-smaller individual chips.
DeepSeek's recent technical paper reveals a breakthrough in memory architecture (MLA, multi-head latent attention) that cuts memory requirements by roughly 93%, to about 7% of the baseline, while maintaining performance. (32:01) This innovation, released on New Year's Eve 2025, addresses "catastrophic forgetfulness" in AI training, allowing models to accumulate and organize information more efficiently as they scale. (39:00) The breakthrough demonstrates that China achieved comparable or superior results to US models while spending a fraction of the capital (potentially $1.5 billion versus OpenAI's $19 billion contribution to Stargate alone). It validates the idea that architectural innovation can deliver better performance per dollar than brute-force scaling.
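For a rough sense of where a figure like "7% of the baseline" can come from, the sketch below compares the per-token key-value cache of standard multi-head attention with a compressed latent cache of the kind MLA uses. The layer count, head sizes, latent width, and sequence length are hypothetical placeholders chosen only so the arithmetic lands near a 93% reduction; they are not DeepSeek's published configuration.

```python
# Back-of-the-envelope KV-cache comparison (hypothetical dimensions, not
# DeepSeek's published configuration): standard multi-head attention caches
# a full key and value vector per head, while a latent-attention scheme
# caches one compressed vector per token instead.

def kv_cache_bytes(per_token_elements, n_layers, seq_len, bytes_per_element=2):
    """Total KV-cache size in bytes for one sequence (fp16/bf16 by default)."""
    return per_token_elements * n_layers * seq_len * bytes_per_element

# Hypothetical model: 60 layers, 64 attention heads, head dimension 128.
n_layers, n_heads, head_dim = 60, 64, 128

# Standard attention: one key and one value vector per head, per token.
mha_per_token = 2 * n_heads * head_dim        # 16,384 elements

# Latent-style caching: a single compressed latent (1,024 here) plus a small
# positional key (64 here); both widths are illustrative assumptions.
mla_per_token = 1024 + 64                     # 1,088 elements

seq_len = 128_000  # long-context example
mha = kv_cache_bytes(mha_per_token, n_layers, seq_len)
mla = kv_cache_bytes(mla_per_token, n_layers, seq_len)

print(f"standard KV cache: {mha / 1e9:.1f} GB")    # ~251.7 GB
print(f"latent KV cache:   {mla / 1e9:.1f} GB")    # ~16.7 GB
print(f"reduction: {100 * (1 - mla / mha):.1f}%")  # ~93.4%
```

With these placeholder numbers the cache shrinks by roughly 93%, which is the shape of the saving being described: the benefit grows with context length and concurrent users, so it translates directly into fewer accelerators per deployed model.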
Eric Schmidt identified electricity, not chips, as AI's primary constraint, and this heavily favors China's approach. (48:48) The US faces severe power shortages in data center hubs like Northern Virginia, while China has invested massively in renewable energy capacity and enjoys falling energy costs. (49:48) China's distributed intelligence model requires far less power than concentrated intelligence approaches like Stargate. At US AI conferences, power supply concerns dominate the discussion, while Chinese conferences rarely mention power constraints. OpenAI would need to increase its energy capacity roughly 125-fold over eight years, a compounding requirement Dr. Power likens to the impossible rice-on-a-chessboard problem.
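The compounding arithmetic behind that comparison is easy to check. The snippet below takes the 125x-over-eight-years figure from the episode and derives the implied annual growth rate, alongside the chessboard doubling Dr. Power alludes to.

```python
# Implied annual growth: find g such that g ** 8 == 125.
annual_growth = 125 ** (1 / 8)
print(f"required growth: {annual_growth:.2f}x per year "
      f"(about {(annual_growth - 1) * 100:.0f}% compounded annually for 8 years)")

# The rice-on-a-chessboard analogy: one grain doubled on each of 64 squares
# puts 2**63 grains on the last square alone.
print(f"grains on the 64th square: {2**63:,}")
```

In other words, the requirement is not a one-off build-out but roughly 83% more capacity every single year, which is the sense in which the chessboard comparison reads as "impossible."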
US hyperscalers (Google, Amazon, Microsoft, Meta) are caught in a "Red Queen dilemma": they must "run twice as fast just to stay in the same position." (105:46) None can stop spending on AI infrastructure because rivals would surge ahead, creating collective overinvestment that "guarantees systemic collapse." (105:03) Dr. Power notes that when Meta released its latest earnings with massive CapEx plans, the stock dropped 20% as investors questioned the sustainability of that spending. The dynamic mirrors the dot-com bubble, where some companies, like Amazon, survived (despite 80% stock declines) by finding new business models such as cloud computing. The key question becomes identifying which of today's AI companies will be survivors and which will be casualties.