
PodMine
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis • October 18, 2025

Chinese AI – They're Just Like Us? With Beijing-Based Concordia AI CEO Brian Tse

A deep exploration of China's approach to AI development reveals surprising similarities with the West, challenging assumptions about technological rivalry and highlighting shared concerns around safety, governance, and responsible innovation.

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are approximate and may be slightly off. We encourage you to listen to the episode for full context.


Podcast Summary

This episode features an in-depth conversation with Brian Tse, founder and CEO of Concordia AI, a Beijing-based social enterprise working to advance global AI safety and governance. The discussion provides a comprehensive overview of China's AI development landscape, safety initiatives, and governance frameworks. (00:00) Brian shares insights from his unique position bridging Eastern and Western AI communities, having worked with organizations from Google DeepMind to OpenAI before founding Concordia AI. (07:44)

• The conversation explores the surprising similarities between Chinese and Western approaches to AI development, safety concerns, and governance frameworks, challenging common assumptions about an adversarial "race to AGI" between superpowers

Speakers

Brian Tse

Brian Tse is the founder and CEO of Concordia AI, a Beijing-based social enterprise focused on advancing global AI safety and governance. He began his AI journey at Tsinghua University during the deep learning revolution and later became a policy affiliate with the Centre for the Governance of AI at the University of Oxford. He has served as a senior advisor to the Partnership on AI, consulted with OpenAI around 2019 on the societal implications of early large language models such as GPT-2, and worked with the Beijing Academy of AI to develop one of the first sets of AI ethics principles issued by a Chinese institution. His educational and professional background spans both Eastern and Western AI ecosystems, giving him a distinctive perspective on global AI governance.

Key Takeaways

China's Pragmatic Approach to AI Development

Unlike Western discourse focused on AGI timelines and intelligence explosions, Chinese AI development emphasizes practical applications and economic integration. (27:00) The Chinese government's AI Plus initiative, released in August 2025, outlines six pillars for AI deployment: scientific discovery, industrial transformation, consumption enhancement, human collaboration, government efficiency, and international cooperation. Notably absent from this comprehensive blueprint is any mention of AGI or superintelligence, suggesting China's "AI race" is more about integrating AI into the real economy than about achieving AGI supremacy. (34:03) This grounded approach may represent a more sustainable and less risky path to AI development.

Stronger AI Safety Governance Than Commonly Perceived

China has implemented more comprehensive AI safety regulations than many Western observers realize. (53:24) The country requires pre-deployment testing against 31 risk categories for consumer-facing AI systems, with a 96% safety threshold requirement and government pre-approval before public release. Over 500 AI systems have been registered through this process over the past two years. Additionally, AI safety has been elevated to a top national security concern, with discussions at the highest levels of government including the Politburo study sessions. This regulatory framework is notably more structured than current U.S. federal requirements, though similar to voluntary practices by leading American companies.

Remarkable Similarities in Safety Research and Concerns

Chinese AI safety research remarkably mirrors Western approaches, covering the same risk categories including loss of control, CBRN risks, and large-scale manipulation. (90:04) Concordia AI's collaboration with Shanghai AI Lab produced a frontier AI risk management framework that closely resembles frameworks from leading U.S. labs and EU practices. Over 30 Chinese research groups now focus on frontier AI safety, publishing papers on scalable oversight, mechanistic interpretability, and other advanced safety topics. This convergence suggests a shared global understanding of AI risks rather than divergent safety priorities.

Open Source as a Safety and Trust-Building Strategy

China's commitment to open-sourcing advanced AI models serves multiple strategic purposes beyond mere competition. (105:18) Companies like DeepSeek, Qwen, and others release their most advanced models openly, creating transparency that enables third-party auditing and research. This openness challenges assumptions about secretive AI development and provides a foundation for building international trust. The strategy aligns with China's broader vision of AI as an international public good that should benefit humanity, particularly the global south, while also demonstrating confidence in their safety measures.

Energy Abundance as a Technological Advantage

China's massive energy infrastructure expansion provides a unique competitive advantage in AI development that may offset chip limitations. (117:06) The country has added capacity equal to the entire U.S. grid in the last decade alone, with significant renewable energy installations. This energy abundance allows China to prioritize scale over power density in their AI infrastructure, potentially compensating for less efficient chips through sheer computational volume. Combined with advances in training efficiency, as demonstrated by DeepSeek R1's impressive performance achieved with only 500 H100 chips, this suggests alternative pathways to AI capabilities that don't rely solely on cutting-edge hardware.

Statistics & Facts

  1. Over 500 AI systems and different model versions have been registered through China's pre-deployment testing process over the last two years, with companies required to achieve at least a 96% safety threshold on government-administered tests. (73:46)
  2. China has added energy capacity equal to the entire U.S. electrical grid in the last decade alone, providing significant advantages for large-scale AI infrastructure deployment despite potential chip limitations. (117:12)
  3. More than 30 AI research groups in China now focus on frontier AI safety research as of 2025, representing a dramatic expansion from primarily basic safety work like RLHF and jailbreak defense to advanced topics like scalable oversight and mechanistic interpretability. (153:13)
