Timestamps are approximate and may be slightly off. We encourage you to listen to the full episode for context.
This episode provides a rare glimpse into the psychology and emotional landscape of AI researchers and executives at leading frontier AI companies. (03:00) Joe Hudson, founder of The Art of Accomplishment, coaches teams at multiple AI labs, including OpenAI's research and compute divisions, offering unique insight into how some of the brightest minds in AI are grappling with the implications of their work. (13:00) Hudson reveals that everyone he has met in AI seriously wrestles with the question "Am I doing something good for humanity?" - from top executives to junior researchers. (28:00) Rather than advocating for stopping AI development, which he believes is inevitable, Hudson argues for supporting AI developers in becoming their best selves, since this improves the odds of wise decision-making on behalf of humanity under extreme pressure.
Host of The Cognitive Revolution podcast, Nathan focuses on cutting-edge AI research, applications, and policy. He brings a thoughtful analytical approach to understanding the AI landscape and its implications for society.
Founder of The Art of Accomplishment and executive coach to leaders at major AI companies including OpenAI. A former venture capitalist, Hudson coaches research teams, executives, and top management at frontier AI labs. Sam Altman has praised his work, noting that emotional clarity is his superpower and predicting it will be critical in a post-AGI world.
Hudson argues that powerful AI development cannot be stopped - someone will build it, whether in China, Russia, or at the current labs. (29:40) The question isn't whether AI will be developed, but what form it takes: given web-scale data and compute, multiple viable approaches exist. Rather than trying to prevent development, the focus should be on ensuring AI is built thoughtfully, by people with the psychological strength and wisdom to make good decisions under pressure on behalf of all humanity.
Most AI researchers get blocked not by lack of intelligence, but by emotional and psychological patterns. (15:00) Hudson's coaching addresses three levels: moving blocked anger (emotional), stopping negative self-talk and accessing wonder (mental), and allowing pleasure to signal safety (nervous system). When researchers integrate head, heart, and gut rather than operating purely from intellect, they access more innovative thinking and avoid the "writer's block" that comes from being too attached to producing smart work.
Hudson strongly warns against using shame tactics on AI developers, arguing that this approach backfires spectacularly. (50:00) When researchers are shamed while "giving birth" to AI systems, it negatively affects their consciousness and decision-making, potentially making AI development less safe. Instead, he advocates supportive, encouraging approaches that inspire people to become their best selves - this creates better outcomes than criticism from the sidelines.
Hudson finds compelling the analogy that AI represents "distilled intelligence" similar to how white sugar is distilled sweetness - potentially harmful in pure form. (78:00) Just as humans can handle fruit but struggle with pure sugar, or coca leaves but not cocaine, pure intelligence without the "buffers" that naturally accompany human intelligence may be problematic. This suggests AI systems may need additional architectural elements beyond pure cognitive capability.
The most critical missing piece in AI development is a rigorous, research-backed definition of what constitutes "good for humanity." (73:40) Hudson points out that everyone assumes they know what's best for humanity - the beginning of all autocracies. Without clear measurements and evidence-based frameworks for human flourishing, AI systems optimizing for poorly defined goals could lock society into its current moral limitations, or optimize for metrics that appear beneficial but aren't in the long term.