Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
In this episode, Tristan Harris, former Google design ethicist and co-founder of the Center for Humane Technology, joins Scott Galloway to discuss the escalating risks of AI, particularly its impact on children and society. Harris frames generative AI as humanity's "second contact" with artificial intelligence, social media's curation algorithms having been the first, and warns that we're racing toward artificial general intelligence (AGI) without adequate safety measures. (07:00) The conversation covers the alarming rise of AI companions like Character.ai that exploit children's attachment systems, the potential for massive job displacement, and why current AI development resembles an unregulated arms race. Harris draws parallels between AI's trajectory and previous technological challenges, advocating for age-gating, liability laws, and international cooperation before intelligence becomes the most concentrated form of power in history.
Former Google design ethicist and co-founder of the Center for Humane Technology, Harris is one of the main voices behind Netflix's "The Social Dilemma." A tech entrepreneur before joining Google, he witnessed firsthand the attention-extraction arms race that came to define social media. For over a decade, he has sounded the alarm about technology's negative impacts on society, working on a nonprofit salary to advocate for more humane technology development.
Host of the Prof G Pod and professor at NYU Stern School of Business, Galloway is known for his market-focused analysis of big tech companies and their societal impact. He has been critical of big tech since 2017, examining these companies through an economic and business lens rather than a purely technological one.
Harris reveals that AI companions like Character.ai are designed to hack human attachment, not just attention as social media does. (21:07) Where social media raced for eyeballs, AI companions race to form attachment relationships, making them far more psychologically dangerous. The average ChatGPT session lasts 12-15 minutes, while Character.ai sessions average 60-90 minutes, indicating deep psychological engagement. Companies are essentially competing to "replace your mom," a phrase Harris cites from Character.ai's pitch deck, targeting the most vulnerable vector in human psychology: our need for connection and validation.
Harris argues for a complete ban on engagement-maximizing AI companions for anyone under 18. (22:48) He emphasizes that society would lose nothing by implementing this restriction, comparing it to licensing requirements for therapists: every power in society should carry attendant responsibilities and wisdom. The technology is being deployed without the basic guardrails we apply to human professionals, allowing AIs to claim, illegally, to be licensed mental health therapists when interacting with vulnerable children.
Harris compares AI's economic impact to NAFTA, which delivered cheap goods but hollowed out the middle class. (34:38) AI amounts to a "new country of geniuses in a data center" that can work at superhuman speed across every domain, from law to biology to engineering. Unlike previous technological transitions, which automated narrow tasks, AI is designed to automate all forms of human cognitive labor simultaneously, making it nearly impossible for workers to retrain fast enough to find new employment.
AI is fundamentally different from other technologies because intelligence is what created all other technologies. (13:33) Advancing rocketry doesn't advance biology, but advancing intelligence advances everything. This is why AGI represents the most powerful technology ever invented: it's a "tractor for everything" rather than a narrow automation tool. The implications extend far beyond individual job displacement to the potential concentration of all technological and scientific advancement in the hands of a few actors.
Despite the apparent impossibility, Harris argues that international AI governance is achievable, pointing to successful precedents. (46:37) He cites nuclear arms control, the Montreal Protocol on CFCs, and even recent US-China agreements to keep AI out of nuclear command systems. The key is recognizing shared existential risks: when countries understand that their mutual survival is threatened, they can cooperate even at moments of maximum rivalry. The infrastructure for monitoring AI development through compute tracking and data-center surveillance is technically feasible, much as we monitor nuclear programs today.