Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
In this episode of Odd Lots, hosts Joe Weisenthal and Tracy Alloway interview Alex Bores, a New York State Assembly member who has become the target of a $100 million AI industry super PAC. (08:20) Bores, who previously worked as a data scientist at Palantir, discusses his RAISE Act—legislation requiring AI companies to publish safety plans and disclose critical incidents. The conversation explores the intersection of AI regulation, politics, and technology policy, touching on everything from deepfakes to data centers. (30:00) The episode highlights how AI has already become a major political issue affecting labor markets, electricity costs, water consumption, and wealth inequality.
Co-host of Bloomberg's Odd Lots podcast and editor at Bloomberg. He brings extensive financial journalism experience and frequently covers technology, markets, and economic policy issues.
Co-host of Bloomberg's Odd Lots podcast and senior reporter at Bloomberg. She specializes in financial markets, technology, and policy analysis with a focus on how these intersect with everyday economic realities.
New York State Assembly member and candidate for the 12th Congressional District representing Manhattan. He has a master's degree in computer science and previously worked as a data scientist at Palantir from 2014-2019, eventually becoming one of five overall leads of the government business. (45:00) He is the first Democrat elected in New York State with a degree in computer science and has passed 27 bills during his three years in office.
Bores demonstrates that effective AI regulation requires lawmakers who understand the technology deeply. (08:50) His RAISE Act targets only companies spending $100 million on compute for final training runs and models with 10^26 computational operations, showing how technical precision can create focused regulation that doesn't burden smaller players. This approach counters the industry argument that any regulation stifles innovation by proving that knowledgeable lawmakers can craft targeted rules that address safety without hampering development.
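The scoping logic described above can be sketched as a simple threshold check. This is an illustration of the episode's description, not the statute's actual text; treating the two thresholds as jointly required, and the function name `is_covered_model`, are assumptions for the sketch.

```python
# Illustrative sketch of the RAISE Act's scope as described in the episode.
# Assumption: a model is covered only when BOTH thresholds are exceeded.
TRAINING_COST_THRESHOLD_USD = 100_000_000   # $100M on compute for the final training run
TRAINING_COMPUTE_THRESHOLD_FLOPS = 1e26     # 10^26 computational operations

def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
    """Hypothetical helper: return True if a model falls inside the Act's scope."""
    return (training_cost_usd >= TRAINING_COST_THRESHOLD_USD
            and training_flops >= TRAINING_COMPUTE_THRESHOLD_FLOPS)

# A frontier-scale run is in scope; a smaller lab's model is not.
print(is_covered_model(2e8, 3e26))   # large final training run
print(is_covered_model(5e6, 1e24))   # small startup model
```

The point of the dual threshold is visible in the check itself: a startup that never approaches $100 million in training compute can never trip the rule, which is how the bill avoids burdening smaller players.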
Drawing from his Palantir experience, Bores emphasizes that "the work isn't done when the bill is signed." (35:14) He shared examples like his telemarketing fine bill, where actual fines increased 4x after he raised the statutory maximums, and his moped registration bill, where follow-up data showed registrations dropping from 1,700 to 1,400 because people weren't re-registering. This data-driven approach to governance demonstrates how politicians can use metrics to assess and improve policy effectiveness over time.
Rather than expecting humans to detect AI-generated content, Bores advocates for cryptographic solutions like C2PA (the Coalition for Content Provenance and Authenticity standard). (28:49) This metadata standard can cryptographically prove whether content came from a real device or was AI-generated, similar to how HTTPS solved internet banking security. The key is making this the default option so that content without cryptographic proof becomes suspect, creating a technological rather than human-dependent solution to authenticity verification.
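The provenance idea can be sketched in a few lines. This is a deliberately simplified stand-in, not the real C2PA format: actual C2PA manifests use certificate-based digital signatures, whereas this sketch uses a symmetric HMAC purely to stay self-contained; the key and function names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical device key; real C2PA uses per-device certificates and
# asymmetric signatures rather than a shared secret.
DEVICE_KEY = b"key-provisioned-into-camera-hardware"

def sign_at_capture(content: bytes) -> bytes:
    """Device side: attach a cryptographic tag the moment content is captured."""
    return hmac.new(DEVICE_KEY, content, hashlib.sha256).digest()

def verify_provenance(content: bytes, tag: bytes) -> bool:
    """Verifier side: only content with a valid tag is treated as authentic;
    anything else (including AI-generated media) fails the check."""
    expected = hmac.new(DEVICE_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

photo = b"raw pixel data from a real camera"
tag = sign_at_capture(photo)
print(verify_provenance(photo, tag))          # authentic capture passes
print(verify_provenance(b"edited fake", tag)) # altered or generated content fails
```

This mirrors the "default" argument in the paragraph above: once verification is routine, the burden flips, and content that arrives without a valid cryptographic proof is the thing that looks suspect.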
While AI companies have made voluntary commitments at White House summits and international gatherings, Bores argues these need legal backing. (16:00) The RAISE Act essentially codifies what companies already claim to do, preventing them from cutting safety measures under quarterly earnings pressure or during fundraising rounds. The industry's own estimate suggests compliance would require just one additional full-time employee for companies like Google or Meta, undermining claims that regulation would be burdensome.
With federal AI policy uncertain, states are leading on practical tech governance issues. (30:32) Bores highlighted New York's progress on deepfake pornography laws, chatbot disclosure requirements, and click-to-cancel subscription rules. This state-level innovation provides real protection for citizens while federal lawmakers struggle with comprehensive tech policy, demonstrating how local governance can address immediate technological harms affecting daily life.