We talked with Christian Szegedy, co-inventor of Inception and Batch Normalization, founding scientist at xAI, now at Math Inc, about what it takes to build a frontier lab, and why he left xAI to work on formal mathematics. C...
We sat down with Rao Kambhampati, a Professor of CS at Arizona State University and former President of AAAI, to talk about reasoning models: what they are, when they work, and when they break. Rao has been working on plannin...
In this episode, we hosted Zhuang Liu, Assistant Professor at Princeton and former researcher at Meta, for a conversation about what actually matters in modern AI and what turns out to be a historical accident. Zhuang is beh...
We talked with Sasha Rush, researcher at Cursor and professor at Cornell, about what it actually feels like to be in the heart of the AI revolution and build coding agents right now. Sasha shared how these systems are changi...
How Denoising Secretly Powers Everything in AI Peyman Milanfar is a Distinguished Scientist at Google, leading its Computational Imaging team. He's a member of the National Academy of Engineering, an IEEE Fellow, and one of t...
Yaroslav Bulatov helped build the AI era from the inside, as one of the earliest researchers at both OpenAI and Google Brain. Now he wants to tear it all down and start over. Modern deep learning, he argues, is up to 100x mor...
We talk with Kyunghyun Cho, Professor of Health Statistics and Professor of Computer Science and Data Science at New York University, and a former Executive Director at Genentech, about why healthcare might be the ...
In this episode, we talk with Stefano Ermon, Stanford professor, co-founder & CEO of Inception AI, and co-inventor of DDIM, FlashAttention, DPO, and score-based/diffusion models, about why diffusion-based language models may...
Naomi Saphra, Kempner Research Fellow at Harvard and incoming Assistant Professor at Boston University, joins us to explain why you can't do interpretability without understanding training dynamics, in the same way you can't...
Stefano Soatto, VP for AI at AWS and Professor at UCLA, joins us to explore how the agentic era fundamentally redefines machine learning, from static train-and-test models to dynamic, interactive control systems. This shift u...
Tanishq Abraham, CEO and co-founder of Sophont.ai, joins us to talk about building foundation models specifically for medicine. Sophont is trying to be something like an OpenAI or Anthropic but for healthcare - training mo...
Anastasios Angelopoulos, Co-Founder and CEO of Arena AI (formerly LMArena), joins us to talk about why static benchmarks are failing, how human preference data actually works under the hood, and what it takes to be the "gold...
Fred Sala, Assistant Professor at UW-Madison and Chief Scientist at Snorkel AI, joins us to talk about why personalization might be the next frontier for LLMs, why data still matters more than architecture, and how weak super...
Bayan Bruss, VP of Applied AI at Capital One, joins us to talk about building AI systems that can make autonomous financial decisions, and why money might be the hardest problem in machine learning. Bayan leads Capital One's ...
David Mezzetti, creator of txtai, joins us to talk about building open source AI frameworks as a solo developer - and why local-first AI still matters in the age of API-everything. David's path from running a 50-person IT c...
Cody Blakeney from Datology AI joins us to talk about data curation - the unglamorous but critical work of figuring out what to actually train models on. Cody's path from writing CUDA kernels to spending his days staring at w...
Guest: Niloofar Mireshghallah (Incoming Assistant Professor at CMU, Member of Technical Staff at Humans and AI) In this episode, we dive into AI privacy, frontier model capabilities, and why academia still matters. We kick of...
Yann LeCun – Why LLMs Will Never Get Us to AGI "The path to superintelligence - just train up the LLMs, train on more synthetic data, hire thousands of people to school your system in post-training, invent new tweaks on RL-I ...
Atlas Wang (UT Austin faculty, XTX Research Director) joins us to explore two fascinating frontiers: the foundations of symbolic AI and the practical challenges of building AI systems for quantitative finance. On the symbolic...
In this episode, we hosted Judah Goldfeder, a PhD candidate at Columbia University and student researcher at Google, to discuss robotics, reproducibility in ML, and smart buildings. Key topics covered: Robotics challenges: We...
In this episode, we talk with Will Brown, a research lead at Prime Intellect, about his journey into reinforcement learning (RL) and multi-agent systems, exploring their theoretical foundations and practical applications. We...
In this episode, we discuss various topics in AI, including the challenges of the conference review process, the capabilities of Kimi K2 Thinking, the advancements in TPU technology, the significance of real-world data in rob...
In this episode, we sit down with Alex Alemi, an AI researcher at Anthropic (previously at Google Brain and Disney), to explore the powerful framework of the information bottleneck and its profound implications for modern mac...
In this episode, we talked about AI news and recent papers. We explored the complexities of using AI models in healthcare (the Nature Medicine paper on GPT-5's fragile intelligence in medical contexts). We discussed the delic...