In this episode, we talk with Stefano Ermon, Stanford professor, co-founder & CEO of Inception AI, and co-inventor of DDIM, FlashAttention, DPO, and score-based/diffusion models, about why diffusion-based language models may...
Naomi Saphra, Kempner Research Fellow at Harvard and incoming Assistant Professor at Boston University, joins us to explain why you can't do interpretability without understanding training dynamics, in the same way you can't...
Stefano Soatto, VP for AI at AWS and Professor at UCLA, joins us to explore how the agentic era fundamentally redefines machine learning, from static train-and-test models to dynamic, interactive control systems. This shift u...
Tanishq Abraham, CEO and co-founder of Sophont.ai, joins us to talk about building foundation models specifically for medicine. Sophont is trying to be something like an OpenAI or Anthropic but for healthcare - training mo...
Bayan Bruss, VP of Applied AI at Capital One, joins us to talk about building AI systems that can make autonomous financial decisions, and why money might be the hardest problem in machine learning. Bayan leads Capital One's ...
Cody Blakeney from Datology AI joins us to talk about data curation - the unglamorous but critical work of figuring out what to actually train models on. Cody's path from writing CUDA kernels to spending his days staring at w...
Guest: Niloofar Mireshghallah (Incoming Assistant Professor at CMU, Member of Technical Staff at Humans and AI) In this episode, we dive into AI privacy, frontier model capabilities, and why academia still matters. We kick of...
Atlas Wang (UT Austin faculty, XTX Research Director) joins us to explore two fascinating frontiers: the foundations of symbolic AI and the practical challenges of building AI systems for quantitative finance. On the symbolic...
In this episode, we hosted Judah Goldfeder, a PhD candidate at Columbia University and student researcher at Google, to discuss robotics, reproducibility in ML, and smart buildings. Key topics covered: Robotics challenges: We...
In this episode, we talk with Will Brown, a research lead at Prime Intellect, about his journey into reinforcement learning (RL) and multi-agent systems, exploring their theoretical foundations and practical applications. We...
In this episode, we discuss various topics in AI, including the challenges of the conference review process, the capabilities of Kimi K2 Thinking, the advancements in TPU technology, the significance of real-world data in rob...
In this episode, we talked about AI news and recent papers. We explored the complexities of using AI models in healthcare (the Nature Medicine paper on GPT-5's fragile intelligence in medical contexts). We discussed the delic...
In this episode, we host Jonas Geiping from ELLIS Institute & Max-Planck Institute for Intelligent Systems, Tübingen AI Center, Germany. We talked about his broad research on recurrent-depth models and latent reasoning in large...
In this episode of the Information Bottleneck Podcast, we host Jack Morris, a PhD student at Cornell, to discuss adversarial examples (Jack created TextAttack, the first software package for LLM jailbreaking), the Platonic r...
In this episode we talk with Randall Balestriero, an assistant professor at Brown University. We discuss the potential and challenges of Joint Embedding Predictive Architectures (JEPA). We explore the concept of JEPA, which a...
In this episode, we talked with Michael Bronstein, a professor of AI at the University of Oxford and a scientific director at AITHYRA, about the fascinating world of geometric deep learning. We explored how understanding the ...
In this episode we host Tal Kachman, an assistant professor at Radboud University, to explore the fascinating intersection of artificial intelligence and natural sciences. Prof. Kachman's research focuses on multiagent intera...
In this episode, we talked with Ahmad Beirami, a former researcher at Google, to discuss various topics. We explored the complexities of reinforcement learning, its applications in LLMs, and the evaluation challenges in AI resea...
In this episode of the "Information Bottleneck" podcast, we hosted Aran Nayebi, an assistant professor at Carnegie Mellon University, to discuss the intersection of computational neuroscience and machine learning. We talked ab...
We talked with Ariel Noyman, an urban scientist working at the intersection of cities and technology. Ariel is a research scientist at the MIT Media Lab, exploring novel methods of urban modeling and simulation using AI. We ...
We discussed the inference-optimization technique known as Speculative Decoding with Nadav Timor, a world-class researcher, expert, and former coworker of the podcast hosts. Papers and links: Accelerating LLM Inference with Lossl...
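For context, the core loop of speculative decoding can be sketched with toy stand-in models. Everything below (`draft_model`, `target_accepts`, the mod-10 token rule) is an illustrative placeholder, not Nadav Timor's method or any real LLM:

```python
def draft_model(prefix, k):
    """Cheap stand-in draft model: proposes k tokens (toy rule: next = last + 1 mod 10)."""
    proposals, last = [], prefix[-1]
    for _ in range(k):
        last = (last + 1) % 10
        proposals.append(last)
    return proposals

def target_accepts(prefix, token):
    """Expensive stand-in target model verifying one proposed token (toy rule: even tokens pass)."""
    return token % 2 == 0

def speculative_decode(prefix, k=4):
    """One round of speculative decoding.

    The draft model proposes k tokens cheaply; the target model verifies them
    left-to-right. Accepted tokens are kept; on the first rejection the target
    supplies its own token instead, so each round advances at least one token
    while producing the same output the target alone would have produced.
    """
    accepted = []
    for t in draft_model(prefix, k):
        if target_accepts(prefix + accepted, t):
            accepted.append(t)
        else:
            accepted.append((t + 1) % 10)  # toy target fallback: nearest even token
            break
    return prefix + accepted

print(speculative_decode([1]))  # → [1, 2, 4]: one draft token accepted, then a target fallback
```

The speedup comes from verifying several draft tokens in a single target-model pass instead of one expensive pass per token; in the lossless variants discussed in the episode's papers, acceptance is randomized so the output distribution exactly matches the target model's.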
In this episode, Ravid and Allen discuss the evolving landscape of AI coding. They explore the rise of AI-assisted development tools, the challenges faced in software engineering, and the potential future of AI in creative fi...
Allen and Ravid discuss the dynamics of AI researchers' extreme demand for GPUs. They also discuss the latest advancements in AI, including Google's Nano Banana and DeepSeek V3.1, exploring the impl...
Allen and Ravid sit down and talk about Parameter-Efficient Fine-Tuning (PEFT) along with the latest updates in AI/ML news.
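As a rough illustration of the idea behind PEFT, here is a minimal LoRA-style sketch (the sizes and names are illustrative assumptions, not from the episode): instead of updating a full d×d weight matrix, fine-tuning trains two small low-rank factors alongside the frozen weight.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                              # hidden size and adapter rank (r << d)

W = rng.standard_normal((d, d))          # frozen pretrained weight (never updated)
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized

def lora_forward(x):
    """Adapted layer: y = W x + B A x. Only A and B would receive gradients."""
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d)
# Zero-initializing B means fine-tuning starts exactly at the pretrained model.
assert np.allclose(lora_forward(x), W @ x)
# Trainable parameters: 2*r*d = 32, versus d*d = 64 for full fine-tuning.
print(A.size + B.size, W.size)
```

The parameter saving grows with model size: for a real LLM layer with d in the thousands and r around 8-64, the adapter is a small fraction of a percent of the full weight matrix.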