Yann LeCun – Why LLMs Will Never Get Us to AGI "The path to superintelligence - just train up the LLMs, train on more synthetic data, hire thousands of people to school your system in post-training, invent new tweaks on RL-I ...
Atlas Wang (UT Austin faculty, XTX Research Director) joins us to explore two fascinating frontiers: the foundations of symbolic AI and the practical challenges of building AI systems for quantitative finance. On the symbolic...
In this episode, we hosted Judah Goldfeder, a PhD candidate at Columbia University and student researcher at Google, to discuss robotics, reproducibility in ML, and smart buildings. Key topics covered: Robotics challenges: We...
In this episode, we talk with Will Brown, a research lead at Prime Intellect, about his journey into reinforcement learning (RL) and multi-agent systems, exploring their theoretical foundations and practical applications. We...
In this episode, we discuss various topics in AI, including the challenges of the conference review process, the capabilities of Kimi K2 Thinking, the advancements in TPU technology, the significance of real-world data in rob...
In this episode, we sit down with Alex Alemi, an AI researcher at Anthropic (previously at Google Brain and Disney), to explore the powerful framework of the information bottleneck and its profound implications for modern mac...
In this episode, we talked about AI news and recent papers. We explored the complexities of using AI models in healthcare (the Nature Medicine paper on GPT-5's fragile intelligence in medical contexts). We discussed the delic...
In this episode, we host Jonas Geiping from ELLIS Institute & Max-Planck Institute for Intelligent Systems, Tübingen AI Center, Germany. We talked about his broad research on Recurrent-Depth Models and late reasoning in large...
In this episode of the Information Bottleneck Podcast, we host Jack Morris, a PhD student at Cornell, to discuss adversarial examples (Jack created TextAttack, the first software package for LLM jailbreaking), the Platonic r...
In this episode we talk with Randall Balestriero, an assistant professor at Brown University. We discuss the potential and challenges of Joint Embedding Predictive Architectures (JEPA). We explore the concept of JEPA, which a...
In this episode, we talked with Michael Bronstein, a professor of AI at the University of Oxford and a scientific director at AITHYRA, about the fascinating world of geometric deep learning. We explored how understanding the ...
In this episode we host Tal Kachman, an assistant professor at Radboud University, to explore the fascinating intersection of artificial intelligence and natural sciences. Prof. Kachman's research focuses on multiagent intera...
In this episode, we talked with Ahmad Beirami, a former researcher at Google. We explored the complexities of reinforcement learning, its applications in LLMs, and the evaluation challenges in AI resea...
In this episode of the "Information Bottleneck" podcast, we hosted Aran Nayebi, an assistant professor at Carnegie Mellon University, to discuss the intersection of computational neuroscience and machine learning. We talked ab...
We talked with Ariel Noyman, an urban scientist working at the intersection of cities and technology. Ariel is a research scientist at the MIT Media Lab, exploring novel methods of urban modeling and simulation using AI. We ...
We discussed the inference optimization technique known as Speculative Decoding with a world-class researcher, expert, and former coworker of the podcast hosts: Nadav Timor. Papers and links: Accelerating LLM Inference with Lossl...
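For readers new to the topic, the core idea of speculative decoding can be sketched in a few lines: a cheap draft model proposes a short run of tokens, and the expensive target model verifies them in one pass, keeping the longest agreeing prefix plus one correction token. This toy sketch uses hypothetical stand-in functions (`draft_next`, `target_next`) rather than real models, and greedy verification rather than the lossless sampling scheme from the papers linked above.

```python
# Toy speculative decoding sketch. `draft_next` and `target_next` are
# hypothetical stand-ins for a small draft model and a large target
# model; both deterministically (greedily) emit the next token.

VOCAB = ["the", "cat", "sat", "on", "mat"]

def draft_next(context):
    # Cheap draft model: a simple position-based rule.
    return VOCAB[(len(context) * 2) % len(VOCAB)]

def target_next(context):
    # Expensive target model: the output we must exactly reproduce.
    return VOCAB[(len(context) * 2) % len(VOCAB)] if len(context) % 3 else VOCAB[0]

def speculative_decode(context, steps, k=4):
    out = list(context)
    while len(out) - len(context) < steps:
        # 1. Draft model proposes k tokens cheaply, one after another.
        proposal = []
        for _ in range(k):
            proposal.append(draft_next(out + proposal))
        # 2. Target model verifies the proposal: keep the longest
        #    agreeing prefix, then append the target's own token at
        #    the first mismatch (so output matches target-only decoding).
        accepted = []
        for tok in proposal:
            t = target_next(out + accepted)
            if tok == t:
                accepted.append(tok)
            else:
                accepted.append(t)  # correction token from the target
                break
        out.extend(accepted)
    return out[len(context):][:steps]
```

The key property, preserved even in this toy version, is that the output is identical to decoding with the target model alone; the draft model only changes how many target calls are needed, not what gets generated.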
In this episode, Ravid and Allen discuss the evolving landscape of AI coding. They explore the rise of AI-assisted development tools, the challenges faced in software engineering, and the potential future of AI in creative fi...
Allen and Ravid discuss the dynamics of the extreme GPU demand facing AI researchers. They also discuss the latest advancements in AI, including Google's Nano Banana and DeepSeek V3.1, exploring the impl...
Allen and Ravid sit down and talk about Parameter-Efficient Fine-Tuning (PEFT) along with the latest updates in AI/ML news.
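For context on the PEFT episode, the most common variant (LoRA) freezes the pretrained weight matrix and trains only a low-rank update. A minimal sketch in pure Python, with illustrative names that are not any real library's API:

```python
# Minimal LoRA-style PEFT sketch. `W` plays the role of a frozen
# pretrained weight matrix; only the low-rank factors A and B would
# be trained. All names here are illustrative, not a real library API.

def matmul(X, Y):
    # Plain matrix product over nested lists.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def scale(X, s):
    return [[s * a for a in row] for row in X]

def lora_forward(x, W, A, B, alpha=1.0):
    # x: batch x d_in, W: d_in x d_out (frozen),
    # B: d_in x r, A: r x d_out, with rank r << d_in, d_out.
    r = len(B[0])
    # Effective weight: W + (alpha / r) * B @ A; W itself is never updated.
    delta = scale(matmul(B, A), alpha / r)
    return matmul(x, add(W, delta))
```

With A initialized to zeros (the usual LoRA init), the adapted layer starts out identical to the frozen layer, and training only touches the r * (d_in + d_out) adapter parameters instead of the full d_in * d_out matrix.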
In this episode of the Information Bottleneck Podcast, Ravid Shwartz-Ziv and Allen Rausch discuss the latest developments in AI, focusing on the controversial release of GPT-5 and its implications for users. They explore the f...