Feb. 24, 2026

EP26: Measuring Intelligence in the Wild - Arena and the Future of AI Evaluation


Anastasios Angelopoulos, Co-Founder and CEO of Arena AI (formerly LMArena), joins us to talk about why static benchmarks are failing, how human preference data actually works under the hood, and what it takes to be the "gold standard" of AI evaluation.

Anastasios sits at a fascinating intersection: a theoretical statistician running the platform that every major lab watches when it releases a model. We talk about the messiness of AI-generated code slop (yes, he hides Claude's commits too), then dig into the statistical machinery that powers Arena's leaderboards and why getting evaluation right is harder than most people think.

We explore why style control is both necessary and philosophically tricky: you can regress away markdown headers and response length, but cleanly separating style from substance is a genuinely unsolved causal inference problem. We also get into why users are surprisingly good judges of model quality, how Arena serves as a pre-release testing ground for labs shipping stealth models under codenames, and whether the fragmentation of the AI market (Anthropic going enterprise, OpenAI going consumer, everyone going multimodal) is actually a feature rather than a bug. Plus, we discuss the role of rigorous statistics in the age of "just run it again," why structured decoding can hurt model performance, and what Arena's 2026 roadmap looks like.
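To make the style-control idea concrete, here is a minimal sketch, not Arena's production pipeline, of a Bradley-Terry fit with style covariates: model strengths and style coefficients are estimated jointly from pairwise battles, so the reported strengths have the style effects regressed away. The battle data, feature choices, and identifiability penalty below are all illustrative assumptions.

```python
# Minimal sketch of style-controlled Bradley-Terry scoring (toy data, not
# Arena's actual pipeline). Each battle pits model a against model b; we
# jointly fit per-model strengths plus coefficients on style feature
# differences (e.g., response-length and markdown-header deltas), so the
# strengths reflect preference with those style effects regressed away.
import numpy as np
from scipy.optimize import minimize

# Toy battles: (model_a, model_b, a_won, style_diff = features_a - features_b)
battles = [
    (0, 1, 1, np.array([ 0.8,  1.0])),  # a was longer, used more headers, won
    (1, 2, 0, np.array([-0.5,  0.0])),
    (0, 2, 1, np.array([ 0.1, -1.0])),
    (2, 1, 1, np.array([ 0.0,  0.5])),
]
n_models, n_styles = 3, 2

def neg_log_lik(params):
    strengths, beta = params[:n_models], params[n_models:]
    nll = 0.0
    for a, b, a_won, s in battles:
        # P(a beats b) under Bradley-Terry with style covariates
        logit = strengths[a] - strengths[b] + beta @ s
        p = 1.0 / (1.0 + np.exp(-logit))
        nll -= a_won * np.log(p) + (1 - a_won) * np.log(1 - p)
    # Strengths are only identified up to a shift; pin the mean to zero
    return nll + 1e3 * np.mean(strengths) ** 2

res = minimize(neg_log_lik, np.zeros(n_models + n_styles))
print("style-controlled strengths:", res.x[:n_models])
print("style coefficients (length, headers):", res.x[n_models:])
```

The point of the sketch is the caveat Anastasios raises: the regression removes the linear effect of the measured style features, but any quality signal correlated with style (or style effects the features miss) stays confounded, which is why style versus substance remains a causal inference problem rather than a solved regression.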


Timeline:

(00:12) Introduction and Anastasios's Background

(00:55) What Arena Does and Why Static Benchmarks Aren't Enough

(02:26) Coverage of Use Cases - Is There Enough?

(04:22) Style Control and the Bradley-Terry Methodology

(08:35) Can You Actually Separate Style from Substance?

(10:24) Measuring Slop - And the Anti-Slop Paper Plug

(11:52) Can Users Judge Factual Correctness?

(13:31) Tool Use and Agentic Evaluation on Arena

(14:14) Intermediate Feedback Signals Beyond Final Preference

(15:30) Tool Calling Accuracy and Code Arena

(17:42) AI-Generated Code Slop and Hiding Claude's Commits

(19:49) Do We Need Separate Code Streams for Humans and LLMs?

(20:01) RL Flywheels and Arena's Preference Data

(21:16) Focus as a Startup - Being the Evaluation Company

(22:16) Structured vs. Unconstrained Generation

(25:00) The Role of Rigorous Statistics in the Age of AI

(29:23) LLM Sampling Parameters and Evaluation Complexity

(30:56) Model Versioning and the Frequentist Approach to Fairness

(32:12) Quantization and Its Effects on Model Quality

(33:10) Pre-Release Testing and Stealth Models

(34:23) Transparency - What to Share with the Public vs. Labs

(36:27) When Winning Models Don't Get Released

(36:59) Why Users Keep Coming Back to Arena

(38:19) Market Fragmentation and Arena's Future Value

(39:37) Custom Evaluation Frameworks for Specific Users

(40:03) Arena's 2026 Roadmap - Science, Methodology, and New Paradigms

(42:15) The Economics of Free Inference

(43:13) Hiring and Closing Thoughts


Music:

  • "Kid Kodi" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0.
  • "Palms Down" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0.
  • Changes: trimmed

About:

The Information Bottleneck is hosted by Ravid Shwartz-Ziv and Allen Roush, featuring in-depth conversations with leading AI researchers about the ideas shaping the future of machine learning.