EP24: "Can AI Learn to Think About Money?" - with Bayan Bruss (Capital One)
Bayan Bruss, VP of Applied AI at Capital One, joins us to talk about building AI systems that can make autonomous financial decisions, and why money might be the hardest problem in machine learning.
Bayan leads Capital One's AI Foundations team, which is working toward a destination most people don't associate with banking: getting AI systems to perceive financial ecosystems, form beliefs about the future, and take actions based on those beliefs. It's a framework that sounds simple until you realize you're asking a model to predict whether someone will pay back a loan over 30 years while the world changes around them.
We get into why LLMs are a bad fit for ingesting 5,000 credit card transactions, why synthetic data works surprisingly well for time series, and the tension between end-to-end learning and regulatory requirements that demand you know exactly what your model learned. We also discuss reasoning in language vs. in latent space - if you wouldn't trust a self-driving car that translated images to words before deciding to turn, should you trust a financial system that does all its reasoning in token space?
Takeaways:
- Money is a behavioral science problem - AI in finance requires understanding people, not just numbers.
- Foundation models pre-trained on web text don't outperform purpose-built models for financial tasks. You're better off building a standalone encoder for financial data.
- Synthetic data works surprisingly well for time series - possibly because real-world time series live on a simpler manifold than we assume.
- Explainability in ML is fundamentally unsatisfying because people want causality from non-causal models.
- Financial AI needs world models that can imagine alternative futures, not just fit historical data.
Timeline:
(00:24) Introduction and Bayan's Background
(00:42) Claude Code, Vibe Coding - Hype or AGI?
(05:59) The Future of Software Engineering and Abstraction
(11:20) Abstraction Layers and Karpathy's Take
(13:54) Hamming, Kuhn, and Scientific Revolutions in AI
(19:24) Stack Overflow's Decline and Proof of Humanity
(23:07) Why We Still Trust Humans Over LLMs
(30:45) Deep Dive: AI in Banking and Consumer Finance
(34:17) Are Markets Efficient? Behavioral Economics vs. Classical Views
(37:14) The Components of a Financial Decision: Perception, Belief, Action
(42:15) Protected Variables, Proxy Features, and Fairness in Lending
(45:05) Explainability: Roller Skating on Marbles
(47:55) Sparse Autoencoders, Interpretability, and Turtles All the Way Down
(51:57) Foundation Models for Finance - Web Text vs. Purpose-Built
(53:09) Time Series, Synthetic Data, and TabPFN
(59:44) Feeding Tabular Data to VLMs - Graphs Beat Raw Numbers
(1:03:35) Reasoning in Language vs. Latent Space
(1:08:24) Is Language the Optimal Representation? Chinese Compression and Information Density
(1:13:37) Personalization and Predicting Human Behavior
(1:21:36) World Models, Uncertainty, and Professional Worrying
(1:24:07) Prediction Markets and Insider Betting
(1:26:33) Can LLMs Predict Stocks?
(1:29:11) Multi-Agent Systems for Financial Decisions
Music:
- "Kid Kodi" - Blue Dot Sessions - via Free Music Archive - CC BY-NC 4.0.
- "Palms Down" - Blue Dot Sessions - via Free Music Archive - CC BY-NC 4.0. Changes: trimmed.
About: The Information Bottleneck is hosted by Ravid Shwartz-Ziv and Allen Roush, featuring in-depth conversations with leading AI researchers about the ideas shaping the future of machine learning.