EP14: AI News and Papers
In this episode, we talked about AI news and recent papers. We explored the complexities of using AI models in healthcare (the Nature Medicine paper on GPT-5's fragile intelligence in medical contexts). We discussed the balance between leveraging LLMs as powerful research tools and guarding against over-reliance, touching on hallucinations, disagreements among medical practitioners, and the need for better education on responsible AI use in healthcare.
We also talked about Stanford's "Cartridges" paper, which presents an innovative approach to long-context language models. The paper tackles the steep computational cost of billion-token context windows by compressing KV caches through a clever "self-study" method using synthetic question-answer pairs and context distillation. We discussed the implications for personalization, composability, and making long-context models more practical.
Additionally, we explored the "Continuous Autoregressive Language Models" paper and touched on insights from the Smol Training Playbook.
Papers discussed:
- The fragile intelligence of GPT-5 in medicine: https://www.nature.com/articles/s41591-025-04008-8
- Cartridges: Lightweight and general-purpose long context representations via self-study: https://arxiv.org/abs/2506.06266
- Continuous Autoregressive Language Models: https://arxiv.org/abs/2510.27688
- The Smol Training Playbook: https://huggingface.co/spaces/HuggingFaceTB/smol-training-playbook
Music:
“Kid Kodi” — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0.
“Palms Down” — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0.
This is an experimental format for us, just news and papers without a guest interview. Let us know what you think!