00:00:27 When masters spar: copy the answer key or explore on your own?
00:04:38 Does AI also "chase the new and forget the old"? How the best models manage to learn without forgetting
00:09:36 Teaching AI to "fill in the blanks": the secret behind a 5x speedup
00:13:45 AI's "lightbulb moment": how do you teach a machine to reason by analogy?
00:17:53 Too clever for its own good? AI's "dumber" approach
The five papers covered in this episode:
[LG] Towards a Unified View of Large Language Model Post-Training
[Tsinghua University]
---
[LG] RL's Razor: Why Online Reinforcement Learning Forgets Less
[MIT]
---
[LG] Set Block Decoding is a Language Model Inference Accelerator
[FAIR at Meta]
---
[LG] ArcMemo: Abstract Reasoning Composition with Lifelong LLM Memory
[University of California, San Diego]
---
[LG] Learning When to Plan: Efficiently Allocating Test-Time Compute for LLM Agents
[University College London & University of Oxford]
![[Plain-Language AI] Five lessons: from rote memorization to reasoning by analogy](https://image.xyzcdn.net/FuDP4HpAp8ezgVZMmEel3mblKCmJ.jpg@small)