Have you ever wondered what AI should do once it has finished every practice workbook in existence? And what wisdom might be hiding in the "discarded drafts" it throws away? In this episode, we explore how AI, like an alchemist, turns ordinary material into gold, learns to foresee ten thousand risks from a mere hundred attempts, and opens up the well-hidden "toolbox" it reaches for when it thinks, to see how it has learned to "cut corners" intelligently.
00:00:27 Giving AI a practice workbook it can never finish
00:05:50 The shortcut to wisdom hidden in AI's "discarded drafts"
00:11:28 The secret of how large models think: how many tricks do they have up their sleeve?
00:16:01 How a hundred attempts can foresee ten thousand of AI's risks
00:20:50 AI's "cost cutting and efficiency boosting": a clever way to cut corners
Papers covered in this episode:
[LG] Golden Goose: A Simple Trick to Synthesize Unlimited RLVR Tasks from Unverifiable Internet Text
[NVIDIA]
---
[CL] Residual Context Diffusion Language Models
[UC Berkeley]
---
[CL] Context Structure Reshapes the Representational Geometry of Language Models
[Google DeepMind]
---
[LG] Statistical Estimation of Adversarial Risk in Large Language Models under Best-of-N Sampling
[Microsoft Research]
---
[LG] EUGens: Efficient, Unified, and General Dense Layers
[Seoul National University]
![[AI for Everyone] More Than One Trick Up Its Sleeve: Decoding AI's Thinking Toolbox and Risk Microscope](https://image.xyzcdn.net/FuDP4HpAp8ezgVZMEel3mblKCmJ.jpg@small)