00:00:33 The gap between experts and everyone else lies in how the "memory budget" is allocated
00:04:16 AI as "Newton": how do we discover the formulas by which everything grows?
00:08:26 Making AI not just follow instructions, but learn to ask questions
00:12:09 The art of AI thinking: how to be both fast and good?
00:18:06 AI reads people: 20 games of chess are enough to "see through" you
The five papers covered in this episode:
[LG] Capacity-Constrained Continual Learning
[Google DeepMind]
---
[LG] EvoSLD: Automated Neural Scaling Law Discovery With Large Language Models
[Peking University & Tsinghua University]
---
[LG] Teaching Language Models To Gather Information Proactively
[Microsoft]
---
[LG] TriangleMix: A Lossless and Efficient Attention Pattern for Long Context Prefilling
[Microsoft Research]
---
[LG] Learning to Imitate with Less: Efficient Individual Behavior Modeling in Chess
[University of Toronto]
![[AI for Everyone] AI Evolution: From Memory Budgets to the Art of Thinking](https://image.xyzcdn.net/FuDP4HpAp8ezgVZMmEel3mblKCmJ.jpg@small)