[For Everyone] AI's Inner Art: Less Is More, and Finesse Beats Brute Force

00:00:38 Trimming Down AI's Brain: A Clever Kind of "Lazy" Wisdom

00:04:34 Your Data Is Going on a "Diet": New Rules of Survival in the AI Era

00:09:23 AI's "Closed-Door Training": How Strong Can It Get With No Internet Access?

00:13:07 Fitting AI with a "Grammar" Navigation System

00:16:49 AI's "Synesthesia": Why "I Wrote It" and "I Am the Author" Mean the Same Thing

Papers covered in this episode:

[LG] XQuant: Breaking the Memory Wall for LLM Inference with KV Cache Rematerialization  

[UC Berkeley & FuriosaAI]  

arxiv.org

---

[LG] SoK: Data Minimization in Machine Learning  

[ETH Zurich]  

arxiv.org

---

[CL] SSRL: Self-Search Reinforcement Learning  

[Tsinghua University & Shanghai AI Laboratory]  

arxiv.org

---

[LG] Constrained Decoding of Diffusion LLMs with Context-Free Grammars  

[ETH Zurich]  

arxiv.org

---

[CL] A Rose by Any Other Name Would Smell as Sweet: Categorical Homotopy Theory for Large Language Models  

[Adobe Research]  

arxiv.org