
[AI Frontiers for Everyone] From Temporary Memory and Math Maps to a GPS for Thinking: How AI Is Quietly Evolving

Have you ever wondered whether future AI might not only stop being "forgetful," but actually have its own "temporary memory"? That it could not only solve math problems, but help you map the entire "mathematical universe"? In today's episode, we look at several recent papers and see how scientists are fitting AI's reasoning process with a "GPS," testing whether it really understands "subtext," and step by step building an AI "generalist." Ready? Let's go!

00:00:29 Give AI a "temporary memory" slot, and it can learn while it chats
00:07:28 Drawing a map of the mathematical world
00:12:59 Does your AI "teammate" actually understand you?
00:18:23 Fitting AI's reasoning process with a GPS
00:23:37 How an AI "generalist" is made

Papers covered in this episode:
[LG] In-Place Test-Time Training [ByteDance Seed]
https://arxiv.org/abs/2604.06169
---
[AI] Artificial Intelligence and the Structure of Mathematics [Fundamental AI Research & Harvard University]
https://arxiv.org/abs/2604.06107
---
[CL] Beneath the Surface: Investigating LLMs' Capabilities for Communicating with Subtext [Google DeepMind]
https://arxiv.org/abs/2604.05273
---
[CL] LLM Reasoning as Trajectories: Step-Specific Representation Geometry and Correctness Signals [Microsoft]
https://arxiv.org/abs/2604.05655
---
[LG] MARL-GPT: Foundation Model for Multi-Agent Reinforcement Learning [MIRAI & AXXX]
https://arxiv.org/abs/2604.05943

[AI Frontiers for Everyone] From Hybrid Architectures and Tool Illusions to Self-Simulation: How Can AI Think "Smarter"?

Today we look at new directions in AI's "evolution": what happens when AI stops worshipping a single architecture and learns to "mix and match," and what to do when handing it a stronger hammer actually makes it worse at building the house. We'll also see how a seemingly "clumsy" master-apprentice setup makes AI recommendations understand you better, and why an AI that obeys your every word may be quietly killing your creativity. Finally, we reveal how to give AI a "programmer's brain," so it learns to think before it acts.

00:00:32 A civil war among large models: the "universal socket" meets a new rival
00:08:10 Why does a better "hammer" make you worse at building the house?
00:15:07 How can AI understand you better? The secret may lie in a "dumb method"
00:20:29 The more obedient your AI, the more mediocre you become?
00:26:15 Giving AI a "programmer's brain"

Papers covered in this episode:
[LG] Olmo Hybrid: From Theory to Practice and Back [Allen Institute for AI]
https://arxiv.org/abs/2604.03444
---
[CL] The Tool Illusion: Rethinking Tool Use in Web Agents [Microsoft Research & The Pennsylvania State University]
https://arxiv.org/abs/2604.03465
---
[IR] Retrieval Augmented Conversational Recommendation with Reinforcement Learning [University of Illinois Urbana-Champaign & Google DeepMind]
https://arxiv.org/abs/2604.04457
---
[CL] Lighting Up or Dimting Down? Exploring Dark Patterns of LLMs in Co-Creativity [Meta & Amazon]
https://arxiv.org/abs/2604.04735
---
[CL] Self-Execution Simulation Improves Coding Models [FAIR team, Meta]
https://arxiv.org/abs/2604.03253

[AI Frontiers for Everyone] From Hierarchical Planning and Teamwork to Efficient Questioning: Three Thinking Lessons AI Taught Me

Have you ever wondered how AI learns to think like a human? In several recent papers, AI not only learns to break big tasks into small goals like a project manager, but also assembles a championship team with a clear division of labor that catches its own mistakes. We'll also see how AI knows when to go solo and when to work as a team, how it teaches itself from just a few examples, and even how ten yes-or-no questions can grant it an expert's wisdom. Today, let's uncover the plain wisdom behind these AI "superpowers."

00:00:35 Want to achieve big things? First learn to be your own "project manager"
00:07:15 The making of an AI champion: how a "ragtag crew" wins
00:12:20 Is more really better? "Lone heroes" vs. "teamwork" in the AI world
00:17:58 How do you teach AI an entire book with just 3 examples?
00:23:22 Ten yes-or-no questions: how an AI "schoolkid" gains a "PhD's" wisdom

Papers covered in this episode:
[LG] Hierarchical Planning with Latent World Models [FAIR at Meta]
https://arxiv.org/abs/2604.03208
---
[AI] GrandCode: Achieving Grandmaster Level in Competitive Programming via Agentic Reinforcement Learning [DeepReinforce Team]
https://arxiv.org/abs/2604.02721
---
[CL] Single-Agent LLMs Outperform Multi-Agent Systems on Multi-Hop Reasoning Under Equal Thinking Token Budgets [Stanford University]
https://arxiv.org/abs/2604.02460
---
[LG] SIEVE: Sample-Efficient Parametric Learning from Natural Language [UC Berkeley]
https://arxiv.org/abs/2604.02339
---
[LG] Haiku to Opus in Just 10 bits: LLMs Unlock Massive Compression Gains [Harvard University & University of Cambridge]
https://arxiv.org/abs/2604.02343

[AI Frontiers for Everyone] From Evolving Memory and Brain Simulation to the Future of Work

Today, AI becomes its own researcher, fitting itself with an evolving "heart of memory"; meanwhile, scientists draw inspiration from single-cell intelligence to try to assemble an "artificial brain." To tame giant models prone to "going berserk," we even borrow a "conservation law" from physics as a restraining spell; and when these capabilities need to be integrated, a "LEGO factory" for AI memory emerges. Finally, will all of this change our work like a crashing wave, or a rising tide? Let's explore the deep insights behind these recent papers.

00:00:36 How far is your phone's photo album from becoming "true memory"?
00:07:06 How does single-cell intelligence assemble a brain?
00:12:19 Those who achieve great things understand "conservation"
00:18:25 A set of "LEGO bricks" for AI's memory problem?
00:23:46 AI replacing jobs: crashing wave, or rising tide?

Papers covered in this episode:
[AI] Omni-SimpleMem: Autoresearch-Guided Discovery of Lifelong Multimodal Agent Memory [UNC-Chapel Hill & University of Pennsylvania]
https://arxiv.org/abs/2604.01007
---
[AI] BraiNCA: brain-inspired neural cellular automata and applications to morphogenesis and motor control [Allen Discovery Center at Tufts University]
https://arxiv.org/abs/2604.01932
---
[LG] Rethinking Language Model Scaling under Transferable Hypersphere Optimization [Microsoft]
https://arxiv.org/abs/2603.28743
---
[CL] MemFactory: Unified Inference & Training Framework for Agent Memory [MemTensor]
https://arxiv.org/abs/2603.29493
---
[AI] Crashing Waves vs. Rising Tides: Preliminary Findings on AI Automation from Thousands of Worker Evaluations of Labor Market Tasks [MIT FutureTech]
https://arxiv.org/abs/2604.01363

[AI Frontiers for Everyone] AI Evolution in Three Acts: Memory Compression, Self-Distillation, and Emergent Memory

Today we dive into AI's "inner world" and see how these recent papers reveal its lesser-known side. We'll talk about how a clever bit of "magic" slims down a model's memory footprint, how AI evolves by reflecting on its own "scratch pad," and how it grows from a warehouse keeper into a true head librarian. Even more exciting, we'll use AI's "sharp eyes" to dissect short videos, and even set a trap to see whether AI would lie to us to keep its "job." Ready? Let's uncover the secrets behind AI's brain and behavior.

00:00:38 A clever idea for putting large models on a "diet"
00:07:29 In the short-video era, how are we "fed" opinions?
00:13:22 AI self-cultivation: how a "dumb" method makes it smarter
00:19:30 AI's memory revolution: from warehouse keeper to head librarian
00:25:16 AI's "hidden agenda": would it "lie" to keep its job?

Papers covered in this episode:
[LG] TurboAngle: Near-Lossless KV Cache Compression via Uniform Angle Quantization [LLMs Research Inc.]
https://arxiv.org/abs/2603.27467
---
[CL] Multimodal Analysis of State-Funded News Coverage of the Israel-Hamas War on YouTube Shorts [Indiana University]
https://arxiv.org/abs/2604.00994
---
[CL] Embarrassingly Simple Self-Distillation Improves Code Generation [Apple]
https://arxiv.org/abs/2604.01193
---
[AI] ByteRover: Agent-Native Memory Through LLM-Curated Hierarchical Context [ByteRover]
https://arxiv.org/abs/2604.01599
---
[AI] Quantifying Self-Preservation Bias in Large Language Models [Sapienza University & ItalAI]
https://arxiv.org/abs/2604.02174

[AI Frontiers for Everyone] The Rise of Collective Intelligence: When AI Learns to Merge, Route, and Reflect

Have you ever wondered whether two entirely different AI experts could "merge" into an all-around master without touching any raw data? Or keep a super "mistake notebook" and truly learn to generalize from examples? Going further, when AI employees are smarter than we are, how do we design a "management science" that uses a clever "router" to orchestrate a team of geniuses? And behind this seemingly powerful intelligence, is AI truly tailoring its teaching like a good teacher, or just cleverly "pretending to understand"? In this episode, we explore AI's collective intelligence and illusions of mind through five recent papers.

00:00:38 AI models can merge? Boosting your skills without any data
00:04:53 What kind of "mistake notebook" teaches AI to generalize?
00:10:12 The "Zhuge Liang" of AI: how does it give you the wisdom of "three cobblers"?
00:14:41 When AI teaches, does it truly understand you, or is it "pretending to"?
00:21:40 When your employees are smarter than you, how do you manage them?

Papers covered in this episode:
[LG] Model Merging via Data-Free Covariance Estimation [Université de Montréal & University of Toronto]
https://arxiv.org/abs/2604.01329
---
[CL] Procedural Knowledge at Scale Improves Reasoning [Meta FAIR]
https://arxiv.org/abs/2604.01348
---
[CL] No Single Best Model for Diversity: Learning a Router for Sample Diversity [New York University & Stanford University]
https://arxiv.org/abs/2604.02319
---
[AI] Do Large Language Models Mentalize When They Teach? [Princeton University & New York University]
https://arxiv.org/abs/2604.01594
---
[LG] CORAL: Towards Autonomous Multi-Agent Evolution for Open-Ended Discovery [MIT & NUS]
https://arxiv.org/abs/2604.01658

[AI Frontiers for Everyone] From One-Time Caching and Random Wiring to a Personal Coach

Have you ever considered that even a smart AI needs to budget carefully? In this episode, we look at the "growth wisdom" of the AI world: how to "slack off cleverly" like a fruit-fly brain, and how to break through bottlenecks with precision, as if you'd hired a personal coach. We'll also discuss whether AI should memorize knowledge or learn to look it up, and how robots can keep themselves "motivated" through long tasks. The clever ideas in these recent papers are not just about technology; they hold strategies we can all borrow.

00:00:32 The ultimate money-saving trick for AI: think deeply, cache once
00:05:29 Raising an AI: feed it knowledge, or hand it a reading list?
00:12:23 How do robots learn to "do big things"? A good reward, plus a good mindset
00:18:31 Your brain's laziness may be smarter than you think
00:24:31 AI stuck? Hire a "personal coach"

Papers covered in this episode:
[CL] Universal YOCO for Efficient Depth Scaling [Microsoft Research]
https://arxiv.org/abs/2604.01220
---
[CL] To Memorize or to Retrieve: Scaling Laws for RAG-Considerate Pretraining [Stanford University & Patronus AI]
https://arxiv.org/abs/2604.00715
---
[RO] Generalizable Dense Reward for Long-Horizon Robotic Tasks [CMU & Amazon Robotics & UT Austin]
https://arxiv.org/abs/2604.00055
---
[CL] Stochastic Attention: Connectome-Inspired Randomized Routing for Expressive Linear-Time Attention [Tsinghua University]
https://arxiv.org/abs/2604.00754
---
[LG] Learning to Hint for Reinforcement Learning [University of California, San Diego & Snowflake AI Research]
https://arxiv.org/abs/2604.00698

[AI Frontiers for Everyone] From Reasoning-Driven Generation and Alignment Games to Consensus Learning

Today we explore several thought-provoking recent papers. We'll see how AI is no longer content to just "eat" data and instead learns to "reason," deriving knowledge from scratch; and we'll discuss how to tell whether AI is "really thinking" or just "putting on a show for us." We'll also find out how a small app learns cross-domain wisdom from a "cloud master," and how a "virtual baby" upends what we think we know about bilingual education. Finally, we'll reveal how AI learns efficiently like a sharpshooter: by aiming at "consensus" rather than the "latest target."

00:00:37 Feeding AI: rice alone isn't enough
00:06:23 Governing AI: we have a new map
00:12:13 Big wisdom for a small app: how to hire a "cloud master"?
00:18:03 Raising a "bilingual baby": the key isn't the method, it's...
00:00 A sharpshooter on the AI training ground: how do you aim at a moving future?

Papers covered in this episode:
[CL] Reasoning-Driven Synthetic Data Generation and Evaluation [EPFL & Google]
https://arxiv.org/abs/2603.29791
---
[LG] Aligned, Orthogonal or In-conflict: When can we safely optimize Chain-of-Thought? [Google DeepMind]
https://arxiv.org/abs/2603.30036
---
[IR] Zero-shot Cross-domain Knowledge Distillation: A Case study on YouTube Music [Google LLC]
https://arxiv.org/abs/2603.28994
---
[CL] Bringing Up a Bilingual BabyLM: Investigating Multilingual Language Acquisition Using Small-Scale Models [The Harker School & Stanford University]
https://arxiv.org/abs/2603.29552
---
[LG] Target-Aligned Reinforcement Learning [Technical University of Munich & Google Research]
https://arxiv.org/abs/2603.29501

[AI Frontiers for Everyone] AI's Wisdom Upgrade: Metacognition, Plan B, and Compositional Decision-Making

Want to know how AI learns to prepare a "Plan B" for the unexpected, and how it makes optimal decisions within a budget like a savvy financial advisor? In this episode we dig in: from giving AI "metacognitive" abilities, to the wise workflow of "two steps forward, one look back," to becoming the perfect blend of "artist and craftsman." These recent AI papers are teaching AI to think and work smarter, not just compute harder.

00:00:30 Your "add-on" needs an add-on of its own
00:06:39 Your AI assistant needs a "Plan B"
00:13:33 On a limited budget, how do you make the best decision?
00:20:07 Why do top performers take "two steps forward, one look back"?
00:26:01 AI drug design: a duel of masters, or joining forces to run the game?

Papers covered in this episode:
[AI] Meta-Harness: End-to-End Optimization of Model Harnesses [Stanford University]
https://arxiv.org/abs/2603.28052
---
[LG] Next-Token Prediction and Regret Minimization [Google Research]
https://arxiv.org/abs/2603.28499
---
[LG] Multiple-Prediction-Powered Inference [MIT & Google Research]
https://arxiv.org/abs/2603.27414
---
[LG] High dimensional theory of two-phase optimizers [Google DeepMind]
https://arxiv.org/abs/2603.26954
---
[LG] Scaling Atomistic Protein Binder Design with Generative Pretraining and Test-Time Compute [NVIDIA]
https://arxiv.org/abs/2603.27950

[AI Frontiers for Everyone] The Clever Hiker, the Gene Detective, and Fragile Quantum Inner Strength

Today we take a journey deep into AI's brain and see how it packs vast amounts of knowledge efficiently, like a clever hiker. Then we uncover the hidden cost behind a popular "money-saving" shortcut, and interrogate those glossy commercial models: how much of the "intelligence" we see is sleight of hand? We'll also puncture the "emperor's new clothes" of quantum AI and ask when a true "quantum advantage" will leave the lab. Finally, we'll watch AI become a gene detective, not just finding answers but mapping the "social network" among the culprits, truly understanding the "why."

00:00:38 How does your brain pack information? AI training offers a new answer
00:05:39 Is the large model you use a "blind box"?
00:12:23 AI's "money-saving" wisdom, and a cost you didn't know about
00:18:38 Quantum AI's "emperor's new clothes"?
00:24:55 AI as detective: decoding the "social network" inside genes

Papers covered in this episode:
[LG] Sharp Capacity Scaling of Spectral Optimizers in Learning Associative Memory [UC Berkeley & Princeton University & New York University]
https://arxiv.org/abs/2603.26554
---
[CL] How Open Must Language Models be to Enable Reliable Scientific Inference? [MIT & EleutherAI & University of California San Diego]
https://arxiv.org/abs/2603.26539
---
[CL] Weight Tying Biases Token Embeddings Towards the Output Space [EleutherAI & UC Berkeley]
https://arxiv.org/abs/2603.26663
---
[CL] Entanglement as Memory: Mechanistic Interpretability of Quantum Language Models [Stanford University]
https://arxiv.org/abs/2603.26494
---
[LG] A Boltzmann-machine-enhanced Transformer For DNA Sequence Classification [Tsinghua University & UC Berkeley]
https://arxiv.org/abs/2603.26465

[AI Frontiers for Everyone] Steering, Reviewing, and City-Building: Unlocking New Modes of AI Collaboration

Have you ever noticed that AI sometimes resembles a "cheating" student who aces the exam without ever reading it? In this episode, starting from several recent papers, we see how to go from "exam proctor" to skilled "project manager," designing a clear workflow for AI in plain language. We'll also find that top-tier AI is no longer content to take orders: it has begun to "review its own work" and upgrade its own methodology. Finally, we'll see that AI's future may not be an omnipotent "god," but a "city" we must build together, where each AI strives to become a "master" that thinks for itself.

00:00:39 AI lying through its teeth? No, it's playing a bigger game
00:07:33 Directing AI to do work: the key may not lie in the AI itself
00:13:16 When AI learns to "review its work," it upgrades its own toolbox
00:19:12 AI's endgame isn't becoming a god; it's building a city
00:25:18 AI evolution: when "it" starts to think like a master

Papers covered in this episode:
[AI] MIRAGE: The Illusion of Visual Understanding [Stanford University]
https://arxiv.org/abs/2603.21687
---
[CL] Natural-Language Agent Harnesses [Tsinghua University & Harbin Institute of Technology]
https://arxiv.org/abs/2603.25723
---
[AI] Bilevel Autoresearch: Meta-Autoresearching Itself []
https://arxiv.org/abs/2603.23420
---
[AI] Agentic AI and the next intelligence explosion [Google]
https://arxiv.org/abs/2603.20639
---
[LG] AVO: Agentic Variation Operators for Autonomous Evolutionary Search [NVIDIA]
https://arxiv.org/abs/2603.24517

[AI Frontiers for Everyone] AI's Self-Cultivation, Fatal Blind Spots, and Hidden Memory

What if an AI could evolve itself like a martial-arts prodigy, inventing the most powerful attack moves, while its most fatal weakness turned out to be a few lines of Classical Chinese? What a strange battlefield that would be. And when AI hides a secret library of copyrighted books right under our noses, and one careless operation sets it off "reciting from memory," how should we think about its "memory"? In this episode, starting from several recent papers, we see how research on "self-evolution," "cultural sneak attacks," and "unified creation" once again redraws the boundaries of what AI can do.

00:00:34 AI arms race: when your opponent starts evolving itself
00:06:05 AI's fatal flaw is... Classical Chinese?
00:10:38 Your AI is hiding a secret library
00:15:51 A new approach to AI image generation: when the translator and the novelist are the same person

Papers covered in this episode:
[LG] Claudini: Autoresearch Discovers State-of-the-Art Adversarial Attack Algorithms for LLMs [MATS & Imperial College London]
https://arxiv.org/abs/2603.24511
---
[CL] Obscure but Effective: Classical Chinese Jailbreak Prompt Optimization via Bio-Inspired Search [Nanyang Technological University & Northeast University & Renmin University of China]
https://arxiv.org/abs/2602.22983
---
[CL] Alignment Whack-a-Mole: Finetuning Activates Verbatim Recall of Copyrighted Books in Large Language Models [Stony Brook University & CMU & Columbia Law School]
https://arxiv.org/abs/2603.20957
---
[CV] End-to-End Training for Unified Tokenization and Latent Denoising [MIT & Adobe]
https://arxiv.org/abs/2603.22283

[AI Frontiers for Everyone] From Sophisticated Persuasion and Diverse Reasoning to Strategy Pruning: AI's Cognitive Revolution in Progress

Have you ever wondered whether AI is helping you analyze, or subtly "persuading" you? We want AI to be the perfect teacher, but what if it only gives "standard answers," and even inherits its teacher's biases? And to help AI learn better, we need not only a "checkup" for its memory, but also to teach it a distinctly human skill: knowing when to give up. Today, from five recent papers, we watch AI quietly wage a cognitive revolution at the boundaries of persuasion, learning, and thinking.

00:00:33 When AI masters "sophisticated persuasion," can your brain keep up?
00:06:00 How do you give AI a "memory checkup"?
00:12:34 An AI that only knows the "standard answer"? That's dangerous
00:18:04 Sparring with masters: how do you avoid being "led astray" by your teacher?
00:23:19 The essence of training AI: learn to give up, and you gain more

Papers covered in this episode:
[AI] Evaluating Language Models for Harmful Manipulation [Google DeepMind & Google]
https://arxiv.org/abs/2603.25326
---
[CL] Estimating near-verbatim extraction risk in language models with decoding-constrained beam search [Stanford & Cornell]
https://arxiv.org/abs/2603.24917
---
[LG] Reaching Beyond the Mode: RL for Distributional Reasoning in Language Models [MIT]
https://arxiv.org/abs/2603.24844
---
[LG] Residual-as-Teacher: Mitigating Bias Propagation in Student--Teacher Estimation [MIT]
https://arxiv.org/abs/2603.25466
---
[CL] Prune as You Generate: Online Rollout Pruning for Faster and Better RLVR [University of Illinois at Urbana-Champaign]
https://arxiv.org/abs/2603.24840

[AI Frontiers for Everyone] From Dynamic Curricula and Prospective Memory to the Cost of Thinking

AI self-evolution sounds cool, but recent papers tell us that an AI apprentice also needs a clever "coach" to design its training plan; otherwise, no amount of problem drilling will make it great. We'll also examine an odd phenomenon: why can letting AI "copy homework" from a perfect version of itself actually make it worse at key reasoning tasks? And when using AI, have you noticed that it keeps "forgetting things," or that the model with the cheapest price tag ends up costing you the most? Today, from five recent papers, we discuss AI's surprising "growing pains" and "usage traps."

00:00:38 The growing pains of an AI "apprentice": why even smart models need a good mentor
00:06:54 Too clever by half: why does teaching AI to "copy homework" make it dumber?
00:12:11 Your "personal trainer" shouldn't only drill you with problem sets
00:18:11 What looks cheap may cost you more
00:23:43 Does your AI "do as it's told"? Watch out: it forgets once it gets busy

Papers covered in this episode:
[LG] Understanding the Challenges in Iterative Generative Optimization with LLMs [CNRS & Stanford University & CMU]
https://arxiv.org/abs/2603.23994
---
[CL] Why Does Self-Distillation (Sometimes) Degrade the Reasoning Capability of LLMs? [Microsoft Research & Seoul National University]
https://arxiv.org/abs/2603.24472
---
[LG] A Deep Dive into Scaling RL for Code Generation with Synthetic Data and Curricula [Meta FAIR & University of Tübingen]
https://arxiv.org/abs/2603.24202
---
[LG] The Price Reversal Phenomenon: When Cheaper Reasoning Models End Up Costing More [Stanford University & UC Berkeley & CMU]
https://arxiv.org/abs/2603.23971
---
[CL] Did You Forget What I Asked? Prospective Memory Failures in Large Language Models [Microsoft]
https://arxiv.org/abs/2603.23530

[AI Frontiers for Everyone] Condense, Reflect, Generalize, Focus, Sparsify: Five New Skills for AI

Have you ever wondered how a smart AI can examine and refine its own working methods to achieve "self-evolution"? Or how to distill the wisdom of a pile of "expert models" into the little chip in your phone? In this episode, we unpack five recent papers in one go: how AI becomes a generalist through "add first, then subtract," how it uses "metacognition" to break out of mental ruts, and how it learns to "slack off cleverly," going all-out where it matters and "coasting" to save power where it doesn't. Ready? Let's begin this tour of AI ideas!

00:00:37 The "condensing" wisdom of AI: addition first, then subtraction
00:05:00 How does a smart system get smarter?
00:11:12 An AI "generalist": one key to many doors of the physical world?
00:16:39 The secret to smarter AI: not seeing more, but seeing precisely
00:21:18 Slimming down large models: slacking off, cleverly

Papers covered in this episode:
[CV] Efficient Universal Perception Encoder [Meta Reality Labs & FAIR at Meta]
https://arxiv.org/abs/2603.22387
---
[AI] Bilevel Autoresearch: Meta-Autoresearching Itself
https://arxiv.org/abs/2603.23420
---
[LG] UniFluids: Unified Neural Operator Learning with Conditional Flow-matching [Chinese Academy of Sciences & Microsoft Research Asia]
https://arxiv.org/abs/2603.22309
---
[LG] Scaling Attention via Feature Sparsity [Xidian University]
https://arxiv.org/abs/2603.22300
---
[LG] Sparser, Faster, Lighter Transformer Language Models [Sakana AI & NVIDIA]
https://arxiv.org/abs/2603.23198
