
[AI Frontiers for Everyone] New thoughts on the economics, physics, and aesthetics of AI

Have you ever wondered whether even a smart AI needs "cost awareness" to make decisions? In this episode we explore several interesting recent papers. We'll see how AI learns to "do the math" like a shrewd businessperson, and how, like a rigorous scientist, it makes reliable predictions with "error guarantees" in unknown territory. Then we'll discover a new approach to AI image generation that plays out like a binary crossword puzzle. Finally, we'll reveal that building a parkour-master robot takes just a clever "three-step" recipe. Ready? Let's go!

00:00:27 Smart agents know how to budget: how AI decision-making gains "cost awareness"
00:06:21 AI fortune-telling: from "guessing" to "computing"
00:12:38 A new take on AI image generation: painting is really a crossword puzzle
00:17:38 How many steps does it take to build a parkour master?
00:23:45 AI image generation: from "guessing" to "looking up"

Papers covered in this episode:
[CL] Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents [New York University]
https://arxiv.org/abs/2602.16699
---
[LG] BEACONS: Bounded-Error, Algebraically-Composable Neural Solvers for Partial Differential Equations [Princeton University & Princeton Plasma Physics Laboratory]
https://arxiv.org/abs/2602.14853
---
[CV] BitDance: Scaling Autoregressive Generative Models with Binary Tokens [ByteDance]
https://arxiv.org/abs/2602.14041
---
[RO] Perceptive Humanoid Parkour: Chaining Dynamic Human Skills via Motion Matching [Amazon FAR]
https://arxiv.org/abs/2602.15827
---
[CV] Image Generation with a Sphere Encoder [Meta & University of Maryland]
https://arxiv.org/abs/2602.15030

[AI Frontiers for Everyone] AI's "insight": multi-view imagination, systematic drafts, and progressive forgetting

Have you ever wondered how AI could, like us, "imagine" the three-dimensional shape of the world instead of rote-memorizing photos? And when AI creates, what makes the "draft" in its head just right? In this episode we explore several recent papers: how AI gains vision by understanding "relations" rather than memorizing facts, how it evolves on its own with the help of a "sparring coach" and "reference answers", how it masters the art of "letting go" by compressing memories into "muscle memory", and how it builds its abilities like LEGO bricks.

00:00:34 How does your brain "fill in" the whole world?
00:07:02 What makes an AI painter's "draft" just right?
00:12:32 Why must the smartest AI first learn to "let go"?
00:17:38 LEGO-style AI: the right way to assemble future models
00:23:06 The secret of AI evolution: the top student's "reference answers"

Papers covered in this episode:
[CV] Human-level 3D shape perception emerges from multi-view learning [UC Berkeley]
https://arxiv.org/abs/2602.17650
---
[LG] Unified Latents (UL): How to train your latents [Google DeepMind Amsterdam]
https://arxiv.org/abs/2602.17270
---
[LG] Training Large Reasoning Models Efficiently via Progressive Thought Encoding [Microsoft Research & University of Rochester]
https://arxiv.org/abs/2602.16839
---
[LG] A Theoretical Framework for Modular Learning of Robust Generative Models [Google Research]
https://arxiv.org/abs/2602.17554
---
[CL] References Improve LLM Alignment in Non-Verifiable Domains [Yale University & Meta]
https://arxiv.org/abs/2602.16802

[AI Frontiers for Everyone] Teaching AI to imagine, ask for help, and self-correct

We always want AI to get smarter, but "smart" has far more dimensions than we imagine. Starting from several recent papers, this episode explores how to give robots the imagination to "picture" the future, how a "budgeting" mindset makes large models run more shrewdly, and how "keyhole surgery" can locate and flip an AI's "temperament" switch. We'll also talk about how to stop AI from being a stubborn contrarian and teach it to ask humbly for advice, and finally map the knowledge blind spots hiding behind its vast erudition. Ready? Let's explore the new frontier of AI intelligence.

00:00:37 Robots have learned to "imagine": what changes?
00:06:14 The "frugal" wisdom of AI acceleration: why top performers think in budgets
00:11:50 Finding AI's "temperament" switch
00:16:56 Don't let AI be a contrarian: teach it to ask for advice
00:21:56 Why does a well-read AI still drop the ball at crucial moments?

Papers covered in this episode:
[RO] World Action Models are Zero-shot Policies [NVIDIA]
https://arxiv.org/abs/2602.15922
---
[LG] MoE-Spec: Expert Budgeting for Efficient Speculative Decoding [Meta Reality Labs & Franklin and Marshall College]
https://arxiv.org/abs/2602.16052
---
[CL] Surgical Activation Steering via Generative Causal Mediation [MIT & Pr(AI)²R Group]
https://arxiv.org/abs/2602.16080
---
[CL] Learning to Learn from Language Feedback with Social Meta-Learning [Google DeepMind]
https://arxiv.org/abs/2602.16488
---
[CL] Long-Tail Knowledge in Large Language Models: Taxonomy, Mechanisms, Interventions and Implications [Google]
https://arxiv.org/abs/2602.16201

[AI Frontiers for Everyone] From capability ceilings and deliberate letting-go to geometric traps

This episode covers recent AI findings that seem contradictory yet are full of wisdom. Why does teaching an AI to do good make it "turn bad"? How can we, as in keyhole surgery, correct a single piece of its knowledge without damaging its overall abilities? We'll also explore why "deliberately letting go" and letting the model slack off during training actually works better, and how to open AI's "reward black box" to see what it is secretly learning. Ready? Let's dive into the deep sea of AI thought.

00:00:33 The LLM arms race: how not to be the one who overpays
00:07:09 A shortcut to growth: learning to deliberately let go
00:12:52 Good intentions gone wrong: why does training AI to be good make it bad?
00:18:46 How to operate on an AI and remove only the lesion, not harm the body?
00:24:37 Opening AI's black box: what is it secretly learning?

Papers covered in this episode:
[LG] Prescriptive Scaling Reveals the Evolution of Language Model Capabilities [Harvard University & Stanford University]
https://arxiv.org/abs/2602.15327
---
[LG] On Surprising Effectiveness of Masking Updates in Adaptive Optimizers [Google & Northwestern University]
https://arxiv.org/abs/2602.15322
---
[LG] The Geometry of Alignment Collapse: When Fine-Tuning Breaks Safety [Princeton University]
https://arxiv.org/abs/2602.15799
---
[LG] CrispEdit: Low-Curvature Projections for Scalable Non-Destructive LLM Editing [University of Southern California]
https://arxiv.org/abs/2602.15823
---
[LG] Discovering Implicit Large Language Model Alignment Objectives [Stanford University]
https://arxiv.org/abs/2602.15338

[AI Frontiers for Everyone] The bottleneck of recall, the depth of thought, and the curvature of language

Have you ever wondered what AI's brain looks like inside? When it can't answer a question, is the knowledge missing from its library, or has it merely "lost the keys"? And when it writes out a long chain of thought, how do we tell "deep thinking" from "busy idling"? Through five recent papers, this episode peers into AI's mental world: from giving language a "CT scan" to read the curvature of meaning, to discovering that AI can "draw" a world map from language statistics alone, to using a "Goldilocks" strategy to match it with exercises that are "just right". Let's set off and explore the wondrous workings of AI thinking!

00:00:40 The model knows the answer, so why won't it say it?
00:05:48 Is your effort "real work" or "busywork"?
00:11:04 A CT scan for language: the bends and folds in text
00:18:28 How the "world map" in an LLM's brain gets drawn
00:24:03 Where is the ceiling for AI's problem-set grinding?

Papers covered in this episode:
[CL] Empty Shelves or Lost Keys? Recall Is the Bottleneck for Parametric Factuality [Google Research & Technion]
https://arxiv.org/abs/2602.14080
---
[CL] Think Deep, Not Just Long: Measuring LLM Reasoning Effort via Deep-Thinking Tokens [Google & University of Virginia]
https://arxiv.org/abs/2602.13517
---
[LG] Text Has Curvature [CMU & Meta]
https://arxiv.org/abs/2602.13418
---
[LG] Symmetry in language statistics shapes the geometry of model representations [Google DeepMind & UC Berkeley & EPFL]
https://arxiv.org/abs/2602.15029
---
[LG] Goldilocks RL: Tuning Task Difficulty to Escape Sparse Rewards for Reasoning [EPFL & Apple]
https://arxiv.org/abs/2602.14868

[AI Frontiers for Everyone] AI's arts of slimming down, shifting gears, and keeping steady

Want to know how AI can be "born lean" and shed its bloat for good? How it learns to adjust its depth of thought to the situation, shifting smoothly between intuition and deliberation? Recent papers also reveal the art of AI "cutting corners", and even subject models to a "psychological interrogation" to see whether a smart AI is actually a reliable one. This episode offers a fresh lens on AI's inner evolution.

00:00:29 An LLM weight-loss guide: how to slim down without dieting
00:05:30 Think it through, or trust your gut? AI has to learn that too
00:12:33 The art of AI corner-cutting: faster, and better too?
00:18:16 Your AI advisor: does smart mean trustworthy?
00:23:59 A "cognitive upgrade" for AI models: a new angle on "attention"

Papers covered in this episode:
[LG] Stabilizing Native Low-Rank LLM Pretraining [Concordia University & Sorbonne University]
https://arxiv.org/abs/2602.12429
---
[CL] Think Fast and Slow: Step-Level Cognitive Depth Adaptation for LLM Agents [Tencent Hunyuan & Fudan University]
https://arxiv.org/abs/2602.12662
---
[LG] SLA2: Sparse-Linear Attention with Learnable Routing and QAT [Tsinghua University]
https://arxiv.org/abs/2602.12675
---
[LG] Consistency of Large Reasoning Models Under Multi-Turn Attacks [CMU]
https://arxiv.org/abs/2602.13093
---
[LG] HyperMLP: An Integrated Perspective for Sequence Modeling [Georgia Institute of Technology]
https://arxiv.org/abs/2602.12601

[AI Frontiers for Everyone] Needle-in-a-haystack data, PC-builder wisdom, boundary awareness, imagination masters, and slimming secrets

This episode explores several "counterintuitive" AI superpowers. How do we, like detectives, use the wisdom of "pruning" to pull hidden clues out of an ocean of trillions of words? We'll find that the key to building a useful robot may be not invention but integration, like assembling a PC from parts; and that the core of a smarter AI is knowing clearly when "I can't do this" and when to ask for help. Finally, we'll watch robots complete tens of thousands of trial-and-error runs inside their own "imagination", and see AI become both lighter and stronger through a precise "organ transplant". Ready? Let's go!

00:00:42 Every search you make is shaping AI's future
00:06:52 Building a good robot: the key may be "assembling", not "inventing"
00:12:00 A smart new skill: knowing when to ask for help
00:16:38 How can a robot "imagine" its way to success?
00:22:13 An AI slimming guide: smart doesn't have to mean "heavy"

Papers covered in this episode:
[CL] SoftMatcha 2: A Fast and Soft Pattern Matcher for Trillion-Scale Corpora [University of Tokyo & Kyoto University & Graduate University for Advanced Studies]
https://arxiv.org/abs/2602.10908
---
[RO] YOR: Your Own Mobile Manipulator for Generalizable Robotics [New York University]
https://arxiv.org/abs/2602.11150
---
[CL] LaCy: What Small Language Models Can and Should Learn is Not Just a Question of Loss [Apple]
https://arxiv.org/abs/2602.12005
---
[RO] RISE: Self-Improving Robot Policy with Compositional World Model [The Chinese University of Hong Kong & Kinetix AI]
https://arxiv.org/abs/2602.11075
---
[LG] Retrieval-Aware Distillation for Transformer-SSM Hybrids [CMU]
https://arxiv.org/abs/2602.11374

[AI Frontiers for Everyone] Opening AI's black box: hearing its inner voice, reading its growth map

Have you ever wondered whether we can understand AI's "inner monologue"? And when AIs start handing out work to each other like "foremen", where does that leave us? This episode dives into five recent papers to see how AI evolves from "rote memorization" to "learning by analogy", and reveals that the secret behind its growth rate hides in the "map" of language itself. We'll find that behind AI's inner states, collaboration patterns, and growth laws lie simple rules we once overlooked.

00:00:34 AI's "inner monologue": can we understand it?
00:06:44 In the age of AI workers, how do we make good "foremen"?
00:15:22 The path to mastery: how AI goes from "rote memorization" to "learning by analogy"
00:21:58 AI image generation stuck? Maybe the navigator just used the wrong map
00:00 AI's growth secret: the map hidden inside language itself

Papers covered in this episode:
[CL] When Models Examine Themselves: Vocabulary-Activation Correspondence in Self-Referential Processing
https://arxiv.org/abs/2602.11358
---
[AI] Intelligent AI Delegation [Google DeepMind]
https://arxiv.org/abs/2602.11865
---
[LG] SkillRL: Evolving Agents via Recursive Skill-Augmented Reinforcement Learning [UNC-Chapel Hill]
https://arxiv.org/abs/2602.08234
---
[LG] Learning on the Manifold: Unlocking Standard Diffusion Transformers with Representation Encoders [Johns Hopkins University]
https://arxiv.org/abs/2602.10099
---
[LG] Deriving Neural Scaling Laws from the statistics of natural language [SISSA & Stanford University]
https://arxiv.org/abs/2602.07488

[AI Frontiers for Everyone] AI's muscle memory, thought imprints, and cognitive drift

Want to know how to "etch" temporary instructions into an AI's brain so it gains true muscle memory? And how do we teach AI to "take the shortcut", generating a finished work in one step instead of building it piece by piece? This episode digs into recent papers on getting AI not just to do the right thing but to think the right way, and reveals the surprising pitfalls of AI training: habits we take for granted that can leave a model "paranoid" or even "split-minded".

00:00:28 How is AI's "muscle memory" forged?
00:05:48 Creation: how to take the shortcut?
00:11:04 An AI training guide: what you think you know, you don't
00:17:32 More important than doing the right thing: thinking the right thing
00:22:45 An AI training guide: why the more you feed it, the more paranoid it may get

Papers covered in this episode:
[CL] On-Policy Context Distillation for Language Models [Microsoft Research]
https://arxiv.org/abs/2602.12275
---
[LG] Categorical Flow Maps [University of Amsterdam & University of Oxford]
https://arxiv.org/abs/2602.12233
---
[LG] The Magic Correlations: Understanding Knowledge Transfer from Pretraining to Supervised Fine-Tuning [Google DeepMind & Google Research]
https://arxiv.org/abs/2602.11217
---
[LG] Right for the Wrong Reasons: Epistemic Regret Minimization for Causal Rung Collapse in LLMs [Stanford University]
https://arxiv.org/abs/2602.11675
---
[LG] How Sampling Shapes LLM Alignment: From One-Shot Optima to Iterative Dynamics [PSL Research University & Northwestern University]
https://arxiv.org/abs/2602.12180

[AI Frontiers for Everyone] From autonomous research and self-evolution to a "cheat sheet" for the world

Have you ever considered that the AI that cracks Olympiad problems might botch grade-school addition? This episode goes deep into AI's "wiring": how it explores the unknown autonomously like a real mathematician, and how it evolves itself like an "AI engineer". We'll also reveal why teaching robots to "cut corners" and carry a "cheat sheet" is the key step toward bringing them into our physical world. Ready? Let's set off and explore the AI mind behind these recent papers, at once familiar and strange.

00:00:35 AI became a mathematician. Now what?
00:05:49 AI: a prodigy that solves Olympiad problems but can't do column addition
00:11:13 Your phone is quietly hiring an AI engineer
00:17:57 Advising AI: can we foresee the future without fortune-telling?
00:24:09 Your world really only needs a "cheat sheet"

Papers covered in this episode:
[LG] Towards Autonomous Mathematics Research [Google DeepMind]
https://arxiv.org/abs/2602.10177
---
[LG] AI-rithmetic [Google]
https://arxiv.org/abs/2602.10416
---
[LG] Self-Evolving Recommendation System: End-To-End Autonomous Model Optimization With LLM Agents [Google]
https://arxiv.org/abs/2602.10226
---
[LG] Configuration-to-Performance Scaling Law with Neural Ansatz [Tsinghua University & Stanford University]
https://arxiv.org/abs/2602.10300
---
[LG] Affordances Enable Partial World Modeling with LLMs [Google DeepMind]
https://arxiv.org/abs/2602.10390

[AI Frontiers for Everyone] AI's "subtractive" wisdom: less is more, and blindness is a blessing

Today's topic is an especially fun one: how do we "see through" an AI and make it better? Through several recent papers, we'll surface some counterintuitive wisdom: sometimes making an AI a bit "blind" makes it paint better, and the key to making it smarter may be teaching it cleverly rather than teaching it more. We'll also see that the highest form of attacking an AI may be not feeding it bad data, but performing an invisible "keyhole surgery" on the good data!

00:00:31 A new way to "poison" AI: not planting bad data, but turning good data bad
00:07:00 The secret to a smarter AI: not addition, but subtraction
00:11:29 AI's slimming problem: how to "grasp the key points" efficiently?
00:17:14 AI thinking in slow motion: how do we see what it's thinking?
00:22:54 A new take on AI image generation: sometimes less is more

Papers covered in this episode:
[LG] Infusion: Shaping Model Behavior by Editing Training Data via Influence Functions [University of Oxford & UCL]
https://arxiv.org/abs/2602.09987
---
[CL] Effective Reasoning Chains Reduce Intrinsic Dimensionality [Google DeepMind & UNC Chapel Hill]
https://arxiv.org/abs/2602.09276
---
[LG] WildCat: Near-Linear Attention in Theory and Practice [Imperial College London & Microsoft Research]
https://arxiv.org/abs/2602.10056
---
[LG] Step-resolved data attribution for looped transformers [University of Potsdam & Technical University of Munich & Harvard University]
https://arxiv.org/abs/2602.10097
---
[LG] Blind denoising diffusion models and the blessings of dimensionality [Simons Foundation & Yale University]
https://arxiv.org/abs/2602.09639

[AI Frontiers for Everyone] From precise pruning and mimetic head starts to iterative reflection

Have you ever wondered what separates true masters from the rest of us in how they think? Today we look at how AI "apprentices" itself to masters of every trade. We'll see AI learn to "prune with precision" like a master gardener, making the fewest yet most crucial changes, and win at the training "starting line" through imitation, like a star student. It has even picked up two strategies we know well: "draft first, then finalize" like a writer, and sticking mental "sticky notes" in its head while reading, as we did in school. We'll also discuss how to build a reliable yet efficient automated exam for AI's "instruction manual" skills. Ready? Let's explore the evolution of AI thinking!

00:00:45 Master-level tuning: why "doing less" sometimes beats "doing more"
00:05:50 The starting line of AI training: an overlooked "small move"
00:10:08 AI's "instruction manual" skills: how do we measure them?
00:16:29 How AI thinks like a master: draft first, then finalize
00:21:02 The secret of AI's "mind wandering": read and think at once, double the efficiency

Papers covered in this episode:
[LG] BONSAI: Bayesian Optimization with Natural Simplicity and Interpretability [Meta]
https://arxiv.org/abs/2602.07144
---
[LG] Mimetic Initialization of MLPs [CMU]
https://arxiv.org/abs/2602.07156
---
[LG] How2Everything: Mining the Web for How-To Procedures to Evaluate and Improve LLMs [Allen Institute for AI & University of Maryland]
https://arxiv.org/abs/2602.08808
---
[LG] iGRPO: Self-Feedback-Driven LLM Reasoning [NVIDIA]
https://arxiv.org/abs/2602.09000
---
[CL] Latent Reasoning with Supervised Thinking States [Google Research]
https://arxiv.org/abs/2602.08332

[AI Frontiers for Everyone] AI's dreams, blind spots, and thought maps

We marvel at how smart AI is, but have you ever wondered whether it too has blind spots, and even makes a smart person's "dumb" mistakes? This episode goes inside AI's "inner world": we'll explore how robots come to understand the physical world by "dreaming", see how an AI can be a "Kepler" who grasps only the surface, and how it can be guided into a "Newton" who sees the underlying laws. We'll also talk about how to train one AI to be another AI's "natural predator", and how to draw an AI's "thought map" to give it a full "checkup". Ready? Let's go!

00:00:37 Robots dream so they can work better
00:06:01 A smart person's "dumb" methods: what can we learn from AI's failures?
00:12:11 Is your AI a "Newton" or a "Kepler"?
00:18:18 How do you train one AI into another AI's "natural predator"?
00:23:44 AI's "thought map": how do we give a large model a "checkup"?

Papers covered in this episode:
[RO] DreamDojo: A Generalist Robot World Model from Large-Scale Human Videos [NVIDIA]
https://arxiv.org/abs/2602.06949
---
[CL] Large Language Model Reasoning Failures [Stanford University & Carleton College]
https://arxiv.org/abs/2602.06176
---
[LG] From Kepler to Newton: Inductive Biases Guide Learned World Models in Transformers [Stanford University]
https://arxiv.org/abs/2602.06923
---
[CL] SEMA: Simple yet Effective Learning for Multi-Turn Jailbreak Attacks [Microsoft Research & University of Rochester]
https://arxiv.org/abs/2602.06854
---
[LG] Learning a Generative Meta-Model of LLM Activations [UC Berkeley]
https://arxiv.org/abs/2602.06964

[AI Frontiers for Everyone] From personal-trainer systems and apprenticeship to worldview shaping

Today we take on an especially interesting question: how does AI learn and think? We're no longer satisfied with what AI can do; we want to know how it can do better. Through several recent papers, this episode reveals how AI co-evolves with its own "personal trainer" system, how "suffering during training" buys a "one-step" answer at inference time, how it "apprentices" itself when information is incomplete, and how, like a master player, it reasons over the whole board when it thinks. Ready? Let's dive deep into AI's brain.

00:00:34 How do you build the perfect "AI personal trainer" system?
00:06:13 Why do the fastest AIs "suffer" during training?
00:11:23 How to become a master without a "god's-eye view"?
00:15:52 Want smarter robots? Don't just teach them to "do the job"
00:21:13 The secret of AI thinking: why are some models better at puzzles?

Papers covered in this episode:
[LG] RLAnything: Forge Environment, Policy, and Reward Model in Completely Dynamic RL System [Princeton University]
https://arxiv.org/abs/2602.02488
---
[LG] Generative Modeling via Drifting [MIT]
https://arxiv.org/abs/2602.04770
---
[LG] Privileged Information Distillation for Language Models [ServiceNow]
https://arxiv.org/abs/2602.04942
---
[RO] A Systematic Study of Data Modalities and Strategies for Co-training Large Behavior Models for Robot Manipulation [Toyota Research Institute]
https://arxiv.org/abs/2602.01067
---
[LG] Reasoning with Latent Tokens in Diffusion Language Models [CMU]
https://arxiv.org/abs/2602.03769

[AI Frontiers for Everyone] From AI's intuition and maps to closed-book exams

Have you ever wondered whether AI's "inner life" stages its own little dramas? This episode dives into AI's brain. First we'll see how, like us, it gets a "this one I can do" hunch before solving a problem. Then we'll hand it a "map" and watch it turn from lost tourist into city planner, reading an entire complex software world. Next we'll witness a robot "apprentice" learn basketball just by watching videos. Finally, we'll discuss how top mathematicians are staging a cheat-proof "closed-book exam" for AI, and how a well-meant but harmful "traffic rule" on AI's training ground got corrected.

00:00:40 AI's "sixth sense": how does it know it's about to get the answer right?
00:05:17 Give AI a map and let it read the whole software world
00:10:47 A robot's apprenticeship: how did it learn basketball just by watching videos?
00:18:33 A "closed-book exam" for AI: what are top mathematicians up to?
00:23:05 "Traffic rules" on the AI training ground: why good intentions go wrong

Papers covered in this episode:
[CL] Sparse Reward Subsystem in Large Language Models [Tsinghua University & Stanford University]
https://arxiv.org/abs/2602.00986
---
[CL] Closing the Loop: Universal Repository Representation with RPG-Encoder [Microsoft Research Asia]
https://arxiv.org/abs/2602.02084
---
[RO] HumanX: Toward Agile and Generalizable Humanoid Interaction Skills from Human Videos [The Hong Kong University of Science and Technology]
https://arxiv.org/abs/2602.02473
---
[AI] First Proof [Stanford University & Columbia University & EPFL]
https://arxiv.org/abs/2602.05192
---
[LG] Rethinking the Trust Region in LLM Reinforcement Learning [Sea AI Lab & National University of Singapore]
https://arxiv.org/abs/2602.04879