2024.10.03 Daily AI Papers | Hierarchical debugging improves code correctness; multimodal models advance image tasks.

14 min · 82 plays · 0 comments

The 20 papers in this episode:

00:23 🐞 From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging

01:08 📄 LEOPARD: A Vision Language Model For Text-Rich Multi-Image Tasks

01:48 📊 Is Preference Alignment Always the Best Option to Enhance LLM-Based Translation? An Empirical Analysis

02:27 🖼 ComfyGen: Prompt-Adaptive Workflows for Text-to-Image Generation

03:08 🧠 RATIONALYST: Pre-training Process-Supervision for Improving Reasoning

03:45 🧠 Not All LLM Reasoners Are Created Equal

04:18 📊 Quantifying Generalization Complexity for Large Language Models

04:59 🔍 3DGS-DET: Empower 3D Gaussian Splatting with Boundary Guidance and Box-Focused Sampling for 3D Object Detection

05:45 🔄 HelpSteer2-Preference: Complementing Ratings with Preferences

06:25 🗣 MOSEL: 950,000 Hours of Speech Data for Open-Source Speech Foundation Model Training on EU Languages

07:03 🤖 Closed-loop Long-horizon Robotic Planning via Equilibrium Sequence Modeling

07:40 🌐 EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis

08:22 📄 FactAlign: Long-form Factuality Alignment of Large Language Models

08:57 📹 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding

09:37 🌍 BordIRlines: A Dataset for Evaluating Cross-lingual Retrieval-Augmented Generation

10:13 🔊 SonicSim: A Customizable Simulation Platform for Speech Processing in Moving Sound Source Scenarios

10:53 🔄 HarmoniCa: Harmonizing Training and Inference for Better Feature Cache in Diffusion Transformer Acceleration

11:35 🔍 Selective Aggregation for Low-Rank Adaptation in Federated Learning

12:14 📚 Old Optimizer, New Norm: An Anthology

12:49 📱 InfiniPot: Infinite Context Processing on Memory-Constrained LLMs

【Follow Us】

You can also find us on the following platforms for more content beyond the podcast:

Xiaohongshu (小红书): AI速递