#490 – State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI
Lex Fridman Podcast

Nathan Lambert and Sebastian Raschka are machine learning researchers, engineers, and educators. Nathan is the post-training lead at the Allen Institute for AI (Ai2) and the author of The RLHF Book. Sebastian Raschka is the author of Build a Large Language Model (From Scratch) and Build a Reasoning Model (From Scratch).
Thank you for listening ❤ Check out our sponsors: lexfridman.com
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

Transcript:
lexfridman.com

CONTACT LEX:
Feedback – give feedback to Lex: lexfridman.com
AMA – submit questions, videos or call-in: lexfridman.com
Hiring – join our team: lexfridman.com
Other – other ways to get in touch: lexfridman.com

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Box: Intelligent content management platform.
Go to box.com
Quo: Phone system (calls, texts, contacts) for businesses.
Go to quo.com
UPLIFT Desk: Standing desks and office ergonomics.
Go to upliftdesk.com
Fin: AI agent for customer service.
Go to fin.ai
Shopify: Sell stuff online.
Go to shopify.com
CodeRabbit: AI-powered code reviews.
Go to coderabbit.ai
LMNT: Zero-sugar electrolyte drink mix.
Go to drinkLMNT.com
Perplexity: AI-powered answer engine.
Go to perplexity.ai

OUTLINE:
(00:00) – Introduction
(01:39) – Sponsors, Comments, and Reflections
(16:29) – China vs US: Who wins the AI race?
(25:11) – ChatGPT vs Claude vs Gemini vs Grok: Who is winning?
(36:11) – Best AI for coding
(43:02) – Open Source vs Closed Source LLMs
(54:41) – Transformers: Evolution of LLMs since 2019
(1:02:38) – AI Scaling Laws: Are they dead or still holding?
(1:18:45) – How AI is trained: Pre-training, Mid-training, and Post-training
(1:51:51) – Post-training explained: Exciting new research directions in LLMs
(2:12:43) – Advice for beginners on how to get into AI development & research
(2:35:36) – Work culture in AI (72+ hour weeks)
(2:39:22) – Silicon Valley bubble
(2:43:19) – Text diffusion models and other new research directions
(2:49:01) – Tool use
(2:53:17) – Continual learning
(2:58:39) – Long context
(3:04:54) – Robotics
(3:14:04) – Timeline to AGI
(3:21:20) – Will AI replace programmers?
(3:39:51) – Is the dream of AGI dying?
(3:46:40) – How will AI make money?
(3:51:02) – Big acquisitions in 2026
(3:55:34) – Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta
(4:08:08) – Manhattan Project for AI
(4:14:42) – Future of NVIDIA, GPUs, and AI compute clusters
(4:22:48) – Future of human civilization

HD536909z · 18 hours ago
Wow, the ads run straight through the first 16 minutes.
KirstenYang: I thought the same when I listened this morning; it felt way too long, when does it get to the actual topic 😅