
- “China’s digging out of a crisis. And America’s luck is wearing thin.” — Ken Rogoff
Ken Rogoff is the former chief economist of the IMF, a professor of Economics at Harvard, and the author of This Time Is Different and the newly released Our Dollar, Your Problem. On this episode, Ken predicts that, within the next decade, the US will have a debt-induced inflation crisis, but not a Japan-type financial crisis (the latter is much worse, and can make a country poorer for generations). Ken also explains how China is trapped: to solve its current problems, it will keep leaning on financial repression and state-directed investment, which only makes its situation worse. We also discuss the erosion of dollar dominance, why there will be a rebalancing toward foreign equities, how AGI will impact the deficit and interest rates, and much more!

Watch on YouTube; listen on Apple Podcasts or Spotify.

Sponsors

* WorkOS gives your product all the features that enterprise customers need, without derailing your roadmap. Skip months of engineering effort and start selling to enterprises today at workos.com.

* Scale is building the infrastructure for smarter, safer AI. In addition to their Data Foundry, they recently released Scale Evaluation, a tool that diagnoses model limitations. Learn how Scale can help you push the frontier at scale.com/dwarkesh.

* Gemini Live API lets you have natural, real-time interactions with Gemini. You can talk to it as if you were talking to another person, stream video to show it your surroundings, and share your screen to give it context. Try it now by clicking the “Stream” tab on ai.dev.

To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps

(00:00:00) – China is stagnating
(00:25:46) – How the US broke Japan’s economy
(00:37:06) – America’s inflation crisis is coming
(01:02:20) – Will AGI solve the US deficit?
(01:07:11) – Why interest rates will go up
(01:10:55) – US equities will underperform
(01:22:24) – The erosion of dollar dominance

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- How Does Claude 4 Think? — Sholto Douglas & Trenton Bricken
New episode with my good friends Sholto Douglas & Trenton Bricken. Sholto focuses on scaling RL and Trenton researches mechanistic interpretability, both at Anthropic. We talk through what’s changed in the last year of AI research; the new RL regime and how far it can scale; how to trace a model’s thoughts; and how countries, workers, and students should prepare for AGI. See you next year for v3. Here’s last year’s episode, btw. Enjoy!

Watch on YouTube; listen on Apple Podcasts or Spotify.

----------

SPONSORS

* WorkOS ensures that AI companies like OpenAI and Anthropic don't have to spend engineering time building enterprise features like access controls or SSO. It’s not that they don't need these features; it's just that WorkOS gives them battle-tested APIs that they can use for auth, provisioning, and more. Start building today at workos.com.

* Scale is building the infrastructure for safer, smarter AI. Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, while their public leaderboards help assess model capabilities. They also just released Scale Evaluation, a new tool that diagnoses model limitations. If you’re an AI researcher or engineer, learn how Scale can help you push the frontier at scale.com/dwarkesh.

* Lighthouse is THE fastest immigration solution for the technology industry. They specialize in expert visas like the O-1A and EB-1A, and they’ve already helped companies like Cursor, Notion, and Replit navigate U.S. immigration. Explore which visa is right for you at lighthousehq.com/ref/Dwarkesh.

To sponsor a future episode, visit dwarkesh.com/advertise.

----------

TIMESTAMPS

(00:00:00) – How far can RL scale?
(00:16:27) – Is continual learning a key bottleneck?
(00:31:59) – Model self-awareness
(00:50:32) – Taste and slop
(01:00:51) – How soon to fully autonomous agents?
(01:15:17) – Neuralese
(01:18:55) – Inference compute will bottleneck AGI
(01:23:01) – DeepSeek algorithmic improvements
(01:37:42) – Why are LLMs ‘baby AGI’ but not AlphaZero?
(01:45:38) – Mech interp
(01:56:15) – How countries should prepare for AGI
(02:10:26) – Automating white collar work
(02:15:35) – Advice for students

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- The Last Human CEO
Based on my essay about AI firms. Huge thanks to Petr and his team for bringing this to life!

Watch on YouTube.

Thanks to Google for sponsoring. We used their Veo 2 model to make this entire video—it generated everything from the photorealistic humans to the claymation octopuses. If you’re a Gemini Advanced user, you can try Veo 2 now in the Gemini app. Just select Veo 2 in the dropdown, and type your video idea in the prompt bar. Get started today by going to gemini.google.com.

To sponsor a future episode, visit dwarkesh.com/advertise.

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- Mark Zuckerberg — Meta's AGI Plan
Zuck on:

* Llama 4, benchmark gaming
* Intelligence explosion, business models for AGI
* DeepSeek/China, export controls, & Trump
* Orion glasses, AI relationships, and preventing reward-hacking from our tech

Watch on YouTube; listen on Apple Podcasts and Spotify.

----------

SPONSORS

* Scale is building the infrastructure for safer, smarter AI. Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, while their public leaderboards help assess model capabilities. They also just released Scale Evaluation, a new tool that diagnoses model limitations. If you’re an AI researcher or engineer, learn how Scale can help you push the frontier at scale.com/dwarkesh.

* WorkOS Radar protects your product against bots, fraud, and abuse. Radar uses 80+ signals to identify and block common threats and harmful behavior. Join companies like Cursor, Perplexity, and OpenAI that have eliminated costly free-tier abuse by visiting workos.com/radar.

* Lambda is THE cloud for AI developers, with over 50,000 NVIDIA GPUs ready to go for startups, enterprises, and hyperscalers. By focusing exclusively on AI, Lambda provides cost-effective compute supported by true experts, including a serverless API serving top open-source models like Llama 4 or DeepSeek V3-0324 without rate limits, and available for a free trial at lambda.ai/dwarkesh.

To sponsor a future episode, visit dwarkesh.com/p/advertise.

----------

TIMESTAMPS

(00:00:00) – How Llama 4 compares to other models
(00:11:34) – Intelligence explosion
(00:26:36) – AI friends, therapists & girlfriends
(00:35:10) – DeepSeek & China
(00:39:49) – Open source AI
(00:54:15) – Monetizing AGI
(00:58:32) – The role of a CEO
(01:02:04) – Is big tech aligning with Trump?
(01:07:10) – 100x productivity

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- Why Rome Actually Fell: Plagues, Slavery, & Ice Age — Kyle Harper
800 years before the Black Death, the very same bacteria ravaged Rome, killing 60%+ of the population in many areas. Also, back-to-back volcanic eruptions caused a mini Ice Age, leaving Rome devastated by famine and disease. I chatted with historian Kyle Harper about this and much else:

* Rome as a massive slave society
* Why humans are more disease-prone than other animals
* How agriculture made us physically smaller (Caesar at 5'5" was considered tall)

Watch on YouTube; listen on Apple Podcasts or Spotify.

----------

SPONSORS

* WorkOS makes it easy to become enterprise-ready. They have APIs for all the most common enterprise requirements—things like authentication, permissions, and encryption—so you can quickly plug them in and get back to building your core product. If you want to make your product enterprise-ready, join companies like Cursor, Perplexity and OpenAI, and head to workos.com.

* Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier of capabilities at scale.com/dwarkesh.

To sponsor a future episode, visit dwarkesh.com/advertise.

----------

KYLE'S BOOKS

* The Fate of Rome: Climate, Disease, and the End of an Empire
* Plagues upon the Earth: Disease and the Course of Human History
* Slavery in the Late Roman World, AD 275-425

----------

TIMESTAMPS

(00:00:00) - Plague's impact on Rome's collapse
(00:06:24) - Rome's little Ice Age
(00:11:51) - Why did progress stall in Rome's Golden Age?
(00:23:55) - Slavery in Rome
(00:36:22) - Was agriculture a mistake?
(00:47:42) - Disease's impact on cognitive function
(00:59:46) - Plague in India and Central Asia
(01:05:16) - The next pandemic
(01:16:48) - How Kyle uses LLMs
(01:18:51) - De-extinction of lost species

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- AGI is Still 30 Years Away — Ege Erdil & Tamay Besiroglu
Ege Erdil and Tamay Besiroglu have 2045+ timelines, think the whole "alignment" framing is wrong, don't think an intelligence explosion is plausible, but are convinced we'll see explosive economic growth (the economy literally doubling every year or two). This discussion offers a totally different scenario than my recent interview with Scott and Daniel. Ege and Tamay are the co-founders of Mechanize (disclosure - I’m an angel investor), a startup dedicated to fully automating work. Before founding Mechanize, Ege and Tamay worked on AI forecasts at Epoch AI.

Watch on YouTube; listen on Apple Podcasts or Spotify.

----------

Sponsors

* WorkOS makes it easy to become enterprise-ready. With simple APIs for essential enterprise features like SSO and SCIM, WorkOS helps companies like Vercel, Plaid, and OpenAI meet the requirements of their biggest customers. To learn more about how they can help you do the same, visit workos.com.

* Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn about how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh.

* Google's Gemini Pro 2.5 is the model we use the most at Dwarkesh Podcast: it helps us generate transcripts, identify interesting clips, and code up new tools. If you want to try it for yourself, it's now available in Preview with higher rate limits! Start building with it today at aistudio.google.com.

----------

Timestamps

(00:00:00) - AGI will take another 3 decades
(00:22:27) - Even reasoning models lack animal intelligence
(00:45:04) - Intelligence explosion
(01:00:57) - Ege & Tamay’s story
(01:06:24) - Explosive economic growth
(01:33:00) - Will there be a separate AI economy?
(01:47:08) - Can we predictably influence the future?
(02:19:48) - Arms race dynamic
(02:29:48) - Is superintelligence a real thing?
(02:35:45) - Reasons not to expect explosive growth
(02:49:00) - Fully automated firms
(02:54:43) - Will central planning work after AGI?
(02:58:20) - Career advice

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- AMA ft. Sholto & Trenton: New Book, Career Advice Given AGI, How I'd Start From Scratch
I recorded an AMA! I had a blast chatting with my friends Trenton Bricken and Sholto Douglas. We discussed my new book, career advice given AGI, how I pick guests, how I research for the show, and some other nonsense. My book, “The Scaling Era: An Oral History of AI, 2019-2025” is available in digital format now. Preorders for the print version are also open!

Watch on YouTube; listen on Apple Podcasts or Spotify.

Timestamps

(0:00:00) - Book launch announcement
(0:04:57) - AI models not making connections across fields
(0:10:52) - Career advice given AGI
(0:15:20) - Guest selection criteria
(0:17:19) - Choosing to pursue the podcast long-term
(0:25:12) - Reading habits
(0:31:10) - Beard deepdive
(0:33:02) - Who is best suited for running an AI lab?
(0:35:16) - Preparing for fast AGI timelines
(0:40:50) - Growing the podcast

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- Joseph Henrich – Why Humans Survived and Smarter Species Didn't
Humans have not succeeded because of our raw intelligence. Marooned European explorers regularly starved to death in areas where foragers had thrived for thousands of years. I’ve always found this kind of cultural evolution deeply mysterious. How do you discover the 10 steps for processing cassava so it won’t give you cyanide poisoning simply by trial and error? Has the human brain declined in size over the last 10,000 years because we outsourced cultural evolution to a larger collective brain? The most interesting part of the podcast is Henrich’s explanation of how the Catholic Church unintentionally instigated the Industrial Revolution through the dismantling of intensive kinship systems in medieval Europe.

Watch on YouTube; listen on Apple Podcasts or Spotify.

----------

Sponsors

Scale partners with major AI labs like Meta, Google Deepmind, and OpenAI. Through Scale’s Data Foundry, labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn about how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh.

To sponsor a future episode, visit dwarkesh.com/p/advertise.

----------

Joseph’s books

The WEIRDest People in the World
The Secret of Our Success

----------

Timestamps

(0:00:00) - Humans didn’t succeed because of raw IQ
(0:09:27) - How cultural evolution works
(0:20:48) - Why is human brain size declining?
(0:32:00) - Will AGI have superhuman cultural learning?
(0:42:34) - Why Industrial Revolution happened in Europe
(0:55:30) - Why China, Rome, India got left behind
(1:21:09) - Loss of cultural variance in modern world
(1:31:20) - Is individual genius real?
(1:43:49) - IQ and collective brains

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- Notes on China
I’m so excited about how this visualization of Notes on China turned out. Petr, thank you for such beautiful watercolor artwork. More to come!

Watch on YouTube.

----------

Timestamps

(0:00:00) - Intro
(0:00:32) - Scale
(0:05:50) - Vibes
(0:11:14) - Youngsters
(0:14:27) - Tech & AI
(0:15:47) - Hearts & Minds
(0:17:07) - On Travel

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- Satya Nadella – Microsoft’s AGI Plan & Quantum Breakthrough
Satya Nadella on:

* Why he doesn’t believe in AGI but does believe in 10% economic growth
* Microsoft’s new topological qubit breakthrough and gaming world models
* Whether Office commoditizes LLMs or the other way around

Watch on YouTube; listen on Apple Podcasts or Spotify.

----------

Sponsors

Scale partners with major AI labs like Meta, Google Deepmind, and OpenAI. Through Scale’s Data Foundry, labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn about how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh.

Linear's project management tools have become the default choice for product teams at companies like Ramp, CashApp, OpenAI, and Scale. These teams use Linear so they can stay close to their products and move fast. If you’re curious why so many companies are making the switch, visit linear.app/dwarkesh.

To sponsor a future episode, visit dwarkeshpatel.com/p/advertise.

----------

Timestamps

(0:00:00) - Intro
(0:05:04) - AI won't be winner-take-all
(0:15:18) - World economy growing by 10%
(0:21:39) - Decreasing price of intelligence
(0:30:19) - Quantum breakthrough
(0:42:51) - How Muse will change gaming
(0:49:51) - Legal barriers to AI
(0:55:46) - Getting AGI safety right
(1:04:59) - 34 years at Microsoft
(1:10:46) - Does Satya Nadella believe in AGI?

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- Jeff Dean & Noam Shazeer – 25 years at Google: from PageRank to AGI
This week I welcome on the show two of the most important technologists ever, in any field. Jeff Dean is Google's Chief Scientist and, through 25 years at the company, has worked on some of the most transformative systems in modern computing: MapReduce, BigTable, TensorFlow, AlphaChip, and Gemini. Noam Shazeer invented or co-invented all the main architectures and techniques used in modern LLMs: from the Transformer itself to Mixture of Experts to Mesh TensorFlow to Gemini, and many other things. We talk about their 25 years at Google, going from PageRank to MapReduce to the Transformer to MoEs to AlphaChip – and maybe soon to ASI. My favorite part was Jeff's vision for Pathways, Google’s grand plan for a mutually reinforcing loop of hardware and algorithmic design, and for going past autoregression. That culminates in us imagining *all* of Google-the-company going through one huge MoE model. And Noam just bites every bullet: 100x world GDP soon; let’s get a million automated researchers running in the Google datacenter; living to see the year 3000.

Watch on YouTube; listen on Apple Podcasts or Spotify.

Sponsors

Scale partners with major AI labs like Meta, Google Deepmind, and OpenAI. Through Scale’s Data Foundry, labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn about how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh.

Curious how Jane Street teaches their new traders? They use Figgie, a rapid-fire card game that simulates the most exciting parts of markets and trading. It’s become so popular that Jane Street hosts an inter-office Figgie championship every year. Download from the app store or play on your desktop at figgie.com.

Meter wants to radically improve the digital world we take for granted. They’re developing a foundation model that automates network management end-to-end. To do this, they just announced a long-term partnership with Microsoft for tens of thousands of GPUs, and they’re recruiting a world-class AI research team. To learn more, go to meter.com/dwarkesh.

To sponsor a future episode, visit dwarkeshpatel.com/p/advertise.

Timestamps

00:00:00 - Intro
00:02:44 - Joining Google in 1999
00:05:36 - Future of Moore's Law
00:10:21 - Future TPUs
00:13:13 - Jeff’s undergrad thesis: parallel backprop
00:15:10 - LLMs in 2007
00:23:07 - “Holy s**t” moments
00:29:46 - AI fulfills Google’s original mission
00:34:19 - Doing Search in-context
00:38:32 - The internal coding model
00:39:49 - What will 2027 models do?
00:46:00 - A new architecture every day?
00:49:21 - Automated chip design and intelligence explosion
00:57:31 - Future of inference scaling
01:03:56 - Already doing multi-datacenter runs
01:22:33 - Debugging at scale
01:26:05 - Fast takeoff and superalignment
01:34:40 - A million evil Jeff Deans
01:38:16 - Fun times at Google
01:41:50 - World compute demand in 2030
01:48:21 - Getting back to modularity
01:59:13 - Keeping a giga-MoE in-memory
02:04:09 - All of Google in one model
02:12:43 - What’s missing from distillation
02:18:03 - Open research, pros and cons
02:24:54 - Going the distance

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- Sarah Paine Episode 2: Why Japan Lost (Lecture & Interview)
This is the second episode in a trilogy of lectures by Professor Sarah Paine of the Naval War College. Here, Prof. Paine dissects the ideas and economics behind Japanese imperialism before and during WWII. We get into the oil shortage that caused the war; the unique culture of honor and death; the surprisingly chaotic chain of command. This is followed by a Q&A with me. Huge thanks to Substack for hosting this event!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Sponsor

Today’s episode is brought to you by Scale AI. Scale partners with the U.S. government to fuel America’s AI advantage through their data foundry. Scale recently introduced Defense Llama, Scale's latest solution available for military personnel. With Defense Llama, military personnel can harness the power of AI to plan military or intelligence operations and understand adversary vulnerabilities. If you’re interested in learning more on how Scale powers frontier AI capabilities, go to scale.com/dwarkesh.

Buy Sarah's Books!

I highly, highly recommend both "The Wars for Asia, 1911–1949" and "The Japanese Empire: Grand Strategy from the Meiji Restoration to the Pacific War".

Timestamps

(0:00:00) - Lecture begins
(0:06:58) - The code of the samurai
(0:10:45) - Buddhism, Shinto, Confucianism
(0:16:52) - Bushido as bad strategy
(0:23:34) - Military theorists
(0:33:42) - Strategic sins of omission
(0:38:10) - Crippled logistics
(0:40:58) - The Kwantung Army
(0:43:31) - Inter-service communication
(0:51:15) - Shattering Japanese morale
(0:57:35) - Q&A begins
(01:05:02) - Unusual brutality of WWII
(01:11:30) - Embargo caused the war
(01:16:48) - The liberation of China
(01:22:02) - Could US have prevented war?
(01:25:30) - Counterfactuals in history
(01:27:46) - Japanese optimism
(01:30:46) - Tech change and social change
(01:38:22) - Hamming questions
(01:44:31) - Do sanctions work?
(01:50:07) - Backloaded mass death
(01:54:09) - Demilitarizing Japan
(01:57:30) - Post-war alliances
(02:03:46) - Inter-service rivalry

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- Tyler Cowen - the #1 bottleneck to AI progress is humans
I interviewed Tyler Cowen at the Progress Conference 2024. As always, I had a blast. This is my fourth interview with him – and yet I’m always hearing new stuff. We talked about why he thinks AI won't drive explosive economic growth, the real bottlenecks on world progress, him now writing for AIs instead of humans, and the difficult relationship between being cultured and fostering growth – among many other things in the full episode. Thanks to the Roots of Progress Institute (with special thanks to Jason Crawford and Heike Larson) for such a wonderful conference, and to FreeThink for the videography.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Sponsors

I’m grateful to Tyler for volunteering to say a few words about Jane Street. It's the first time that a guest has participated in the sponsorship. I hope you can see why Tyler and I think so highly of Jane Street. To learn more about their open roles, go to janestreet.com/dwarkesh.

Timestamps

(00:00:00) Economic Growth and AI
(00:14:57) Founder Mode and increasing variance
(00:29:31) Effective Altruism and Progress Studies
(00:33:05) What AI changes for Tyler
(00:44:57) The slow diffusion of innovation
(00:49:53) Stalin's library
(00:52:19) DC vs SF vs EU

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- Adam Brown – How Future Civilizations Could Change The Laws of Physics
Adam Brown is a founder and lead of BlueShift, which is cracking maths and reasoning at Google DeepMind, and a theoretical physicist at Stanford. We discuss: destroying the light cone with vacuum decay, the holographic principle, mining black holes, & what it would take to train LLMs that can make Einstein-level conceptual breakthroughs. Stupefying, entertaining, & terrifying. Enjoy!

Watch on YouTube, read the transcript, listen on Apple Podcasts, Spotify, or your favorite platform.

Sponsors

- DeepMind, Meta, Anthropic, and OpenAI partner with Scale for high-quality data to fuel post-training. Publicly available data is running out - to keep developing smarter and smarter models, labs will need to rely on Scale’s data foundry, which combines subject matter experts with AI models to generate fresh data and break through the data wall. Learn more at scale.ai/dwarkesh.

- Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for ML researchers, FPGA programmers, and CUDA programmers. Summer internships are open for just a few more weeks. If you want to stand out, take a crack at their new Kaggle competition. To learn more, go to janestreet.com/dwarkesh.

- This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

Timestamps

(00:00:00) - Changing the laws of physics
(00:26:05) - Why is our universe the way it is
(00:37:30) - Making Einstein level AGI
(01:00:31) - Physics stagnation and particle colliders
(01:11:10) - Hitchhiking
(01:29:00) - Nagasaki
(01:36:19) - Adam’s career
(01:43:25) - Mining black holes
(01:59:42) - The holographic principle
(02:23:25) - Philosophy of infinities
(02:31:42) - Engineering constraints for future civilizations

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
- Gwern Branwen - How an Anonymous Researcher Predicted AI's Trajectory
Gwern is a pseudonymous researcher and writer. He was one of the first people to see LLM scaling coming. If you've read his blog, you know he's one of the most interesting polymathic thinkers alive. In order to protect Gwern's anonymity, I proposed interviewing him in person and having my friend Chris Painter voice over his words afterward. This amused him enough that he agreed. After the episode, I convinced Gwern to create a donation page where people can help sustain what he's up to. Please go here to contribute.

Read the full transcript here.

Sponsors:

* Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for ML researchers, FPGA programmers, and CUDA programmers. Summer internships are open - if you want to stand out, take a crack at their new Kaggle competition. To learn more, go to janestreet.com/dwarkesh.

* Turing provides complete post-training services for leading AI labs like OpenAI, Anthropic, Meta, and Gemini. They specialize in model evaluation, SFT, RLHF, and DPO to enhance models’ reasoning, coding, and multimodal capabilities. Learn more at turing.com/dwarkesh.

* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

If you’re interested in advertising on the podcast, check out this page.

Timestamps

00:00:00 - Anonymity
00:01:09 - Automating Steve Jobs
00:04:38 - Isaac Newton's theory of progress
00:06:36 - Grand theory of intelligence
00:10:39 - Seeing scaling early
00:21:04 - AGI Timelines
00:22:54 - What to do in remaining 3 years until AGI
00:26:29 - Influencing the shoggoth with writing
00:30:50 - Human vs artificial intelligence
00:33:52 - Rabbit holes
00:38:48 - Hearing impairment
00:43:00 - Wikipedia editing
00:47:43 - Gwern.net
00:50:20 - Counterfactual careers
00:54:30 - Borges & literature
01:01:32 - Gwern's intelligence and process
01:11:03 - A day in the life of Gwern
01:19:16 - Gwern's finances
01:25:05 - The diversity of AI minds
01:27:24 - GLP drugs and obesity
01:31:08 - Drug experimentation
01:33:40 - Parasocial relationships
01:35:23 - Open rabbit holes

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe