#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization
Lex Fridman Podcast

203 minutes

Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI. Please support this podcast by checking out our sponsors:
Linode: linode.com to get $100 free credit
House of Macadamias: houseofmacadamias.com and use code LEX to get 20% off your first order
InsideTracker: insidetracker.com to get 20% off

EPISODE LINKS:
Eliezer’s Twitter: twitter.com
LessWrong Blog: lesswrong.com
Eliezer’s Blog page: www.lesswrong.com
Books and resources mentioned:
1. AGI Ruin (blog post): lesswrong.com
2. Adaptation and Natural Selection: amzn.to

PODCAST INFO:
Podcast website: lexfridman.com
Apple Podcasts: apple.co
Spotify: spoti.fi
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: youtube.com
YouTube Clips: youtube.com

SUPPORT & CONNECT:
– Check out the sponsors above, it’s the best way to support this podcast
– Support on Patreon: www.patreon.com
– Twitter: twitter.com
– Instagram: www.instagram.com
– LinkedIn: www.linkedin.com
– Facebook: www.facebook.com
– Medium: medium.com

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click a timestamp to jump to that point in the episode.
(00:00) – Introduction
(05:19) – GPT-4
(28:00) – Open sourcing GPT-4
(44:18) – Defining AGI
(52:14) – AGI alignment
(1:35:06) – How AGI may kill us
(2:27:27) – Superintelligence
(2:34:39) – Evolution
(2:41:09) – Consciousness
(2:51:41) – Aliens
(2:57:12) – AGI Timeline
(3:05:11) – Ego
(3:11:03) – Advice for young people
(3:16:21) – Mortality
(3:18:02) – Love
