#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
Yahoo Finance: yahoofinance.com
MasterClass: masterclass.com to get 15% off
NetSuite: netsuite.com to get a free product tour
LMNT: drinkLMNT.com to get a free sample pack
Eight Sleep: eightsleep.com to get $350 off

Transcript: lexfridman.com

EPISODE LINKS:
Roman’s X: twitter.com
Roman’s Website: cecs.louisville.edu
Roman’s AI book: amzn.to

PODCAST INFO:
Podcast website: lexfridman.com
Apple Podcasts: apple.co
Spotify: spoti.fi
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: youtube.com
YouTube Clips: youtube.com

SUPPORT & CONNECT:
– Check out the sponsors above; it’s the best way to support this podcast
– Support on Patreon: www.patreon.com
– Twitter: twitter.com
– Instagram: www.instagram.com
– LinkedIn: www.linkedin.com
– Facebook: www.facebook.com
– Medium: medium.com

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click a timestamp to jump to that point in the conversation.
(00:00) – Introduction
(09:12) – Existential risk of AGI
(15:25) – Ikigai risk
(23:37) – Suffering risk
(27:12) – Timeline to AGI
(31:44) – AGI Turing test
(37:06) – Yann LeCun and open source AI
(49:58) – AI control
(52:26) – Social engineering
(54:59) – Fearmongering
(1:04:49) – AI deception
(1:11:23) – Verification
(1:18:22) – Self-improving AI
(1:30:34) – Pausing AI development
(1:36:51) – AI safety
(1:46:35) – Current AI
(1:51:58) – Simulation
(1:59:16) – Aliens
(2:00:50) – Human mind
(2:07:10) – Neuralink
(2:16:15) – Hope for the future
(2:20:11) – Meaning of life