Artificial super-intelligence probably leads to extinction

The Problem — LessWrong

- There isn’t a ceiling at human-level capabilities.
- ASI is very likely to exhibit goal-oriented behavior.
- ASI is very likely to pursue the wrong goals.
- It would be lethally dangerous to build ASIs that have the wrong goals.
- Catastrophe can be averted via a sufficiently aggressive policy response.