Artificial super-intelligence probably leads to extinction

The Problem — LessWrong (www.lesswrong.com)

There isn’t a ceiling at human-level capabilities. ASI is very likely to exhibit goal-oriented behavior. ASI is very likely to pursue the wrong goals. It would be lethally dangerous to build ASIs that have the wrong goals. Catastrophe can be averted via a sufficiently aggressive policy response.
The term was coined by a tech company CEO and isn't even that relevant. It's like a layperson coining diseases that don't exist in any medical journal or even in the industry.