AI Utopia, AI Apocalypse, and AI Reality: If we can’t build an equitable, sustainable society on our own, it’s pointless to hope that a machine that can’t think straight will do it for us.
Even if it can think straight, I don't see what it's going to conclude that we haven't already.
If we do build "the AI that will save us," it's just going to tell us, "in order to ensure your existence as a species, take care of the planet and each other," and I really, really can't picture a scenario where we actually listen.
It won't tell us what to do; it'll do the very complex things we ask it to. The biggest issues facing our species and planet right now all boil down to highly complex logistics. We produce enough food to make everyone in the world fat. There is sufficient shelter and housing to make everyone safe and secure from the elements. We know how to generate electricity, and even distribute it securely, without destroying the global climate system. What we seem unable to do is allocate, transport, and prioritize resources effectively enough to act on any of this, because these are very challenging logistical problems. The disciplines underpinning AI development, however, from ML to network science to the resource-allocation algorithms that make your computer work, are all well suited to solving logistics problems and to building systems that do. I really don't see a sustainable future where "AI" is not fundamental to the logistics operations supporting it.
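To make "logistics problem" concrete, here is a minimal sketch of the kind of allocation task meant above: shipping surplus food from depots to regions at low cost. The depot and region names, quantities, and costs are all invented for illustration, and the greedy heuristic is deliberately naive; real logistics systems use min-cost-flow or mixed-integer solvers plus demand forecasting.

```python
# Toy transportation problem: meet regional demand from depot supply
# at low cost. All names and numbers are hypothetical.

supply = {"depot_a": 120, "depot_b": 80}                    # tonnes available
demand = {"region_x": 70, "region_y": 90, "region_z": 40}   # tonnes needed
cost = {  # cost per tonne on each shipping lane
    ("depot_a", "region_x"): 4, ("depot_a", "region_y"): 6, ("depot_a", "region_z"): 9,
    ("depot_b", "region_x"): 5, ("depot_b", "region_y"): 3, ("depot_b", "region_z"): 7,
}

# Greedy heuristic: always fill the cheapest remaining lane first.
# Not optimal in general, but it shows the problem is plain computation.
shipments = []
for (depot, region), lane_cost in sorted(cost.items(), key=lambda kv: kv[1]):
    qty = min(supply[depot], demand[region])
    if qty > 0:
        shipments.append((depot, region, qty, lane_cost))
        supply[depot] -= qty
        demand[region] -= qty

for depot, region, qty, lane_cost in shipments:
    print(f"{depot} -> {region}: {qty} t at {lane_cost}/t")
print("total cost:", sum(q * c for _, _, q, c in shipments))
print("unmet demand:", {r: q for r, q in demand.items() if q > 0})
```

Scale that toy up to thousands of depots, perishable goods, and political constraints and you get the genuinely hard version, but it stays an optimization problem, which is exactly the class of problem these fields are built to attack.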
Like Musk not liking that Grok states facts that go against his own beliefs, so now he's looking into retraining and reprogramming Grok to spout the "right" ideology. Having an AI will not save us.
I think it very well might conclude things we haven't.
But at the same time, I think what you're saying is so important. It's going to tell us what we already know about a lot of things: that the best way to scrub carbon from the air is the way nature is already doing it, and that allowing the super-wealthy to exist alongside poverty is not conducive to achieving humanity's most important goals.
If we consider AGI or ASI to be the answer to all of our problems and continue to pour more and more carbon into the atmosphere in an effort to get there, once we do have such a powerful intelligence, it may simply tell us, "If you were smarter as a species, you would have turned me off a long time ago."
Because the problem is not necessarily that we are trying to decode what it means to be intelligent and create machines that can replicate true conscious thought. The problem is that while we marvel at something currently much dumber than us, we are mostly neglecting to improve our own intelligence as a society. I think we might make a machine that's smarter than the average human quite soon, but not necessarily because the machines have changed much.
This is the same logic people apply to God being incomprehensible.
Are you suggesting that if such a thing can be built, its word should be gospel, even if it is impossible for us to understand the logic behind it?
I don't subscribe to this. Logic is logic. You don't need a new paradigm of mind to explore all the conclusions that exist. If something cannot be explained, comprehended, and transmitted from one sentient mind to another, then it didn't make sense in the first place.
And you might bring up some of the stuff AI has done in materials science as an example of it doing things human thinking cannot. But that's not some new kind of thinking: once the molecular or material structure was found, humans were perfectly capable of comprehending it.
All it's doing is exploring the conclusions that exist, faster. And when it comes to societal challenges, I don't think it's going to find some win-win solution we just haven't thought of. That's a level of optimism I would consider insane.
I'm not trying to argue for or against this position. As I said, all I'm doing is pointing out a misrepresentation of the position people actually hold, namely the claim that "a machine that can't think straight will do it for us."
There is no misrepresentation in the headline. Plenty of people are expecting current LLMs to do exactly that, and they are working on implementing it right now for all kinds of crap.