I don't think we will be able to achieve AGI through anything other than an absolute accident. We don't understand our own brains well enough to create one from scratch.
It won't happen while I'm alive. Current LLMs are basically parrots with a lot of experience, and will never get close to AGI. We're no closer today than we were when ELIZA was fooling people into thinking it could hold a conversation back in the 60s.
I'm more worried about jobs getting nuked no matter what AGI turns out to be. It can be vapourware and the capitalist cult will still sacrifice labour on that altar.
I don't see any reason to believe anything currently being done is a direct path to AGI. Sam Altman and Dario Amodei are straight up liars and the fact so many people lap up their shameless hype marketing is just sad.
I think it is inevitable. From a lay perspective, the main flaw I see in the current methodology is trying to make one neural network that does everything. Our own brains are composed of multiple neural networks with different jobs interacting with each other, so I assume that AGI will require this approach.
For example: we are currently struggling with LLM hallucinations. What could reduce this? A separate fact-checking neural network.
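Something like this toy sketch, just to illustrate the idea. The two "models" here are hypothetical stand-ins, not real APIs; a real system would plug actual models and retrieval into these functions:

```python
# Toy sketch of the "separate fact-checker" idea: one model drafts an answer,
# a second, independent model scores each claim before anything is shown.
# Both functions below are made-up placeholders, purely for illustration.

def generate_answer(prompt: str) -> list[str]:
    """Stand-in for the generator model: returns a draft answer as a list of claims."""
    return ["The Eiffel Tower is in Paris.", "It was completed in 1889."]

def verify_claim(claim: str) -> float:
    """Stand-in for the fact-checking model: returns a confidence in [0, 1].
    A real verifier would check each claim against retrieved sources."""
    return 0.9

def answer_with_verification(prompt: str, threshold: float = 0.7) -> str:
    claims = generate_answer(prompt)
    kept = [c for c in claims if verify_claim(c) >= threshold]
    if not kept:
        return "I'm not confident enough to answer that."
    return " ".join(kept)

print(answer_with_verification("Tell me about the Eiffel Tower."))
```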
Please keep in mind that my opinion is almost worthless, but you asked.
The computer doesn't even understand things or ask questions unprompted. I don't think people understand that it doesn't understand, lol. Intelligence seems to be non-computational!
It may or may not happen. What I do know is that it will never spontaneously arise from an LLM, no matter how much data they dump into it or how many tons of potable water they carelessly waste.
As others have said, AGI won't come from LLMs. "AGI" is just their current buzzword to hype stocks.
If they declare they've "reached" AGI, you'll find when you read the fine print that it's by some arbitrary measure.
In a single person's lifetime, we went from not flying to landing on the moon. We absolutely can produce AGI within most of our lifetimes. I predict that within 15-20 years we will have a functioning AGI. It may also need to coincide with actually figuring out quantum computing, just for the sheer computational needs.
This all hinges on whether investment in AI continues at its current pace, though we're already seeing cracks there.
I agree with most of the other comments here. Is actual AGI something to be worried about? I'm not sure. I don't know if it's even possible on our current technology path.
Based on what I know, it's almost certainly not going to come from the current crop of LLMs and related research. Despite many claims, they don't actually think or reason. They're just really complicated statistical models. And while they can do some interesting and impressive things, I don't think there is any path of progression that will make them jump beyond what they currently are to actual intelligence.
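To make the "statistical model" point concrete, here's a toy next-word predictor built from nothing but counts over a tiny corpus. Real LLMs are neural networks trained on vastly more data, but the spirit of the objective is similar: predict the next token from statistics over text.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a corpus,
# then predict the most frequent follower. Purely illustrative; real LLMs
# learn these statistics with neural networks rather than count tables.
corpus = "the cat sat on the mat and the cat slept".split()

follow_counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    counts = follow_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # -> "cat" (seen twice after "the")
print(predict_next("cat"))  # -> "sat" (tie with "slept", broken by first occurrence)
```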
Could we develop something in my lifetime (the next 50-ish years or so for me)? Maybe. I think the chances are slim without a major shift, and I think it would take a public effort akin to the Manhattan Project or the creation of the Internet to achieve, but it's possible. In the next 5 years? Definitely not, some random, massive, lucky break notwithstanding.
As others have said here, even without AGI, current capitalist practices are already using the limited capabilities of LLMs to upend the labor market and put lots of people out of a job. Even when the LLMs can't really replace the people effectively. But that's not a problem with AI, it's a problem with capitalism that happens with any kind of advancement. They'll take literally any excuse to extract extra value.
In summary, I wouldn't worry about AGI. There are so many other things that are problems now, and are already existential threats, that worrying about this big old "maybe in 50 years" isn't really worth your time and energy.
Not happening, IMO. Though it's important to recognize that the general public and business sentiment already act as if LLMs are some kind of legitimate intelligence. So I think a pretty ugly acceptance of and hard dependence on these technologies, altering our public infrastructure and destroying the planet along the way, will lead to some hellscape future for sure... all the stuff you mentioned and more, all without ever reaching AGI as it's currently understood.
Who knows if AGI is even possible. Maybe it wouldn't cause the future you described in your post, but instead help us avoid this nonsense road we're on now.