Game over. AGI is not imminent, and LLMs are not the royal road to getting there.
garymarcus.substack.com

An asteroid impact not being imminent doesn’t really make me feel any better when the asteroid is still hurtling toward us. My concern about AGI has never been about the timescale - it’s the fact that we know it’s coming, and almost no one seems to take the repercussions seriously.
Equally, AGI could turn out to be a baked-in impossibility, like exceeding the speed of light or time travel - any model that includes it has to lean on fictional solutions with no bearing on current reality.
General intelligence isn't a theoretical concept though. The human brain can do it quite efficiently.
LLMs are a dead end to AGI. They do not reason or understand in any way; they only mimic those abilities.
It's the same technology as the first chatbots 20 years ago; LLMs just have models approaching a trillion parameters instead of a few thousand.
I haven't said a word about LLMs.
At the risk of sounding like I've been living under a rock, how do we know it's coming, exactly?
We’ll keep incrementally improving our technology, and unless we destroy ourselves first - or some outside force does - we’ll get there eventually.
We already know that general intelligence is possible, because humans are generally intelligent. There’s no reason to assume that what our brains do couldn’t be replicated artificially.
At some point, unless something stops us, we’ll create an artificially intelligent system that’s as intelligent as we are. From that moment on, we’re no longer needed to improve it further - it will make a better version of itself, which will make an even better version, and so on. Eventually, we’ll find ourselves in the presence of something vastly more intelligent than us - and the idea of “outsmarting” it becomes completely incoherent. That’s an insanely dangerous place for humanity to end up in.
We're raising a tiger cub. It's still small and cute today, but it's only a matter of time until it gets big and strong.
Well, we often equate predictions about AGI with ASI and a singularity event, which has been predicted for decades based on several trends in computing over the years: advancing hardware, software, throughput, and of course neuroscience.
ASI is more a prediction about capabilities: even imitating intelligence convincingly enough could give rise to tangible, genuinely higher intelligence after a few iterations - first with our help, then improving itself on its own. Once those improvements are beyond human capability, we have our singularity.
Back to AGI itself: it seems achievable by mimicking the processing power of a human mind, which isn't currently possible, but we are steadily working toward it and have had some measure of success. We may decide that certain aspects of artificial intelligence have been reached before that point, but IMO it feels like we're only a few years away.
Intelligence is possible, as proven by its existence in the biological world.
So it makes sense that as technology evolves we'll become able to emulate the biological world in this too, just as we have in so many other things, from flight to artificial hearts.
However, there is no guarantee that mankind won't go extinct before that point is reached, nor is there any guarantee that our technological progression won't come to an end (though at the moment we're near a peak in terms of the speed of technological progress). So it is indeed true that we don't know it's coming: we as a species might not be around long enough to make it happen, or we might hit a ceiling in our technological development before our technology is capable of creating AGI.
Beyond the "maybe one day" view, I personally think that believing AGI is close is total pie-in-the-sky fantasy. The supposed path to it through LLMs turned out to be a dead end, decorated with a lot of bullshit to make it seem otherwise; what the underlying technology does really well - pattern recognition and reproduction - has turned out not to be enough by itself to add up to intelligence, and we don't actually have any specific technological direction in the pipeline (that I know of) that can crack that problem.
We don't know it's coming. What leads you to believe that? The countless times they promised they'd fixed the problems, which invariably turned out to be bullshit?
Keep reading if you're truly interested in why I think that.
Yes, and there is also the possibility that it could be upon us quite suddenly. It may take just one fundamental breakthrough to make the leap from what we have currently to AGI, and once that breakthrough is achieved, AGI could arrive quite quickly. It may not be a linear process of improvement where we only reach the summit after many years.
Greed blinds all