"The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do." In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI.
Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion.... And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.
Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other sceptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take. "A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."
"What we are building now are things that take in words and predict the next most likely word..."
This is a gross oversimplification and doesn't reflect current understanding of how the most advanced LLMs work. Anthropic has recently published papers showing that Claude "sometimes thinks in a conceptual space" and will "plan what it says many words ahead".
This doesn't seem quite so different from human intelligence as the summary suggests.
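For what it's worth, the "predict the next most likely word" loop the article describes can be sketched with a toy bigram model. This is a deliberately oversimplified stand-in (a handful of made-up words, raw counts instead of a neural network), but the training objective is the same next-word prediction being talked about:

```python
from collections import defaultdict, Counter

# Toy illustration of "take in words and predict the next most likely word":
# count which word follows which in a tiny corpus, then always emit the
# most frequent successor. Real LLMs learn these statistics with neural
# networks over tokens rather than literal word counts.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most likely word to follow `word`, or None if unseen."""
    followers = counts[word]
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # "the" is followed by "cat" twice and "mat" once, so this prints "cat"
```

Whether the frontier models do something qualitatively beyond a (vastly scaled-up) version of this loop is exactly what the Anthropic interpretability work is probing.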
The funniest shit about AI is that even if the current line of research were promising (it isn't, beyond specific domain use cases), the ecological devastation being caused by AI datacenters will destroy this planet as a habitable place WELL before we develop an artificial general intelligence.
This is all so pointlessly dumb, on so many levels
OpenAI has contractually defined the development of AGI using a metric of ChatGPT sales numbers, so get ready for them to claim they've developed AGI even though they never will.
Yup, and that’s just one of many things that make me confident in my impulse to never trust OpenAI or any company that is just so obviously a money-grabbing grift.
[OpenAI and Microsoft] came to agree in 2023 that AGI will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits. Source.
I am not sure I believe in AGI. Like, it will never exist, because it can't. I could be wrong. Hell, I'm often wrong, I just don't think a machine will ever be anything but a machine.
How would you define AGI then? If the definition is just "intelligence," then I would say we are already there. I think the concept is infinitely complex and our human understanding may never totally get there. Again, I could be wrong. Technology changes. People said man could never fly, too.