AGI can't come from these LLMs because they are non-sensing, stationary, and fundamentally not thinking at all.
AGI might be coming down the pipe, but not from these LLM vendors. I hope a player like Numenta, or any other nonprofit, open-source initiative manages to create AGI so that it can be a positive force in the world, rather than a corporate upward wealth transfer like most tech.
It is like raising a baby.
Machine Learning is like teaching computers how to learn new things.
Large Language Models are like teaching computers how to speak.
On their own they won't do much; it's only a small step. But a necessary one to reach the fully formed being that is an AGI.
(these are analogies, I don't believe in fully sentient AI yet)
Intelligence and consciousness are not related in the way you seem to think.
We've always known that you can have consciousness without a high level of intelligence (think of children, people with certain types of brain damage), and now for the first time, LLMs show us that you can have intelligence without consciousness.
It's naive to think that as we continue to develop intelligent machines, one of them will suddenly become conscious once it reaches a particular level of intelligence. Did you suddenly become conscious once you hit the age of 14 or whatever and had finally developed a deep enough understanding of trigonometry or a solid enough grasp of the works of Mark Twain? No, of course not; you became conscious at a very early age, when even a basic computer program could outsmart you, and you developed intelligence quite independently.
because they are non-sensing, stationary, and fundamentally not thinking
I don't follow, why would a machine need to be able to move or have its own sensors in order to be AGI? And can you define what you mean by "thinking"?
The argument is best made by Jeff Hawkins in his Thousand Brains book. I'll try to be convincing and brief at the same time, but you will have to be satisfied with shooting the messenger if I fail in either respect. The basic thrust of Hawkins' argument is that you can only build a true AGI once you have a theoretical framework that explains the activity of the brain with reference to its higher cognitive functions, and that such a framework necessarily must stem from doing the hard work of sorting out how the neocortex actually goes about its business.
We know that the neocortex is the source of our higher cognitive functions, and that it is the main area of interest for the development of AGI. A major part of Hawkins' theory is that the neocortex is arranged into many small repeating units called cortical columns; it is chiefly the number of these columns that differs between creatures of different intelligence levels, and each column independently models and makes predictions about the world based on sensory data. He holds that these columns vote amongst each other in real time about what is being perceived, constantly piping up and shushing each other and updating their models as new data arrives, almost like a rowdy room full of parliamentarians trying to reach a consensus view, and that this ongoing internal hierarchy of models and perceptions is what makes up our intelligence, as it were.
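To make the voting idea concrete, here's a toy sketch (my own illustration, not Numenta's actual algorithm): pretend each "column" senses one partial feature patch of an object, casts a vote for whichever candidate object best matches its patch, and the majority wins. The objects and features are made up for the example.

```python
from collections import Counter

def column_vote(observations, candidates):
    """Each 'column' sees one partial observation (a set of features)
    and votes for the candidate object matching the most of them;
    the consensus is simply the majority vote."""
    votes = Counter()
    for obs in observations:
        best = max(candidates, key=lambda c: len(obs & candidates[c]))
        votes[best] += 1
    return votes.most_common(1)[0][0]

# hypothetical objects, each described by a set of tactile features
candidates = {
    "mug":    {"handle", "rim", "curved"},
    "pencil": {"point", "thin", "hexagonal"},
}
# ten columns: nine sense mug-like patches, one gets a misleading reading
observations = [{"handle"}, {"rim"}, {"curved", "rim"}] * 3 + [{"thin"}]
print(column_vote(observations, candidates))  # → mug
```

The point isn't the code itself but the shape of it: no single column has the full picture, and one column is flat-out wrong, yet the population still converges on a sensible answer.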
The reason I ventured to argue that sensorimotor integration is necessary for an AI to be an AGI is that I got that idea from him as well: in order to gather meaningful sensory data, you have to be able to move about your environment to make sense of your inputs. A single piece of sensory data makes no particular impression, and you can test this for yourself by having a friend place an unknown object against your skin without moving it, then trying to guess what it is from that one data point. Next, have them move the object and see how quickly you gather enough information to make a solid prediction; and if you were wrong, your brain will hastily rewire its models to account for that finding. An AGI would similarly fail to make any useful contributions unless it could move about its environment (including a virtual environment) in order to continually learn and make predictions. That is not something we can expect from any conventional LLM, at least as far as I've heard.
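The object-against-the-skin experiment can be sketched the same way (again, a made-up illustration with invented objects and features): a single static reading is consistent with many objects, but each additional reading gathered by moving along the object shrinks the hypothesis set.

```python
# Hypothetical objects, each a sequence of features you'd feel while
# moving a fingertip along it. Invented for illustration only.
objects = {
    "coin":   ["edge", "flat", "edge", "flat"],
    "key":    ["edge", "teeth", "edge", "flat"],
    "button": ["edge", "flat", "edge", "hole"],
}

def consistent(readings):
    """Objects whose leading features match everything felt so far."""
    return [name for name, feats in objects.items()
            if feats[:len(readings)] == readings]

print(consistent(["edge"]))                          # all three still possible
print(consistent(["edge", "flat"]))                  # coin or button
print(consistent(["edge", "flat", "edge", "flat"]))  # coin only
```

One static touch ("edge") rules out nothing; it takes movement, i.e. a sequence of sensed features, to converge on a single answer.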
I'd better stop there and see if you care to tolerate more of this sort of blather. I hope I've given you something to sink your teeth into, at any rate.