AI may appear human, but that appearance is an illusion we must confront.
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with that data.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
It's not even smart enough to be stupid: measuring it on that scale would imply it's capable of thought and genuine self-awareness.
It is not. That is a misrepresentation pushed by people trying to make money off the hype around what has been labeled "AI".
Actual AI does not exist and at the rate we're going, it seems unlikely that we will survive as a species long enough to create true general artificial intelligence.
For those of you who want a simplified ELI5 on how AI works:
Pretend I'm going to write a sentence. Statistically, most sentences start with the word "I". What word typically follows "I"? Looking at Lemmy, I'll pick "use" since that gives me the most options. Now what word typically follows the word "use" but also follows the phrase "I use"? With some math, I see "Arch" is statistically popular so I'll add that to my sentence.
Scale this out for every combination of words and sentences and you suddenly have AI.
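If you want to see that idea in runnable form, here's a toy Python sketch of it. To be clear, this is just my own illustration, not how real LLMs are built (they use neural networks over tokens, not raw word counts), but the "pick a statistically likely next word, repeat" loop is the same shape:

```python
# Toy "pick the most likely next word" generator, sketching the ELI5 above.
# Real LLMs use neural networks over tokens, not raw word counts, but the
# "predict the next word from what came before" loop is the same idea.
from collections import Counter, defaultdict

corpus = [
    "I use Arch by the way",
    "I use Debian on my server",
    "I use Arch on my laptop",
]

# Count which word tends to follow which (a bigram table).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def generate(start="I", max_words=6):
    """Greedily pick the most frequent next word until we run out."""
    words = [start]
    while len(words) < max_words and following[words[-1]]:
        next_word, _count = following[words[-1]].most_common(1)[0]
        words.append(next_word)
    return " ".join(words)

print(generate())  # e.g. "I use Arch by the way"
```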
I'm glad to see they used the word anthropomorphize in the article. I think there is a certain amount of animism as well, although animism is generally a spiritual concept, so I call this neo-animism, maybe digitanimism, I dunno, I'm just making this up.
You could see it as a modern form of animism, or pantheism/panentheism. I actually subscribe to the latter as it seems clear that matter is an emergent property of consciousness (not the other way around), but I would ascribe to AI as much consciousness as the silicate minerals it's derived from. Sentience can only truly be self-identified, so we do have to go off the honor system to some degree, but if we look around at everything else that self-identifies as conscious, AI doesn't even remotely resemble it.
"AI" (read: LLM) isn't even in the same class as stupid people. It doesn't think at all, to suggest otherwise is a farce. It's incapable of actual thought, think of it more as autocorrect applied differently.
That is a good take. Sadly it suffers from some of the same shortcomings as OP's article, mainly shitting on statistics, since it's not just LLMs that run on maths; humans do too, and so does the entire universe... But looking at it this way explains a lot of things: why it blabbers and repeats a lot, why lots of people tell me how good it is at programming while I think it sucks... I'll have to bookmark this for the next person who doesn't believe me.
I think that AI created by breeding thousands of programs over hundreds of thousands of generations, each with a little random corruption to allow for evolution, is real AI. Each generation has criteria that must be met, and the programs that don't pass are selected out of existence. It basically teaches itself to walk in a physics simulation. Eventually the program can be transferred into a real robot that can walk around. Nobody knows exactly how it works, and it would be hard to teach it anything new, especially if the neuron count is small, but it's real AI. It's not really very intelligent, but then technically plants have intelligence too.
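For anyone curious, that "breed, mutate, select" loop is simple enough to sketch in a few lines of Python. This is only my own toy illustration: real evolved-walking demos score controllers inside a physics engine, whereas this made-up fitness function just counts matching bits, but the mechanism is the same:

```python
# Minimal sketch of the "breed, mutate, select" loop described above.
# Real evolved-walking demos score controllers inside a physics simulation;
# here a made-up fitness function just counts matching bits so the mechanism
# fits in a few lines.
import random

TARGET = [1] * 20            # stand-in for "walks well"
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # How close this program's "genes" are to the goal.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # A little random corruption to allow for evolution.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(200):
    # Selection: keep the top half, the rest are selected out of existence.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Breeding: refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    if fitness(population[0]) == len(TARGET):
        print(f"solved at generation {generation}")
        break
```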
Just keep in mind that genetic algorithms, or more broadly evolutionary algorithms, are only one form of computational intelligence. There are a bunch of other methods, and this is not how current large language models or ChatGPT work.
That train has left the station. It's literally in the name...
And this article contains a lot of debunked arguments like the stochastic parrot and so on. Also, they can't just look at today's models and their crystal ball and then conclude intelligence is ruled out forever. That's not how science or truth works. And yes, it has no consciousness; no, it does in fact have something like knowledge; and again no, it does have goals, that's the fundamental principle of machine learning...
Edit: So, I strongly agree with the headline. The article itself isn't good at all; it's riddled with misinformation.
Ahem, the way it works is: a model gets trained. That works by giving it a goal(!) and then having the training process modify the weights to try to match that goal. By definition, every AI or machine learning model needs a goal. With LLMs it's producing legible text, text resembling the training dataset, by looking at the previous words. That's the goal of the LLM. The humans designing it have a goal as well: making it do what they want. Getting those two goals to match is called "the alignment problem".
Simpler models have goals as well: whatever is needed to regulate your thermostat, or to score high in Mario Kart. And goals aren't tied to consciousness. Companies, for example, have goals (profit), yet they're not alive. A simple control loop has a goal, and it's a super simple piece of tech.
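If it helps, here's a toy Python sketch of what "having a goal" means in practice: the goal is written down as a loss function, and training just nudges the weights to shrink it. The single weight and the numbers are made up for illustration, not taken from any particular model:

```python
# Toy sketch of "giving a model a goal": the goal is expressed as a loss
# function, and training nudges the weights to reduce that loss.
# One weight and made-up data, purely for illustration.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # inputs x and targets y (y = 2x)
w = 0.0                                        # the one "weight" of our model
learning_rate = 0.05

for step in range(100):
    # Gradient of the mean squared error, i.e. how far we are from the goal.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad                  # modify the weight toward the goal

print(round(w, 3))  # converges to ~2.0, the weight that satisfies the goal
```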
Knowledge and reasoning are two different things. Knowledge is being able to store and retrieve information. It can do that. I'll ask it what an Alpaca is, and it'll give me an essay / the Wikipedia article, not anything else. And it can even apply knowledge. I can tell it to give me an animal like an Alpaca for my sci-fi novel set in the outer rim, and it'll make up an animal with similar attributes, simultaneously knowing how sci-fi works, the tropes in it, and how to apply them to the concept of an Alpaca. It knows how dogs and cats relate to each other, what attributes they have and what their category is. I can ask it about the paws or the tail, and it "knows" how that's connected and will deal with the detail question. I can feed it two pieces of example computer code and tell it to combine both projects, despite no one ever having done it that way. And it'll even know how to use some of the background libraries.
It has all of that: knowledge, the ability to apply it, to transfer it to new problems... You just can't anthropomorphize it. It doesn't have intelligence or knowledge the same way a human does. It does it differently. But that's why it's called Artificial something.
Btw that's also why AI in robots works. They form a model of their surroundings, and then they're able to manoeuvre there, or move their arms not just randomly but actually to pick something up. They've "understood", i.e. formed a model. That's also the main task of our brain, and the main idea of AI.
But yeah, they have a very different way of handling knowledge than humans. The internal processes to apply it are very different. And the goals are entirely different. So if you mean it in the sense of human goals or human reasoning, then no, it definitely doesn't have that.
AIs are really just Axe Body spray, but for tech-illiterate executives.
When people say AI these days, they really mean LLMs. LLMs are not deterministic; everything they do is down to chance. It may be next to impossible to get conscious intelligence out of them.
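To illustrate what that chance looks like in practice (my own toy example, with made-up probabilities): the model outputs a probability distribution over possible next tokens and one of them gets sampled, which is where the run-to-run randomness comes from.

```python
# Toy illustration of why LLM output varies between runs: the model produces
# a probability distribution over possible next tokens and one is *sampled*.
# The distribution below is made up purely for illustration.
import random

next_token_probs = {"cat": 0.5, "dog": 0.3, "ferret": 0.2}

def sample(probs):
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

for _ in range(5):
    print("The vet examined the", sample(next_token_probs))  # varies each run
```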
People love to scream and cry about how we are “anthropomorphizing animals” when we say they do have actual emotions or feelings (and in all likelihood they do), yet the same people are fully on board with an AI just being a digital superhuman brain.