AI may appear human, but that appearance is an illusion we must confront.
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which word, or fragment of a word, will come next in a sequence – based on the data it’s been trained on.
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
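The next-token prediction the article describes can be illustrated with a toy sketch. This is a hypothetical word-level count model for illustration only; real LLMs use learned neural networks over subword tokens, not raw counts:

```python
import random
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then generate text
# by repeatedly sampling the next word in proportion to how often it
# followed the previous one. A crude caricature of "guessing what
# comes next based on training data".
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=5):
    out = [word]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:  # dead end: nothing ever followed this word
            break
        words = list(counts)
        weights = [counts[w] for w in words]
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

The model has no notion of what a cat or a mat is; it only knows that, after "the", "cat" appeared twice and "mat" once, which is the gap between pattern-matching and understanding that the article is pointing at.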
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with it.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal mental states with sensory representations of bodily states (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
Philosophers are so desperate for humans to be special.
How is outputting things based on things it has learned any different to what humans do?
We observe things, we learn things and when required we do or say things based on the things we observed and learned. That's exactly what the AI is doing.
I don't think we have achieved "AGI" but I do think this argument is stupid.
How is outputting things based on things it has learned any different to what humans do?
Humans are not probabilistic, predictive chat models. If you think reasoning is taking a series of inputs, and then echoing the most common of those as output then you mustn't reason well or often.
If you were born during the first industrial revolution, then you'd think the mind was a complicated machine. People seem to always anthropomorphize inventions of the era.
If you were born during the first industrial revolution, then you'd think the mind was a complicated machine. People seem to always anthropomorphize inventions of the era.
When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.
By this logic we never came up with anything new ever, which is easily disproved if you take two seconds and simply look at the world around you. We made all of this from nothing and it wasn't a probabilistic response.
Your lack of creativity is not a universal, people create new things all of the time, and you simply cannot program ingenuity or inspiration.
Dude chatbots lie about their "internal reasoning process" because they don't really have one.
Writing is an offshoot of verbal language, which, as people construct it, almost always has more to do with sound and personal style than with the popularity of words. It's not uncommon to bump into individuals who have a near singular personal grammar and vocabulary, and who speak and write completely differently, with a distinct style of their own. Also, people are terrible at probabilities.
As a person, I can also learn a fucking concept and apply it without having to have millions of examples of it in my "training data". Because I'm a person not a fucking statistical model.
But you know, you have to leave your house, touch grass, and actually listen to some people speak that aren't talking heads on television in order to discover that truth.
Is that why you love saying touch grass so much? Because it's your own personal style and not because you think it's a popular thing to say?
Or is it because you learned the fucking concept and not because it's been expressed too commonly in your "training data"? Honestly, it just sounds like you've heard too many people use that insult successfully and now you can't help but probabilistically express it after each comment lol.
Maybe stop parroting other people and projecting that onto me and maybe you'd sound more convincing.
Is that why you love saying touch grass so much? Because it’s your own personal style and not because you think it’s a popular thing to say?
In this discussion, it's a personal style thing combined with a desire to irritate you and your fellow "people are chatbots" dorks and based upon the downvotes I'd say it's working.
And that irritation you feel is a step on the path to enlightenment if only you'd keep going down the path. I know why I'm irritated with your arguments: they're reductive, degrading, and dehumanizing. Do you know why you're so irritated with mine? Could it maybe be because it causes you to doubt your techbro mission statement bullshit a little?
Who's a techbro? The fact that you can't even have a discussion without repeating a meme two comments in a row and slapping a label on someone so you can stop thinking critically is really funny.
Is it techbro of me to think that pushing AI into every product is stupid? Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines? You say I'm being reductive, degrading, and dehumanising, but that's all simply based on your insecurity.
I was simply being realistic based on the little we know of the human brain and how it works; it pretty much is that until we discover the special something that makes us better than other neural networks. Without that discovery, your insistence is based on nothing more than your own desire to feel special.
Is it tech bro of me to not assume immediately that humans are so much more special than simply organic thinking machines?
Yep, that's a bingo!
Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.
There's a much more interesting discussion to be had than "humans are basically chatbots", but it's this line of thinking that I find irritating.
If humans are simply thought processes or our productive output then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be) then you can feel free to dispose of humanity. It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.
Humans are absolutely more special than organic thinking machines. I'll go a step further and say all living creatures are more special than that.
Show your proof, then. I've already said what I need to say about this topic.
If humans are simply thought processes or our productive output then once you have a machine capable of thinking similarly (btw chatbots aren't that and likely never will be) then you can feel free to dispose of humanity.
We have no idea how humans think, yet you're so confident that LLMs don't and never will be similar? Are you the techbro now? You're speaking so confidently about something that I don't think can be proven at this moment, and I typically associate that with techbros trying to sell their products. Also, why are you talking about disposing of humanity? Your insecurity level is really concerning.
Understanding how the human brain works is a wonderful thing that will let us unlock better treatments for mental health issues. Being able to understand them fully means we should also be able to replicate them to a certain extent. None of this involves disposing of humans.
It's a nice precursor to damning humanity to die so that you can have your robot army take over the world.
This is just more of you projecting your insecurity onto me and accusing me of doing things you fear. All I've said is that human thoughts are also probabilistic, based on the little we know of them. The fact that your mind wanders so far off into thoughts about me justifying a robot army takeover of the world is just you letting your fear run wild into the realm of conspiracy theory. Take a deep breath and maybe take your own advice and go touch some grass.
Wdym? That depends on what I'm working on. For pressing issues like rising energy consumption, CO2 emissions and civil privacy / social engineering issues, I propose heavy data center tariffs for non-essentials (like "AI"). Humanity is going the wrong way on those issues, so we can have shitty memes and cheat at schoolwork until earth spits us out. The cost is too damn high!
If you don't think humans can conceive of new ideas wholesale, then how do you think we ever invented anything (like, for instance, the languages that chat bots write)?
Also, you're the one with the burden of proof in this exchange. It's a pretty hefty claim to say that humans are unable to conceive of new ideas and are simply chatbots with organs given that we created the freaking chat bot you are convinced we all are.
You may not have new ideas, or be creative. So maybe you're a chatbot with organs, but people who aren't do exist.
Haha coming in hot I see. Seems like I've touched a nerve. You don't know anything about me or whether I'm creative in any way.
All ideas have a basis in something we have experienced or learned. There is no completely original idea. All music was influenced by something that came before it, all art by something the artist saw or experienced. This doesn't make it bad, and it doesn't mean an AI could have done it.
You seem to think that one day somebody invented the first language, or made the first song?
There was no "first language" and no "first song". These things would have evolved from something that was not quite a full language, or not quite a full song.
Animals influenced the first cave painters, that seems pretty obvious.
Yeah dude, at one point there were no languages and no songs. You can get into "what counts as a language", but at one point there were none. Same with songs.
Language specifically was pretty unlikely to be an individual effort, but at one point people grunting at each other became something else entirely.
Your whole "there is nothing new under the sun" way of thinking is just an artifact of the era you were born in.
Haha wtf are you talking about. You have no idea what generation I am, you don't know how old I am and I never said there is nothing new under the sun.
Pointing out that humans are not the same as a computer or piece of software, on a fundamental level of form and function, is hardly philosophical. It's just basic awareness of what a person is and what a computer is. We can't say at all for sure how things work in our brains, yet you are evangelizing that computers are capable of the exact same thing, but better, and you accuse others of not understanding what they're talking about?