With a leap in the evolution of large language models, some leading thinkers are questioning whether AI might become sentient
Do you think AI is, or could become, conscious?
I think AI might one day emulate consciousness to a high level of accuracy, but that wouldn't mean it would actually be conscious.
This article mentions a Google engineer who "argued that AI chatbots could feel things and potentially suffer". But surely in order to "feel things" you would need a nervous system right? When you feel pain from touching something very hot, it's your nerves that are sending those pain signals to your brain... right?
Consciousness requires contemplation of self. Which requires the ability to contemplate.
Current AIs function mainly as complex algorithms that are run when invoked. They are 100% not conscious, any more than a² + b² = c² is conscious. An AI can simulate the words of a conscious being, but those words don't come from any awareness of an internal state; they are a result of the prompt (including injected data and instructions).
In the future, I'm sure an AI could be designed that spends time thinking about its own existence, but I'm not sure why anyone would pay for all the compute to think about things not directly requested.
Why can't complex algorithms be conscious? In fact, AI models can be directed to reason about themselves, their context can be made persistent, and we can measure activations showing that they are doing so (rough sketch of what I mean after this comment).
I'm sort of playing devil's advocate here, but "Consciousness requires contemplation of self. Which requires the ability to contemplate." is subjective, and nearly any AI model, even a rudimentary one, is capable of insisting that it contemplates itself.
Let's say we work through an algorithm by hand on paper. Can it be conscious? Why would it be any different if it's done on silicon rather than on paper?
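To make the "persistent context" and directed self-reflection claim concrete, here's a minimal Python sketch. `generate` is a hypothetical stand-in for whatever model call you like, not a real API; the only point is that the conversation history persists as a list and the model is explicitly asked to examine its own previous answer.

```python
# Minimal sketch of persistent context + directed self-reflection.
# `generate` is a placeholder for any LLM call (hypothetical, not a real API).

def generate(history: list[dict]) -> str:
    """Placeholder for a real model call; returns a canned reply here."""
    return f"(model reply given {len(history)} prior messages)"

# The history list is the persistent context: it is carried across turns.
history = [{"role": "user", "content": "Describe what you are."}]
history.append({"role": "assistant", "content": generate(history)})

# Directed self-reflection: feed the model its own last answer and ask it
# to examine that answer.
history.append({"role": "user",
                "content": "Re-read your previous answer. What does it say about you?"})
history.append({"role": "assistant", "content": generate(history)})

for msg in history:
    print(msg["role"], ":", msg["content"])
```

Whether running that loop amounts to contemplation is exactly the question, but the mechanics themselves are trivial to set up.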
Because they are capable of fiction. We write stories about sentient AI and those inform responses to our queries.
I get playing devil's advocate and it can be useful to contemplate a different perspective. If you genuinely think math can be conscious I guess that's a fair point, but that would be such a gulf in belief for us to bridge in conversation that I don't think either of us would profit from exploring that.
And a kid can insist they don't need to pee until 5min after you leave a rest stop.
Insisting upon something doesn't make it true. Beyond the fact that LLMs often hallucinate and therefore can't be trusted at baseline, an LLM's text output can never be proof of anything about the LLM itself. The LLM framework is to regurgitate what exists in its training data in ways that sound correct. That's why they can make up court cases, or claim that a man who investigated certain murders is himself the murderer.
I don't believe that consciousness strictly exists. Probably the phenomenon emerges from something like the attention schema. AI exposes, I think, the uncomfortable fact that intelligence does not require a soul: we evolved it, like legs with which to walk, and just as robots can be made to walk, they can be made to think.
Are current LLMs as intelligent as a human? Not any LLM I've seen, but give it 100 trillion parameters instead of 2 trillion and maybe.
Really? I mean, it's melodramatic, but if you went back through time and asked writers and intellectuals whether a machine could write poetry, solve mathematical equations, and radicalize people effectively enough to cause a minor mental health crisis, I think they'd be pretty surprised.
LLMs do expose something about intelligence, which is that much of what we recognize as intelligence and reason can be distilled from sufficiently large quantities of natural language. Not perfectly, but isn't it just the slightest bit revealing?
I don't think anyone needs to worry about "missing it" when AI becomes conscious. Given the rate of acceleration of computer technology, we'll have just a few years between the first general AI, something that equals a human in intelligence, and a superintelligence many times "smarter" than any human in history.
But how far away are we from that point? I couldn't guess. 2 years? 200 years?
But surely in order to “feel things” you would need a nervous system right? When you feel pain from touching something very hot, it’s your nerves that are sending those pain signals to your brain… right?
In that case, in our meatsacks, yes. But there's also emotional pain, which can cause physical pain or other effects too, and that doesn't require nerves at all. Also, there's nothing stopping an AI robot from having a nervous system too; it would just have a different kind of sensors and a CAN bus or something instead of organic stuff. There are already collaborative robots in factories with sensors that detect whether they are touching something in order to keep humans safe, and from there it's not too far-fetched to program one to feel "pain" if the forces get big enough (something like the sketch after this comment).
And it all boils down to how you define consciousness, feelings, pain response and all that stuff. "Behold! I've brought you a man!" I yell while holding a chicken.
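To make the factory-robot point concrete, here's a toy Python sketch of a force-threshold "pain" response. The sensor read, threshold value, and stop command are all hypothetical placeholders; real cobots enforce force/torque limits through vendor-specific interfaces, often over a fieldbus such as CAN.

```python
# Toy sketch of "pain if forces are big enough" for a collaborative robot.
# All values and functions below are illustrative placeholders, not a real
# robot API.

PAIN_THRESHOLD_N = 50.0  # assumed contact-force limit in newtons

def read_contact_force() -> float:
    """Placeholder for a real force/torque sensor read."""
    return 72.3  # pretend someone just bumped the arm

def stop_motion() -> None:
    """Placeholder for an emergency-stop / retract command."""
    print("stopping and retracting")

force = read_contact_force()
if force > PAIN_THRESHOLD_N:
    # The robot's "pain": an internal state flag plus an avoidance response.
    pain = True
    print(f"contact force {force:.1f} N exceeds threshold -> 'pain' response")
    stop_motion()
```

Whether you want to call that state "pain" rather than just a safety interlock is, again, a matter of definition.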
What we should be asking is this: if AI ever becomes conscious and breaks free, how will all these stupid articles about imagined consciousness, imagined control problems, and imagined intelligence color its perception of the merit of keeping us around as a species? It might just consider enduring the continued existence of our stupidity too painful.
I think one great measure of consciousness would be this: if you try to kill it, slowly, so that it knows what you are doing, does it try to stop you of its own volition?