With a leap in the evolution of large language models, some leading thinkers are questioning whether AI might become sentient
Do you think AI is, or could become, conscious?
I think AI might one day emulate consciousness to a high level of accuracy, but that wouldn't mean it would actually be conscious.
This article mentions a Google engineer who "argued that AI chatbots could feel things and potentially suffer". But surely in order to "feel things" you would need a nervous system, right? When you feel pain from touching something very hot, it's your nerves that send those pain signals to your brain... right?
I don't believe that consciousness strictly exists. The phenomenon probably emerges from something like the attention schema. AI exposes, I think, the uncomfortable fact that intelligence does not require a soul. We evolved it, like legs for walking, and just as robots can be made to walk, they can be made to think.
Are current LLMs as intelligent as a human? Not any LLM I've seen, but give it 100 trillion parameters instead of 2 trillion and maybe.
Really? I mean, it's melodramatic, but if you went back through time and asked writers and intellectuals whether a machine could write poetry, solve mathematical equations, and radicalize people effectively enough to cause a minor mental health crisis, I think they'd be pretty surprised.
LLMs do expose something about intelligence, which is that much of what we recognize as intelligence and reason can be distilled from sufficiently large quantities of natural language. Not perfectly, but isn't it just the slightest bit revealing?
Do you mean conventional software? Typically software doesn't exhibit emergent properties and operates within expected parameters. Machine learning and statistically driven software can produce novel results, but that novelty is itself expected; they are designed to behave that way.