I wish philosophy were taught a bit more seriously.
An exploration of the philosophical concepts of simulacra and eidolons would probably change the way a lot of people view LLMs and other generative AI.
An alarming number of Hollywood screenwriters believe consciousness (sapience, self awareness, etc.) is a measurable thing or a switch we can flip.
At best, consciousness is a sorites paradox. At worst, it doesn't exist: while meat brains can engage in sophisticated cognitive processes, we're still indistinguishable from p-zombies.
I think the latter is more likely, and it will reveal itself when AGI (or genetically engineered smart animals) can chat and assemble flat-pack furniture as well as humans can.
(On mobile. Will add definition links later.) << Done!
The LLM peddlers seem to be going for that exact result. That's why they're calling it "AI". Why is it surprising that non-technical people are falling for it?
This is an angle I've never considered before, with regard to a future dystopia with a corrupt AI running the show. AI might never advance beyond what it is in 2025, but because people believe it's a supergodbrain, we start putting way too much faith in its flawed output, and it's our own credulity that dismantles civilisation rather than a runaway LLM with designs of its own. Misinformation unwittingly codified and sanctified by ourselves via ChatGeppetto.
The call is coming from inside the mechanical Turk!
Why are we so quick to assume machines cannot achieve consciousness?
Unless you can point me to the existence of a spirit or soul, there's nothing that makes our consciousness unique compared to what computers are capable of accomplishing.
Lots of attacks on Gen Z here, with some valid points about the education they were given by the older generations (yet somehow it's their fault). Good thing none of the other generations are being fooled by AI marketing tactics, right?
The debate on consciousness is one we should be having, even if LLMs themselves aren't really there. If you're new to the discussion, look up AI safety and the alignment problem. Then realize that while people think it's about preparing for a true AGI with something akin to consciousness and the dangers that we could face, we have alignment problems even without an artificial intelligence. If we think a machine (or even a person) is doing things for the same reasons we want them done, and they aren't, but we can't tell that, that's an alignment problem. Everything's fine until they pursue their goals and those goals suddenly diverge from ours. And the dilemma is: there aren't any good solutions.
But back to the topic. All this is not the fault of Gen Z. We built this world the way it is and raised them to be gullible and dependent on technology. Using them as a scapegoat (those dumb kids) is ignoring our own failures.
They're also the dumbest generation, with a COVID education handicap and the least technological literacy in terms of mechanical comprehension. They have grown up with technology refined enough that they never needed to learn troubleshooting skills beyond "reboot it".
That they don't understand an LLM can't be conscious is not surprising. LLMs are a neat trick, but far from anything close to consciousness or intelligence.
I tried to explain a directory tree to one of them (a supposedly technical resource) for twenty minutes and failed. They're idiots. They were ruined by baby tech like iPhones, iPads, and now AI.
That's a matter of philosophy and what a person even understands "consciousness" to be. You shouldn't be surprised that others come to different conclusions about the nature of being and what it means to be conscious.
At some point in the mid-late 1990s, I recall having a (technically-inclined) friend who dialed up to a BBS and spent a considerable amount of time pinging and then chatting with Lisa, the "sysadmin's sister". When I heard about it, I spent quite some time arguing with him that Lisa was a bot. He was pretty convinced that she was human.
The world is going to be absolutely fucked when the older engineers and techies who built all this modern shit and/or maintain it and still understand it all retire or die off.
In any case, couldn't it be that a machine consciousness would look rather different from what we assume?
Some factors would obviously be diminished or absent, e.g. emotional intelligence.
But a technological superintelligence, if ever reached, may pose a number of unexpected problems for us. We should concentrate on unexpected outcomes and establish safeguards.