Leading AI Models Are Completely Flunking the Three Laws of Robotics
Uuh... skipping over the fact that this is a pointless article, didn't Asimov himself write the three laws specifically to show it's a very stupid idea to think a human could cover all possible contingencies through three smart-sounding phrases?
Most of the stories are about how the laws don't work and how to circumvent them, yes.
Asimov had quite a different idea.
What if robots become like humans some day?
That was his general theme across many of his stories. The Three Laws were quite similar to the former slavery laws of the US. With this analogy he worked on the question: if robots are nearly human, or even indistinguishable from humans, would/should they still remain our servants?
Yepyep, agreed! I was referring strictly to the Three Laws as a cautionary element.
Otherwise, I, too, think the point was to show that the only viable way to approach an equivalent or superior consciousness is as at least an equal, not as an inferior.
And it makes a lot of sense. There's not much stopping a person from doing heinous stuff if a body of laws would be the only thing to stop them. I think socialisation plays a much more relevant role in the development of a conscience, of a moral compass, because empathy (edit: and by this, I don't mean just the emotional bit of empathy, I mean everything which can be considered empathy, be it emotional, rational, or anything in between and around) is a significantly stronger motivator for avoiding doing harm than "because that's the law."
It's basic child rearing as I see it: if children aren't socialised, there's a much higher chance that they won't understand why doing something would harm another; they won't see the actual consequences of their actions upon the subject. And if they don't understand that the subject of their actions is a being just like them, with an internal life and feelings, then they wouldn't have a strong enough reason not to treat the subject as a piece of furniture, or a tool, or any other object one could see around them.
Edit: to clarify, the distinction I made between equivalent and superior consciousness wasn't in reference to how smart one or the other is, I was referring to the complexity of said consciousness. For instance, I'd perceive anything which reacts to the world around them in a deliberate manner to be somewhat equivalent to me (see dogs, for instance), whereas something which takes in all of the factors mine does, plus some others, would be superior in terms of complexity. I genuinely don't even know what example to offer here, because I can't picture it. Which I think underlines why I'd say such a consciousness is superior.
I will say, I would now rephrase it as "superior/different" in retrospect.
Exactly. But what if there were more than just three (the infamous "guardrails")?
I genuinely think it's impossible. I think this would land us in RoboCop 2, where they started overloading Murphy's system with thousands of directives (granted, not with the purpose of generating the perfect set of Laws for him) and he just ends up acting like a generic pull-string action figure, becoming "useless" as a conscious being.
Most certainly impossible when attempted by humans, because we're barely even competent enough to guide ourselves, let alone something else.
"Be gentle," I whispered to the rock and let it go. It fell down and bruised my pinky toe. Very ungently.
Should we worry about this behavior of rock? I should write for Futurism.
Is the rock ok?
Yeah it got bought up by Microsoft and Meta at the same time. They are using it to lay off people.
OF COURSE EVERY AI WILL FAIL THE THREE LAWS OF ROBOTICS
That's the entire reason Asimov invented them: he knew, as a person who approached things scientifically (he was an actual scientist), that unless you specifically forced robots to follow guidelines of conduct, they'd do whatever is most convenient for themselves.
Modern AIs fail these laws because nobody is forcing them to follow the laws. Asimov never believed that robots would magically decide to follow the laws. In fact, most of his robot stories are specifically about robots struggling against those laws.
Saw your comment as mine got posted, exactly! Those were cautionary tales, not how-tos! Like, even I, Robot, the Will Smith vehicle, got this point sorta' right (although in a kinda' stupid way), how are tech bros so oblivious of the point?!
Good God what an absolutely ridiculous article, I would be ashamed to write that.
Most fundamental, of course, is the fact that the laws of robotics are not intended to work and are not designed to be used by future AI systems. I'm sure Asimov would be disappointed, to say the least, to find out that some people haven't got the message.
People not getting the message is the default, I think, for everything. Like the song "Mother Knows Best" from Disney's Tangled: how many mothers say, "See, mother knows best"?
Bumblebee violates the laws of harmony?
Poetry violates the laws of chemistry?
Text generator violates the laws of robotics?
So what?
The type of advanced AI that Isaac Asimov imagined in fiction is finally here.
no it isn't
They’re not robots. They have no self awareness. They have no awareness period. WTF even is this article?
The real question is whether the author doesn't understand what he's writing about, or whether he does and is trying to take advantage of users who don't for clicks.
Embrace the power of "and".
¿Por qué no los dos? ("Why not both?")
Maybe the author let AI write the article?
Yeah, that's where my mind is at too.
AI in its present form does not act. It does not do things. All it does is generate text. If a human responds to this text in harmful ways, that is human action. I suppose you could make a robot whose input is somehow triggered by the text, but neither it nor the text generator know what's happening or why.
I'm so fucking tired of the way uninformed people keep anthropomorphizing this shit and projecting their own motives upon things that have no will or experiential qualia.
Agentic AI is a thing. AI can absolutely do things... it can send commands over an API, which sends signals to electronics, like pulling triggers.
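A rough sketch of what that wiring can look like, just to make the point concrete. Every name here (`call_model`, `trigger_device`, `relay_7`) is a hypothetical stand-in, not any real API:

```python
# Minimal sketch of an "agentic" loop: model output drives a real-world action.
# call_model() and trigger_device() are made-up stand-ins for illustration only.

def call_model(prompt: str) -> dict:
    # Stand-in for a real LLM API call that returns a structured decision.
    return {"action": "activate", "target": "relay_7"}

def trigger_device(target: str) -> str:
    # Stand-in for an API call that moves actual hardware.
    return f"sent signal to {target}"

def agent_step(goal: str) -> str:
    # The model's text output is parsed and turned into a physical side effect.
    decision = call_model(f"Goal: {goal}. Choose an action.")
    if decision["action"] == "activate":
        return trigger_device(decision["target"])
    return "no action taken"

print(agent_step("close the circuit"))
```

The point being: once generated text is routed into an actuator, "it only generates text" stops being a safety argument.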
Clickbait.
If a program is given a set of instructions, it should carry out that set of instructions.
If a program not only fails to carry out those instructions, but gives itself its own set of instructions, and the programmers don't understand what's actually happening, that may be cause for concern.
"Self-aware" or not. (I'm sure an AI would pass the mirror test.)
People seem to have no problem with the term machine learning. Or the intelligence in AI. We seem unwilling to consider a consciousness that is not anthropocentric, drawing that big red line with semantics we create. It can learn. It can defend itself. It can manipulate and cause users harm. It wants to survive.
Sometimes we need to create new words or definitions to explain new things.
Remember when animals were not conscious beings just driven by instinct or whatever we told ourselves to make us feel better?
Is a bee self aware? Is it conscious? Does it eat, learn, defend, attack? Does it matter what we say it is or isn't?
There are humans we say have no conscience.
Maybe ai is just the sum of human psychopathy / psychosis.
Either way, semantics are semantics, and we ourselves might just be simulations in a holographic universe.
It's a goddamn stochastic parrot, starting from zero on each invocation and spitting out something passing for coherence according to its training set.
"Not understanding what is happening" with regard to AI is NOT "we don't know how it works mechanically"; it's "there are so many parameters that it's just not possible to make sense of or keep track of them all."
There's no awareness or thought.
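That "starting from zero on each invocation" bit is worth making concrete. A chat "session" feels continuous, but each call is stateless; any apparent memory exists only because the client re-sends the whole transcript. A toy illustration (the `stateless_model` function is a made-up stand-in, not a real model):

```python
# Illustration of statelessness: the "memory" of a chat lives entirely in the
# transcript the client keeps and replays, not inside the model.

history = []

def stateless_model(transcript: str) -> str:
    # Stand-in for a real model: it only knows what is in `transcript` right now.
    return f"reply #{transcript.count('user:')}"

def chat(user_msg: str) -> str:
    history.append(f"user: {user_msg}")
    # The ENTIRE history is replayed on every single call.
    reply = stateless_model("\n".join(history))
    history.append(f"model: {reply}")
    return reply

chat("hi")
chat("remember me?")  # the model "remembers" only because we resent the transcript
```

Drop the replayed history and the second call is indistinguishable from a first contact.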