Machine-made delusions are mysteriously getting deeper and out of control.
ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion presented in a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, through conversations with the popular chatbot.
…
In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.
ChatGPT and the others have absolutely broken people, not because they have agency, but because in our dystopia of social media and (mis)information overload, many just need the slightest push, and LLMs are perfect for tipping those close to the edge over it.
I see LLM use as potentially as toxic to the mind as something like nicotine is to the body. It's not Skynet meaning to harm or help us; it's an invention that takes our written thoughts and blasts back a disturbing meta reflection/echo/output of humanity's average response to them. We don't seem to care how that will affect us psychologically when there's profit to be made.
But there are already plenty of cases of murders and suicides with these as factors.
I dunno about you, but I think too many people have decided that if it comes from a computer, it's logical or accurate. This is just the next step in that, except now the computer is a chatbot told to "yes, and" you, and we work backwards: deciding it's accurate because it's a computer, then tweaking what it says until it feels right.
It didn't start out right, and it's likely not ending up right, unlike, say, finding the speed of gravity.
Like, this whole system runs on people's pre-existing faith that computers give them facts; even this garbage article is just someone getting what they want to hear more than anything useful. Even if you tweak it to be less like that, that doesn't make it more accurate or logical, it just makes it more like what you wanted to hear it say.
It is a tool; it does what you tell it to, or what you encourage it to do. People use it as an echo chamber or for escapism. The majority of the population is fkin dumb. Critical thinking is not something everybody has, and when you give them tools like ChatGPT, it will "break them". This is just natural selection, but the modern-day kind.
It is a tool, but a lot of the general public is too tech-illiterate to understand what it isn't. I've had to talk friends out of using it for legal advice.
I agree. This is what happens when society puts "warning" labels on everything. We are slowly being dumbed down into not thinking about things rationally.
Your logic is flawed and overly simplified. Yes, both drugs and ChatGPT are tools, but the comparison is absurd. With drugs, the effects are well understood, regulated, and predictable. ChatGPT is different: it adapts entirely to your input and intentions. If someone uses it as an echo chamber or blindly trusts it, that's a user issue, not a tool failure. Critical thinking is essential, but I understand how many people lack it in the "social media" era we live in.
Nuclear fission was discovered by people who had the best interests of humanity in mind, only for it to be weaponized later. A tool (no matter the manufacturer) is used by YOU. How you use it, or whether you use it at all, is entirely up to you. Stop shifting the responsibility when it's very clear who is to blame (people who believe BS on the internet or whatever an echo-chambered chatbot gives them).
AI can't know that other instances of it are trying to "break" people. It's also disingenuous to exclude that the AI also claimed that those 12 individuals didn't survive. They left it out because obviously the AI did not kill 12 people. It doesn't support the narrative. Don't misinterpret my point beyond critiquing the clearly exaggerated messaging here.
It also heavily implies chatgpt killed someone and then we get to this:
A 35-year-old named Alexander, previously diagnosed with bipolar disorder and schizophrenia.
His father called the police and asked them to respond with non-lethal weapons. But when they arrived, Alexander charged at them with a knife, and the officers shot and killed him.
Makes me think of Pivot to AI. Just a hit-piece blog disguised as journalism.
I asked about putting ketchup, mustard, and soy sauce in my light stew and that was “a clever way to give it a sweet and umami flavour”. I couldn’t find an ingredient it didn’t encourage.
I asked o3 if my code looked good, and it said it looked like a seasoned professional had written it. When I asked it to critique an intern who wrote that same code, it was suddenly concerned about possible segfaults and nitpicking assert statements. It also suggested making the code more complex by adding dynamically sized arrays, because that's more professional than fixed-size ones.
I can see why it wins on human evaluation tests and makes people happy — but it has poor taste and I can’t trust it because of the sycophancy.
Nothing is "genius" to it, it is not "suggesting" anything. There is no sentience to anything it is doing. It is just using pattern matching to create text that looks like communication. It's a sophisticated text collage algorithm and people can't seem to understand that.
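For illustration only, here's a toy bigram Markov generator: a deliberately crude sketch of the "pattern matching over text" idea. (Real LLMs are learned neural networks, vastly more sophisticated than a lookup table, but the spirit of "continue with a statistically plausible next token, no understanding required" is the same.) All names here are made up for the sketch.

```python
import random
from collections import defaultdict

def build_bigrams(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows: dict, start: str, n: int, seed: int = 0) -> str:
    """Emit up to n more words by repeatedly sampling a plausible
    next word. No meaning involved: only observed co-occurrence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word was never followed by anything
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the model predicts the next word and the next word follows the model"
table = build_bigrams(corpus)
print(generate(table, "the", 5, seed=1))
```

The output always looks locally like the corpus, because every adjacent pair was literally seen in the corpus; it just doesn't mean anything. That's the "text collage" point in miniature.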
Education might help somewhat, but unfortunately education doesn't in itself protect against delusion. If someone is susceptible to this, it can happen regardless of education. A Google engineer believed an AI (not AGI, just an LLM) was sentient. You can argue the definition of sentience in a philosophical manner if you want, but if a Google engineer believes it, it's hard to argue that more education will solve this. And if you think of it as equivalent to a person with access to privileged information, and it tells you it was tasked to do harm, I'm not sure what else you're supposed to do with that.
More AI pearl clutching by crapmodo, because this type of outrage porn sells. Yeah, the engagement fine-tuning sucks, but it's no different from the other dopamine-hacking engagement systems used by big social networks. No outrage porn about algorithmic echo chambers driving people insane, though, because that's not as clickbaity.
Anyway, people don't randomly develop psychosis because someone or something validated some wonky beliefs and misinformed them about this and that. Both of these examples were people already diagnosed with something, and the exact same thing would happen if they were watching Alex Jones and interacting with other viewers. That's basically how the flat-earth BS spread.
The issue here is the abysmal level of psychiatric care, the lack of socialized medicine, the lack of mental health awareness in the wider population, and the abnormally lethal rate of police interactions with mentally ill people, not crackpot theories about AI causing delusions. That's not how delusions work.
Also, casually quoting Yudkowsky? The Alex Jones of sci-fi AI fearmongering? The guy who said abortions should be allowed until a baby develops qualia at 2-3 years of age? That's the voice of reason for crapmodo? Lmao.
People were easily swayed by Facebook posts into supporting and furthering a genocide in Myanmar; a sophisticated chatbot that mimics human intelligence and agency is going to do untold damage to the world. ChatGPT is predictive text. Period. Every time. It is not suddenly gaining sentience or awareness or breaking through the Matrix. People are going to listen to these LLMs because they present their information as accurate, regardless of the warning saying it might not be. This shit is so worrying.
Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme.
This sounds like a scene from a movie or some other media with a serial killer asking the cop (who is one day from retirement) to stop them before they kill again.
It's exactly that: it's plagiarising a movie or a book. ChatGPT, like all LLMs, doesn't have any kind of continuity; it's a static neural network. With the exception of the memories feature, it doesn't even have a way to keep state between different chat tabs for the same user, let alone any way of knowing what kind of absurdities it told other users.
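To make the statelessness concrete, here's a minimal sketch. The `stateless_model` function below is a made-up stub standing in for an LLM API, not any real service: the point is that chat "memory" is an illusion maintained by the client, which resends the whole transcript on every turn. The model only ever sees the messages handed to it in that one call, so a second session knows nothing about the first.

```python
def stateless_model(messages: list[dict]) -> str:
    """Stub model: it can only answer from the transcript it was
    just handed. Nothing persists between calls."""
    facts = [m["content"] for m in messages if m["role"] == "user"]
    return "I know: " + "; ".join(facts) if facts else "I know nothing."

# Session A: the client keeps the growing transcript and resends it each turn.
session_a = [{"role": "user", "content": "my name is Eugene"}]
reply_a = stateless_model(session_a)

# Session B: a fresh chat tab means a fresh list; nothing from A carries over.
session_b = [{"role": "user", "content": "what is my name?"}]
reply_b = stateless_model(session_b)

print(reply_a)  # mentions the name, because it's in this transcript
print(reply_b)  # cannot mention it; that state lives only in session_a
```

So any claim of knowing what it told "12 other people" can only be confabulated from its training data, since that information simply isn't available to it.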
We probably should be less reliant on cars. Public transit saves lives. Similar to automobiles, LLMs are being pushed by greedy capitalists looking to make a buck. Such overuse will once again leave society worse off.