For one German reporter, the statistical underpinnings of a large language model meant his many bylines were wrongly warped into a lengthy rap sheet.
When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot falsely blamed him for the crimes whose trials he had covered.
The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.
But why did Copilot hallucinate these terrible and false accusations?
It’s frustrating that the article treats the problem as though the mistake was including Martin’s name in the data set, and muses that that part isn’t fixable.
Martin’s name is a natural feature of the data set, but when they should be talking about fixing the AI model to stop hallucinations, or allowing humans to correct them, it seems the only fix on offer is to censor the incorrect AI response, which implies it was saying something true but salacious.
Most of these problems would go away if AI vendors exposed the reasoning chain instead of treating their bugs as trade secrets.
It's suspected to be one of the reasons why Claude and OpenAI's new o1 model are so good at reasoning compared to other LLMs.
They can sometimes notice hallucinations and adjust, but there have also been examples where the CoT reasoning itself introduces hallucinations and makes the model throw away correct answers. So it's not perfect. Overall a big improvement though.
No need for that subjective stuff. The objective explanation is very simple: the output of the LLM is sampled using a random process, a loaded die with probabilities according to the LLM's output. It's as simple as that. There is literally a random element that is not part of the LLM itself, yet is required for its output to be of any use whatsoever.
Not really. The purpose of the transformer architecture was to get around this limitation through the use of attention heads. Copilot or any other modern LLM has this capability.
The LLM does not give you the next token. It gives you a probability distribution over what the next token could be. Then, after the LLM, that probability distribution is randomly sampled.
You could add billions of attention heads and there would still be an element of randomness at the end. Copilot and every other LLM (past, present, or future) have this problem too. They all "hallucinate" (have a random element in choosing the next token).
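To make the loaded-die point concrete, here's a minimal sketch (the "model", its vocabulary, and the probabilities are all invented for illustration): the model only produces a distribution over next tokens, and a separate, genuinely random sampling step rolls the die.

```python
import random

def fake_llm_distribution(context):
    # Hypothetical fixed next-token distribution; a real model would
    # compute this from the context.
    return {"reporter": 0.6, "defendant": 0.3, "witness": 0.1}

def sample_next_token(distribution, rng):
    # The "loaded die": weighted random choice over candidate tokens.
    tokens = list(distribution)
    weights = [distribution[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)
dist = fake_llm_distribution("Martin Bernklau is a ...")
counts = {t: 0 for t in dist}
for _ in range(1000):
    counts[sample_next_token(dist, rng)] += 1
# The most likely token usually wins, but the unlikely ones still appear.
print(counts)
```

Note that the randomness lives entirely in `sample_next_token`; the model itself is deterministic, which is exactly the separation being described above.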
"Hallucinations" is the wrong word. To the LLM there's no difference between reality and "hallucinations", because it has no concept of reality or what's true and false. All it knows is what word maybe should come next. The "hallucination" only exists in the mind of the reader. The LLM did exactly what it was supposed to.
They're bugs. Major ones. Fundamental flaws in the program. People with a vested interest in "AI" rebranded them as hallucinations in order to downplay the fact that they have a major bug in their software and they have no fucking clue how to fix it.
It’s not a bug, just a negative side effect of the algorithm. This is what happens when the LLM doesn’t have enough data points to answer the prompt correctly.
It can’t be programmed out like a bug, but rather a human needs to intervene and flag the answer as false or the LLM needs more data to train. Those dozens of articles this guy wrote aren’t enough for the LLM to get that he’s just a reporter. The LLM needs data that explicitly says that this guy is a reporter that reported on those trials. And since no reporter starts their articles with ”Hi I’m John Smith the reporter and today I’m reporting on…” that data is missing. LLMs can’t make conclusions from the context.
It's an inherent negative property of the way they work. It's a problem, but not a bug any more than the result of a car hitting a tree at high speed is a bug.
Calling it a bug indicates that it's something unexpected that can be fixed, and as far as we know it can't be fixed, and is expected behavior. Same as the car analogy.
The only thing we can do is raise awareness and mitigate.
Well, it's not lying, because the AI doesn't know right from wrong. It doesn't know that it's wrong. It has no concept of right or wrong, or true or false.
For the LLM, hallucinations are just a result of combining statistics and producing the next word, as you say. From the LLM's "pov" it's as real as everything else it knows.
So what else can it be called? The closest concept we have is when the mind hallucinates.
AI sends police after him because of things he wrote.
Writer is on the run, trying to clear his name the entire time.
Somehow gets to broadcast the source of the articles to the world to clear his name.
Plot twist ending is that he was indeed the perpetrator behind all the crimes.
The AI did not “decide” anything. It has no will. And no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust on the output of such models. Let’s stop supporting this weird anthropomorphizing of LLMs. In fact we should probably become much more discerning in using the term “AI”, because it alludes to a general intelligence akin to human intelligence with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.
The AI "decided" in the same way the dice "decided" to land on 6 and 4 and screw me over. The system produced a result using logic and entropy. With AI, some people are just using this informal way of speaking (subconsciously anthropomorphising), while others look at it and genuinely believe, or want to pretend, it's alive. You can never really know without asking them directly.
Yes, if the intent is confusion, it is pretty manipulative.
Granted, our tendency towards anthropomorphism is near ubiquitous. But it would be disingenuous to claim that it does not play out in very specific and very important ways in how we speak and think about LLMs, given that they are capable of producing very convincing imitations of human behavior. And as such also produce a very convincing impression of agency. As if they actually do decide things. Very much unlike dice.
I don't think the Chinese room is a good analogy for this. The Chinese room has a conscious person at the center. A better analogy might be a book with a phrase-to-number conversion table, a couple number-to-number conversion tables, and finally a number-to-word conversion table. That would probably capture transformer's rigid and unthinking associations better.
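A toy version of that "book of conversion tables" might look like the following, with every table made up for illustration: rigid lookups chained together, producing a fluent-seeming answer with no understanding at any step.

```python
# Hypothetical conversion tables (all entries invented):
# phrase -> number -> number -> word, applied mechanically.
phrase_to_number = {"who is martin bernklau": 7}
number_to_number = {7: 42}          # an "association" remapping step
number_to_word = {42: "reporter"}   # final output table

def rigid_lookup(phrase):
    # Each step is a blind table lookup; nothing here "knows"
    # what a person, a trial, or a reporter is.
    n = phrase_to_number[phrase.lower()]
    n = number_to_number[n]
    return number_to_word[n]

print(rigid_lookup("Who is Martin Bernklau"))  # -> reporter
```

Real transformers interpolate between entries rather than matching exact phrases, but the point of the analogy holds: it's associations, not comprehension.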
Artificial General Intelligence (“Real AI”) is all but guaranteed to be possible. Because that’s what humans are. Get a deep enough understanding of humans, and you will be able to replicate what makes us think.
Barring that, there are other avenues for AGI. LLMs aren’t one of them, to be clear.
I actually don't think a fully artificial human-like mind will ever be built outside of novelty, purely because we ventured down the path of binary computing.
Great for mass calculation but horrible for the kinds of complex pattern recognitions that the human mind excels at.
The singularity point isn't going to be the matrix or skynet or AM, it's going to be the first quantum device successfully implanted and integrated into a human mind as a high speed calculation sidegrade "Third Hemisphere."
Someone capable of seamlessly balancing between human pattern recognition abilities and emotional intelligence while also capable of performing near instant multiplication of matrices of 100 entries of length in 15 dimensions.
The worrying truth is that we are all going to be subject to these sorts of false correlations and biases and there will be very little we can do about it.
You go to buy car insurance, and find that your premium has gone up 200% for no reason. Why? Because the AI said so. Maybe someone with your name was in a crash. Maybe you parked overnight at the same GPS location where an accident happened. Who knows what data actually underlies that decision or how it was made, but it was made. And even the insurance company itself doesn't know how it ended up that way.
Sure, and also people using it without knowing that it's glorified text completion. It finds patterns, and that's mostly it. If your task involves pattern recognition, then it's a great tool. If it requires novel thought, intelligence, or the synthesis of information, then you probably need something else.
Oh, this would be funny if people en masse were smart enough to understand the problems with generative AI. But because there are people out there like that one dude threatening to sue Mutahar (quoted as saying "ChatGPT understands the law"), this has to be a problem.
Generative AI and LLMs start by predicting the next word in a sequence. The words are generated independently of each other and when optimized: simultaneously.
The reason it used the reporter's name as the culprit is that, out of the names in the sample data, his name appeared at or near the top of the list of frequent names, so it was statistically likely to be the next name mentioned.
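As a rough illustration of that frequency effect (the mini-corpus below is entirely made up), counting which names co-occur with crime-related articles shows how a court reporter's byline can dominate the statistics:

```python
from collections import Counter

# Invented mini-corpus: the reporter's byline appears in every court
# article, while each defendant's name appears only once.
articles = [
    "convicted of fraud ... reported by Martin Bernklau",
    "child abuse trial ... reported by Martin Bernklau",
    "escape from psychiatric ward ... reported by Martin Bernklau",
    "convicted of theft ... reported by A. Defendant",
]
names = ["Martin Bernklau", "A. Defendant"]

co_occurrences = Counter()
for text in articles:
    for name in names:
        if name in text:
            co_occurrences[name] += 1

# The byline is the statistically dominant name near crime vocabulary.
print(co_occurrences.most_common(1))
```

A model with no notion of roles only sees that one name keeps appearing next to crime words, which is exactly the failure mode in the article.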
AI have no concepts, period. It doesn't know what a person is, or what the laws are. It generates word salad that approximates human statements. It is a math problem, statistics.
There are actual science fiction stories built on the premise that AI reporting on the start of Nuclear War resulted in actual kickoff of the apocalypse, and we're at that corner now.
IIRC, this was the running theory in Fallout until the show.
Edit: I may be misremembering, it may have just been something similar.
That's not quite true. AIs are not just picking the most common next word; they are using complex mathematical operations to calculate it. It's not just the word that's most frequent overall, it's the one that's most likely given the input.
No, trouble is that the AIs are only as smart as their algorithms, and Google's AI seems to be really goddamn stupid.
Point is, they're not all made equal. Some of them are actually quite impressive, although you are correct that none of them are actually intelligent.
AI have no concepts, period. It doesn’t know what a person is, or what the laws are. It generates word salad that approximates human statements.
This isn't quite accurate. LLMs semantically group words and have a sort of internal model of concepts and how different words relate to them. It's still nothing like a human's, and the model certainly does not "understand" what it's saying.
I get that everyone's on the "shit on AI train", and it's rightfully deserved in many ways, but you're grossly oversimplifying. That said, way too many people do give LLMs too much credit and think it's effectively magic. Reality, as is usually the case, is somewhere in the middle.
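The "semantic grouping" point can be sketched with toy word vectors (the numbers are hand-picked for illustration, not real embeddings): related words end up close together by cosine similarity, which is statistical association, not understanding.

```python
import math

# Hand-picked 3-d "embeddings" for illustration only.
vectors = {
    "reporter":   [0.9, 0.1, 0.0],
    "journalist": [0.8, 0.2, 0.1],
    "convict":    [0.1, 0.9, 0.3],
}

def cosine(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

print(cosine(vectors["reporter"], vectors["journalist"]))  # high: related
print(cosine(vectors["reporter"], vectors["convict"]))     # low: unrelated
```

Real embedding spaces have hundreds of dimensions, but the mechanism is the same: "related" just means "nearby in vector space".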
If this were some fiction plot, Copilot reasoned the plot twist, and ran with it.
Instead of the butler, the writer did it. To the computer, these are about the same.
These are not hallucinations, whatever that is supposed to mean lol
The tool is working as intended and getting wrong answers due to how it works. His name frequently had these words around it online, so the AI told the story it was trained on. It doesn't understand context. I am sure you can also ask it clarifying questions and it will admit it is wrong and correct itself...
AI hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.
Yes, hallucination is the now standard term for this, but it's a complete misnomer. A hallucination is when something that does not actually exist is perceived as if it were real. LLMs do not perceive, and therefore can't hallucinate. I know the word is stuck now, and fighting against it is like trying to bail out the tide, but it really annoys me and I refuse to use it. The phenomenon would be better described as a confabulation.
Sure, but which of these factors do you think were relevant to the case in the article? The AI seems to have had a large corpus of documents relating to the reporter. Those articles presumably stated clearly that he was the reporter and not the defendant. We are left with "incorrect assumptions made by the model". What kind of assumption would that be?
In fact, all of the results are hallucinations. It's just that some of them happen to be good answers and others are not. Instead of labelling the bad answers as hallucinations, we should be labelling the good ones as confirmation bias.