The increasingly human-like way people are engaging with language models
March 12, 2025
Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
Confident: 57% say the main LLM they use seems to act in a confident way.
Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
Sense of humor: 32% say their main LLM seems to have a sense of humor.
Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
Sarcasm: 17% say the main LLM they use seems to respond sarcastically.
Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
I'm 100% certain that LLMs are smarter than half of Americans. What I'm not so sure about is that the people with the insight to admit being dumber than an LLM are the ones who really are.
LLMs are made to mimic how we speak, and some can even pass the Turing test, so I'm not surprised that people who don't know better think of these LLMs as conscious in some way or another.
It's not necessarily a fault of those people; it's a fault of how LLMs are purposely misadvertised to the masses.
"Half of LLM users" believe this. Which means people who understand how flawed LLMs are, or what their actual function is, may simply not use LLMs and therefore aren't included in this statistic.
This is kinda like saying "60% of people who pay for their daily horoscope believe it is an accurate prediction."
I had to tell a bunch of librarians that LLMs are literally language models made to mimic language patterns, and are not made to be factually correct. They understood it when I put it that way, but librarians are supposed to be "information professionals". If they, as a slightly better trained subset of the general public, don't know that, the general public has no hope of knowing that.
LLMs are smart in the way someone is smart who has read all the books and knows all of them but has never left the house. Basically all theory and no street smarts.
Think of a question that you would ask an average person, and then think of what the LLM would respond with. The vast majority of the time, the LLM would be more correct than most people.
LLMs don't even think. Four year olds are more coherent. Given the state of politics, the people thinking LLMs are smarter than them are probably correct.
No, about a quarter of U.S. adults believe LLMs are smarter than they are. Only about half of adults are LLM users, and only about half of those users think that.
I wouldn't be surprised if that were true outside the US as well. People who actually (have to) work with the stuff usually learn quickly that it's only good at a few things, but if you just hear about it in the (pop, non-techie) media (including YT and such), you might be deceived into thinking Skynet is just a few years away.
I don't think a single human exists who knows as much as ChatGPT does. Does that mean ChatGPT is smarter than everyone? No. Obviously not, based on what we've seen so far. But the amount of information available to these LLMs is incredible and can be very useful. Like a library: it contains a lot of useful information but isn't intelligent itself.
If I think of what causes the average person to consider another to be “smart,” like quickly answering a question about almost any subject, giving lots of detail, and most importantly saying it with confidence and authority, LLMs are great at that shit!
They might be bad reasons to consider a person or thing “smart,” but I can’t say I’m surprised by the results. People can be tricked by a computer for the same reasons they can be tricked by a human.
This is sad. This does not spark joy. We're months from someone using "but look, ChatGPT says..." to try to win an argument. I can't wait to spend the rest of my life explaining to people that LLMs are really fancy bullshit-generator toys.
Aside from the unfortunate name of the university, I think that part of why LLMs may be perceived as smart or 'smarter' is because they are very articulate and, unless prompted otherwise, use proper spelling and grammar, and tend to structure their sentences logically.
Which 'smart' humans may not do, out of haste or contextual adaptation.
Just a thought, perhaps instead of considering the mental and educational state of the people without power to significantly affect this state, we should focus on the people who have power.
For example, why don't LLM providers explicitly and loudly state, or require acknowledgement, that their products are just imitating human thought and make significant mistakes regularly, and therefore should be used with plenty of caution?
It's a rhetorical question; we know why, and I think we should focus on that, not on its effects. It's also much cheaper and easier to do than refilling years of quality education into individuals' heads.
I wasn't sure from the title if it was "Nearly half of U.S. adults believe LLMs are smarter than [the US adults] are." or "Nearly half of U.S. adults believe LLMs are smarter than [the LLMs actually] are." It's the former, although you could probably argue the latter is true too.
Either way, I'm not surprised that people rate LLMs' intelligence highly. They obviously have limited scope in what they can do, and hallucinating false info is a serious issue, but you can ask them a lot of questions that your typical person couldn't answer and get a decent answer. I feel like they're generally good at meeting people's expectations of a "smart person," even if they have major shortcomings in other areas.
Wow. Reading these comments so many people here really don't understand how LLMs work or what's actually going on at the frontier of the field.
I feel like there's going to be a cultural sonic boom: when the shockwave finally catches up, people are going to be woefully underprepared based on what they think they saw.
They are. Unless you can translate what I'm saying to any language I tell you to on the fly, I'm going to assume that anyone that tells me they are smarter than LLMs are lower on the spectrum than usual. Wikipedia and a lot of libraries are also more knowledgeable than me, who knew. If I am grateful for one thing, it is that I am not one of those people whose ego has to be jizzing everywhere, including their perception of things.
As far as I can tell from the article, the definition of "smarter" was left to the respondents, and "answers as if it knows many things that I don't know" is certainly a reasonable definition -- even if you understand that, technically speaking, an LLM doesn't know anything.
As an example, I used ChatGPT just now to help me compose this post, and the answer it gave me seemed pretty "smart":
what's a good word to describe the people in a poll who answer the questions? I didn't want to use "subjects" because that could get confused with the topics covered in the poll.
"Respondents" is a good choice. It clearly refers to the people answering the questions without ambiguity.
The poll is interesting for the other stats it provides, but all the snark about these people being dumber than LLMs is just silly.
Don't they reflect how you talk to them? I.e., my ChatGPT doesn't have a sense of humor, isn't sarcastic or sad. It only uses formal language and doesn't use emojis. It just gives me ideas that I do trial and error with.
I suppose some of that comes down to the personal understanding of what "smart" is.
I guess you could call a person who doesn't understand a topic, but still manages to sound reasonable when talking about it, and might even convince people that they actually have a deep understanding of it, "smart": a kind of "smart impostor."
An LLM is roughly as smart as the corpus it is summarizing is accurate for the topic, because at their best they are good natural-language summarizers. Most of the main ones basically do an internet search and summarize the top couple of results, which means they are as good as the search engine backing them. Which is good enough for a lot of topics, but... not so much for the rest.
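The "search, then summarize" loop described above can be sketched in a few lines. This is purely illustrative: the corpus, the keyword-overlap scoring, and the first-sentence "summarizer" are all toy stand-ins I've invented here, whereas a real assistant uses an actual search engine and a neural model for both steps.

```python
def score(query, doc):
    """Crude relevance: count query words that appear in the document."""
    q = set(query.lower().split())
    d = set(doc.lower().replace(".", "").split())
    return len(q & d)

def search(query, corpus, k=2):
    """Return the top-k documents by the crude relevance score."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def summarize(docs):
    """Toy stand-in for the LLM step: keep each document's first sentence."""
    return " ".join(doc.split(". ")[0].rstrip(".") + "." for doc in docs)

corpus = [
    "Python is a programming language. It emphasizes readability.",
    "Pythons are large snakes. They are found in Asia and Africa.",
    "Coffee is a brewed drink. It contains caffeine.",
]

answer = summarize(search("python programming language", corpus))
print(answer)
```

The point the comment makes falls out of the structure: `summarize` can only be as good as what `search` hands it, so a weak backing index (or a thin corpus) caps the quality of the final answer no matter how fluent the summarizer is.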