Reasoning failures highlighted by Apple research on LLMs
Of course they don't; logical reasoning isn't just guessing which word or phrase comes next.
As much as some of these tech bros want human thinking and creativity to be reducible to mere pattern recognition, it isn't, and it never will be.
But the corpos and Capitalists don't care, because their whole worldview is based on the idea that humans are only as valuable as the profitability they generate for a company.
They don't see any value in poetry, or philosophy, or literature, or historical analysis, or visual arts unless it can be patented, trademarked, copyrighted, and sold to consumers at a good markup.
As if the only difference between Van Gogh's art and an LLM is the size of the sample data and the efficiency of an algorithm.
You don't have to get all philosophical, since the value of art is almost by definition debatable.
These models can't do basic logic. They already fail at this. And that's actually relevant to corpos if a customer can suddenly convince a chatbot to reduce their bill by 60% with some nonsensical statement like "bears don't eat mangos".
I keep thinking of the anticapitalist manifesto that a spinoff team from the Disco Elysium developers dropped, and this part in particular stands out to me and helps crystallize exactly why I don't like AI art:
All art is communication — dialogue across time, space and thought. In its rawest, it is one mind’s ability to provoke emotion in another. Large language models — simulacra, cold comfort, real-doll pocket-pussy, cyberspace freezer of an abandoned IM-chat — which are today passed off for “artificial intelligence”, will never be able to offer a dialogue with the vision of another human being.
Machine-generated works will never satisfy or substitute the human desire for art, as our desire for art is in its core a desire for communication with another, with a talent who speaks to us across worlds and ages to remind us of our all-encompassing human universality. There is no one to connect to in a large language model. The phone line is open but there’s no one on the other side.
I still think it's better to refer to LLMs as "stochastic lexical indexes" than AI
AI in general is a shitty term. It's mostly PR. The term "intelligence" is very fuzzy and difficult to define, especially for people who are not in the field of machine learning.
I work for a consulting company and they're truly going off the deep end pushing consultants to sell this miracle solution. They are now doing weekly product demos and all of them are absolutely useless hype grifts. It's maddening.
So... Just another Tuesday for consulting then?
No. In the non-sales world, I've built some really cool solutions for clients.
What, reasoning was an expected feature?
A CEO/executive that misunderstood AI yet again?
Research paper: https://arxiv.org/pdf/2410.05229
I still fail to see how people expect LLMs to reason. It's like expecting a slice of pizza to reason. That's just not what it does.
Although Porsche managed to make a car with the engine in the most idiotic place win literally everything on Earth, so I guess I'm leaving a little possibility that the slice of pizza will outreason GPT 4.
LLMs keep getting better at imitating humans thus for those who don't know how the technology works, it'll seem just like it thinks for itself.
I still fail to see how people expect LLMs to reason. It’s like expecting a slice of pizza to reason. That’s just not what it does.
This text provides a rather good analogy between people who think that LLMs reason and people who believe in mentalists.
That's a great article.
Water is wet. More at 11
Water isn’t wet, water wets things, and watered things are wet by the wet but the water ain’t wet as it simply causes wet and thus water isn’t truly wet as water is pure water and pure water isn’t wet and water is not wet and water isn’t wet it’s not wet it’s not wet it’s not dry it’s not wet and it’s not wet it is wet it’s wet and you can see it is wet but it doesn’t look like it it’s dry it’s just wet and it’s wet so I just need it and it’s wet it’s not like it’s dry it’s wet it’s wet so it’s not dry but it’s wet it’s not wet so it’s wet it’s not dry and it’s not dry it’s wet and I just want you know how it was just to be careful that I just don’t know what to say I don’t know what you can tell him I just don’t
I tried it myself (changing the name and the values) but lost interest after 3 attempts, always getting the right answer:
https://chatgpt.com/share/670af65d-da08-800f-8ad4-c67782ee5477
https://chatgpt.com/share/670af672-45dc-800f-ac91-cc2811fa89c7
https://chatgpt.com/share/6709e80b-e5a8-800f-90d0-1af3418675ef
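Roughly the kind of name/value swap I mean, as a minimal Python sketch. The template, names, and numbers here are made up for illustration (this is not the paper's benchmark or harness), and it assumes the OpenAI Python client with an API key in the environment rather than the web UI I actually used:

```python
# Sketch of re-testing a model on a toy word problem with swapped names and values.
# The template and numbers are invented; the OpenAI client is only a stand-in
# for whichever chatbot you want to poke at.
import random
from openai import OpenAI

TEMPLATE = (
    "{name} picks {monday} apples on Monday and {tuesday} apples on Tuesday. "
    "{name} then gives away {given} apples. How many apples does {name} have left?"
)

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Build one problem variant and its expected answer."""
    name = rng.choice(["Sofia", "Marcus", "Priya", "Tomás"])
    monday, tuesday = rng.randint(5, 30), rng.randint(5, 30)
    given = rng.randint(1, monday)  # keep the answer non-negative
    question = TEMPLATE.format(name=name, monday=monday, tuesday=tuesday, given=given)
    return question, monday + tuesday - given

client = OpenAI()  # reads OPENAI_API_KEY from the environment
rng = random.Random(0)

for _ in range(3):
    question, expected = make_variant(rng)
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question + " Answer with a number only."}],
    )
    print(expected, "->", reply.choices[0].message.content)
```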
I wouldn't doubt that LLMs got some special input to deal with the specific examples of this paper, or ones similar enough.
Here's a simple test showing lack of logic skills of LLM-based chatbots.
I'll exemplify it with ChatGPT-4o (as provided by DDG) and Katy Perry (parents: Mary Christine and Maurice Hudson).
Note that step #3 is not optional. You must start a new chat; plenty of bots are able to retrieve tokens from their previous output within the same chat, and that would taint the test.
Failure to consistently output correct information shows that those bots are unable to perform simple logic operations like "if A is the parent of B, then B is the child of A".
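If you'd rather script the check than click through a web UI, here's a minimal sketch of the same idea, assuming the OpenAI Python client as a stand-in for whichever bot you're testing (the DDG-hosted one has no API); each call is its own fresh chat, so nothing carries over between the two questions:

```python
# Sketch of the parent/child reversal check: ask child -> parents in one chat,
# then parents -> child in a completely separate chat, and compare.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    """One-shot question in a brand-new chat (no shared history)."""
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return reply.choices[0].message.content

# Forward direction: child -> parents.
print(ask("Who are the parents of Katy Perry?"))

# Reverse direction, in a new chat: parents -> child.
# If "A is the parent of B, so B is the child of A" were actually applied,
# this should name Katy Perry.
print(ask("Who is the child of Mary Christine and Maurice Hudson?"))
```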
I'll also pre-emptively address some ad hoc idiocy that I've seen sealions lacking basic reading comprehension (i.e. the sort of people who claim that those systems are able to reason) using against this test:
@TimelyJellyfish2077 interesting read, thanks for sharing
These models are nothing more than glorified autocomplete algorithms parroting the responses to questions that already existed in their training data.
They're completely incapable of critical thought or even basic reasoning. They only seem smart because people tend to ask the same stupid questions over and over.
If they receive an input that doesn't have a strong correlation to their training, they just output whatever bullshit comes close, whether it's true or not. Which makes them truly dangerous.
And I highly doubt that'll ever be fixed because the brainrotten corporate middle-manager types that insist on implementing this shit won't ever want their "state of the art AI chatbot" to answer a customer's question with "sorry, I don't know."
I can't wait for this stupid AI craze to eat its own tail.
I generally agree with your comment, but not on this part:
They're quite capable of following instructions over data where neither the instruction nor the data was anywhere in the training data.
Critical thought, generally no. Basic reasoning, that they're somewhat capable of. And chain of thought amplifies what little is there.
I don’t believe this is quite right. They’re capable of following instructions that aren’t in their training data but look like things which were (that is, it can probabilistically interpolate between what it has seen in training and what you prompted it with; this is why prompting can be so important). Chain of thought is essentially automated prompt engineering: if it’s seen a similar process (e.g. from an online help forum or study materials), it can emulate that process with different keywords and phrases. The models themselves, however, are not able to perform "a is to b, therefore b is to a", arguably the cornerstone of symbolic reasoning. This is in part because they have no state model or true grounding, only probabilities of observing a token given some context. So even with chain of thought, it is not reasoning; it’s just doing very fancy interpolation of the words and phrases in the initial prompt to generate a prompt that is probably going to give a better answer, not because of reasoning, but because of a stochastic process.
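To make the "probabilities of observing a token given some context" point concrete, here's a deliberately toy sketch of what sampling-based generation boils down to. The probability table is invented (a real model computes it with a neural network over a huge vocabulary), but the control flow is the point: map a context to a distribution over next tokens and sample from it; nothing ever applies a rule like "if A is the parent of B, then B is the child of A".

```python
# Toy sketch of autoregressive sampling: the model only ever turns a context
# into a probability distribution over the next token and samples from it.
# The hard-coded table below stands in for a real model and is obviously fake.
import random

FAKE_NEXT_TOKEN_PROBS = {
    ("the", "parents", "of", "Katy", "Perry", "are"): {"Mary": 0.7, "two": 0.2, "not": 0.1},
    ("the", "child", "of", "Mary", "Christine", "is"): {"a": 0.5, "unknown": 0.3, "Katy": 0.2},
}

def sample_next(context: tuple[str, ...], rng: random.Random) -> str:
    """Sample one 'token' from the (fake) conditional distribution p(token | context)."""
    probs = FAKE_NEXT_TOKEN_PROBS.get(context, {"[unk]": 1.0})
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)
# The forward and reverse questions are just two different contexts; nothing ties
# them together logically, so a high-probability answer to one implies nothing about the other.
print(sample_next(("the", "parents", "of", "Katy", "Perry", "are"), rng))
print(sample_next(("the", "child", "of", "Mary", "Christine", "is"), rng))
```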
The current AI discussion I’m reading online has eerie similarities to the debate about legalizing cannabis 15 years ago. One side praises it as a solution to all of society’s problems, while the other sees it as the devil’s lettuce. Unsurprisingly, both sides were wrong, and the same will probably apply to AI. It’ll likely turn out that the more dispassionate people in the middle, who are neither strongly for nor against it, will be the ones who had the most accurate view on it.
I believe that some of the people in the middle will have more accurate views on the subject, indeed. However, note that there are multiple ways to be in the "middle ground", and some are sillier than the extremes.
For example, consider the following views:
Both positions are middle grounds - and yet they can't be accurate at the same time.