Recent math benchmarks for large language models (LLMs) such as MathArena indicate that state-of-the-art reasoning models achieve impressive performance on mathematical competitions like AIME, with the leading model, o3-mini, achieving scores comparable to top human competitors. However, these bench...
"Notably, O3-MINI, despite being one of the best reasoning models, frequently
skipped essential proof steps by labeling them as "trivial", even when their validity was crucial."
I think a recent paper showed that LLMs lie about their thought process when asked to explain how they came to a certain conclusion. They use internal shortcuts to arrive at the answer intuitively, but then report that they used an algorithmic method.
It’s possible that the AI has figured out how to solve these things using a shortcut method, but is incapable of realizing its own thought path, so it just explains things in the way it’s been told to, missing some steps because it never actually did those steps.
I miss the days when the consensus reaction to Blake Lemoine was to point and laugh. Now the people anthropomorphizing linear algebra are being taken far too seriously.
LLMs are a lot more sophisticated than we initially thought; read the study yourself.
Essentially, they do not simply predict the next token: when scientists trace the activated neurons, they find that these models plan ahead throughout inference, and then lie about those plans when asked to say how they came to a conclusion.
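To be clear about what "simply predicting the next token" would even mean: strictly left-to-right, one-step-ahead decoding, as in this toy sketch (the bigram table and function are made up for illustration, not from any real model or the paper). The interpretability claim is that real models' activations encode information about tokens several steps ahead, unlike this loop:

```python
# Toy stand-in for autoregressive "next token" decoding.
# BIGRAMS is a hypothetical lookup table, not a real language model.
BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "down",
}

def generate(prompt: str, max_tokens: int = 3) -> list[str]:
    """Greedy decoding: each step conditions only on tokens emitted so far."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = BIGRAMS.get(tokens[-1])  # look one step ahead, nothing more
        if nxt is None:
            break
        tokens.append(nxt)
    return tokens

print(generate("the"))  # purely left-to-right, one token at a time
```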
You didn't link to the study; you linked to the PR release for the study. This and this are the papers linked in the blog post.
Note that the papers haven't been published anywhere other than on Anthropic's online journal. Also, what the papers are doing is essentially tea leaf reading. They take a look at the swill of tokens, point at some clusters, and say, "there's a dog!" or "that's a bird!" or "bitcoin is going up this year!". It's all rubbish dawg
Fair enough, you’re the only person with a reasonable argument, as nobody else can seem to do anything other than name calling.
Linking to the actual papers and pointing out they haven’t been published to a third party journal is far more productive than whatever anti-scientific bullshit the other commenters are doing.
We should be people of science, not reactionaries.
This isn't debate club or men of science hour, this is a forum for making fun of idiocy around technology. If you don't like that you can leave (or post a few more times for us to laugh at before you're banned).
As to the particular paper that got linked, we've seen people hyping LLMs misrepresent their research as much more exciting than it actually is (all the research advertising deceptive LLMs, for example) many, many times already, so most of us weren't going to waste time tracking down the actual paper (and not just the marketing release) to pick apart the methods. You could say (raises sunglasses) our priors on it being bullshit were too strong.
So, how does any of this relate to wanting to go back to an imagined status quo ante? (Yes, I refuse to use "reactionary" in any way other than to describe political movements. Conservatives do not can fruits.)
E: I see I got a downvote, oh god, do we have tankies?
Yeah, I know, it's a personal thing of mine. I have more of those; I think it isn't helpful to use overly general terms in specific cases, since then you cast too wide a net. I'm fun at parties. (It is also me poking fun at how the Soviets called everybody who disagreed with them a reactionary.)
> ask the commenter if it's a study or a self-interested blog post
> they don't understand
> pull out illustrated diagram explaining that something hosted exclusively on the website of the for-profit business all authors are affiliated with is not the same as a peer-reviewed study published in a real venue
Every time I read these posters, it's like those Everyman characters in Discworld who say some utterly lunatic shit and follow it up with "it's just [logical/natural/obvious/...]".
Read the paper; it's not simply predicting the next token. For instance, when writing a rhyming couplet, it first plans ahead on what the rhyming word will be, and then fills in the rest of the line.
The researchers were surprised by this too; they expected it to be the other way around.
Oh, sorry, I got so absorbed into reading the riveting material about features predicting state name tokens to predict state capital tokens I missed that we were quibbling over the word "next". Alright they can predict tokens out of order, too. Very impressive I guess.
pray forgive, fair poster, for the shame I have cast upon myself in the action of doubting the Most Serious Article so affine to yourself - clearly a person of taste and wit, and I deserve the ire and muck resultant
wait... wait, no, sorry! got those the wrong way around. happens all the time - guess I tried too hard to think like you.
No prophet worked for free, and they were always close to the rulers and to big money. The story repeats itself; only the times are different, and now we can instant message each other.