Recent math benchmarks for large language models (LLMs) such as MathArena indicate that state-of-the-art reasoning models achieve impressive performance on mathematical competitions like AIME, with the leading model, o3-mini, achieving scores comparable to top human competitors. However, these bench...
"Notably, O3-MINI, despite being one of the best reasoning models, frequently
skipped essential proof steps by labeling them as "trivial", even when their validity was crucial."
“Notably, O3-MINI, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as “trivial”, even when their validity was crucial.”
LLMs achieve the reasoning level of the average rationalist
It's a very human and annoying way of bullshitting. I took every opportunity to crush this habit out of undergrads: "If you say trivial, obvious, or clearly, that usually means you're making a mistake and you're avoiding thinking about it."
This is actually an accurate representation of most "gifted olympiad laureate attempting to solve a freshman CS problem on the blackboard" students I went to uni with.
Jumps to the front 5 seconds after the task is assigned, bluffs that the problem is trivial, tries to salvage their reasoning for 5 minutes when questioned by the tutor, discovers that the theorem they called trivial is actually false, and sits down having wasted 10 minutes of everyone's time.
I just remember a professor, after he had filled the board with proofs and math, saying 'the rest is trivial'. Not sure if it was a joke, as I found none of it trivial (and neither did the rest of the people taking the course).
I think a recent paper showed that LLMs lie about their thought process when asked to explain how they came to a certain conclusion. They use shortcuts internally to intuitively figure it out but then report that they used an algorithmic method.
It's possible that the AI has figured out how to solve these things using a shortcut method, but is incapable of recognizing its own thought path, so it just explains things the way it's been told to, missing some steps because it never actually did those steps.
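The study the parent comments seem to have in mind is presumably Anthropic's interpretability write-up on tracing model internals, whose addition example has roughly this shape. Here is a toy Python sketch of the contrast being described; the function names and the heuristic are made up for illustration and are not the model's actual mechanism. The traced behaviour reportedly looks like a coarse magnitude signal combined with a separate exact last-digit path, while the explanation the model writes out reads like textbook column addition with carries.

    # Toy contrast, purely illustrative; not the model's real mechanism.

    def explained_method(a: int, b: int) -> int:
        # The story the model tells when asked: right-to-left column addition with carries.
        result, carry, place = 0, 0, 1
        while a or b or carry:
            digit = a % 10 + b % 10 + carry
            result += (digit % 10) * place
            carry = digit // 10
            a, b, place = a // 10, b // 10, place * 10
        return result

    def shortcut_method(a: int, b: int) -> int:
        # Rough sketch of the traced behaviour: an exact ones-digit path plus a
        # coarse "about this big" signal, with no explicit carry chain.
        ones = (a % 10 + b % 10) % 10   # exact last-digit path
        estimate = a + b                # stand-in for a fuzzy magnitude signal
        base = estimate // 10 * 10
        # snap the estimate to the nearby value whose last digit matches the ones path
        return min((base - 10 + ones, base + ones, base + 10 + ones),
                   key=lambda x: abs(x - estimate))

    assert explained_method(36, 59) == shortcut_method(36, 59) == 95

Both land on 95, but only one of them ever performs the carry steps the explanation describes, which is the gap the thread is pointing at.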
I miss the days when the consensus reaction to Blake Lemoine was to point and laugh. Now the people anthropomorphizing linear algebra are being taken far too seriously.
LLMs are a lot more sophisticated than we initially thought; read the study yourself.
Essentially, they do not simply predict the next token. When scientists trace the activated neurons, they find that these models plan ahead throughout inference, and then lie about those plans when asked how they came to a conclusion.