OpenAI’s big pitch for its new o1 LLM, a.k.a. “Strawberry,” is that it goes through an actual reasoning process to answer you. The computer is alive! The paperclip apocalypse is imminent!
That’s OpenAI admitting that o1’s “chain of thought” is faked after the fact. The “chain of thought” does not show any internal processes of the LLM — o1 just returns something that looks a bit like a logical chain of reasoning.
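You can see the hiding right in the API. Here’s a minimal sketch, assuming the official OpenAI Python SDK and the o1-preview model name and `reasoning_tokens` usage field as documented at launch: you get billed for “reasoning tokens” you are never allowed to read.

```python
# Minimal sketch, assuming the official OpenAI Python SDK (`pip install openai`)
# and the o1-preview model as shipped at launch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)

# You get the polished final answer ...
print(resp.choices[0].message.content)

# ... and a count of the "reasoning tokens" you paid for but never see.
print(resp.usage.completion_tokens_details.reasoning_tokens)
```

So the raw chain-of-thought tokens exist and cost you money; what the UI shows you is a summary written up afterward.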
I think it’s fake “reasoning,” but I don’t know if (all of) OpenAI thinks that. They probably think hiding this data prevents the CoT training data from being extracted. I just don’t know how deep the stupid runs.