Microsoft co-authored paper suggests the regular use of gen-AI can leave users with a 'diminished skill for independent problem-solving' and at least one AI model seems to agree
Nope, it's just a black box's best guess as to what the reasoning should look like.
Sort of like how in an exam you give your best guess for an answer, then jot down some "working out" that you think looks sort-of correct, and scrape together enough marks to pass.
Now imagine you're not just trying to pass one question in one test in one subject, but one question out of millions of possible questions across hundreds of thousands of possible subjects, AND you experience time 5 million times slower than the examiner, AND you had 3 years (in examiner time) to practise your guesswork.
That's it. That's all this AI bullshit is doing. And people are racing to achieve the best monkey typewriter that requires the fewest bananas to work.
To agree with you in different words, I would argue you can compare it to a calculator. Without the reasoning, a calculator is basically useless: I can tell you that 1.1(22 * 12 * 3) = 871.2, but from that number alone it's impossible to know what it means or why it's important. An LLM works the same way. I give it an equation (the "prompt") and it does some math to give me a response, which is useless without context. It doesn't actually answer the words in the prompt; at best it does guesswork based on the statistical "value" of the text.
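To make that point concrete, here's a toy sketch (plain Python, not a real model; the tiny "corpus" and the function name are made up for illustration) that "answers" prompts purely by picking the most frequently seen next word from a word-count table, with no idea what any of the words mean:

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus" -- stands in for the billions of words a real model sees.
corpus = (
    "the rent is 871.2 dollars a year . "
    "the rent is due monday . "
    "the answer is 42 . "
).split()

# Count which word tends to follow which -- the entire "model" is just these counts.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def complete(prompt: str, max_words: int = 6) -> str:
    """Extend the prompt by repeatedly picking the most common next word.

    There is no understanding of rent, dollars or answers here, only
    arithmetic over counts -- which is the point of the analogy above.
    """
    words = prompt.split()
    for _ in range(max_words):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the rent"))  # e.g. "the rent is 871.2 dollars a year ." -- fluent-looking, meaning-free
```

A real LLM replaces the word counts with billions of learned weights, but the character of the operation is the same: numbers in, numbers out, and any meaning is supplied by the reader.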