It's funny how people are always quick to point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (the "intelligence" part of AI, for starters).
People who think that LLMs having trouble with these questions is evidence one way or another about how good or bad LLMs are just don't understand tokenization. This is not a symptom of some deep, big-picture problem with LLMs; it's a curious quirk, much like compression artifacts in a JPEG image, and it doesn't really matter for the vast majority of applications.
You may hate AI, but that doesn't excuse being ignorant about how it works.
I get the meme aspect of this. But just to be clear, it was never fair to judge LLMs on this specifically. The LLM doesn't even see the letters in the words, because every word is broken down into tokens, which are numbers. I suppose with a big enough corpus it might eventually extrapolate which words contain which letters from texts describing those words, but that shouldn't normally be expected.
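To make that concrete, here's a rough sketch using OpenAI's tiktoken library (an assumption on my part; the exact splits depend on the encoding, so the output is illustrative only). The point is that the model receives integer IDs for sub-word chunks, not a sequence of letters:

```python
# Minimal sketch of what the model actually "sees" (assumes tiktoken is
# installed; token splits vary by encoding, so exact output is illustrative).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")     # encoding used by GPT-4-class models

word = "strawberry"
token_ids = enc.encode(word)                   # a list of integers, not letters
pieces = [enc.decode([t]) for t in token_ids]  # the sub-word chunks those IDs map to

print(token_ids)  # integer IDs -- no character-level structure is visible
print(pieces)     # sub-word pieces, e.g. something like "str" / "aw" / "berry"
```

Counting the r's would require the model to have memorized the spelling behind each of those chunks, since the characters themselves are never part of its input.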
When we see an LLM struggling to say which letters are in the tokens it emits, or failing to understand a word with spaces inserted between each letter, we should compare it to a human struggling to read a word written in IPA (/sʌtʃ əz ðɪs/) even though they would understand the same word spoken aloud perfectly well.
The AI was simply trained to answer "3" to this particular question.
Wait until the AI gets burned on a different question. Skeptics will rightfully use it to criticize LLMs for being mere stochastic parrots, until LLM developers teach their models to answer it correctly; then the AI bros will use it as proof that the models are becoming "more and more human-like".