We use words to describe our thoughts and understanding. An LLM orders words by following algorithms that predict what the user wants to hear; it doesn't understand the meaning or implications of the words it returns.
It can tell you the definition of an apple, or how many people eat apples, or whatever apple data it was trained on, but it has no thoughts of its own about apples.
That's the point OOP was making. People confuse ordering words with understanding them. It has no understanding of anything. It's a large language model; it's not capable of independent thought.
I think the question of what "understanding" is will become important soon, if it isn't already. Most people don't really understand as much as you might think we do. An apple, for example, has properties like flavor, texture, appearance, weight, and firmness; it's also related to other things, like trees, and it falls into categories like food or fruit. A model can store an apple's relationships to other things and its properties. The model could probably also be given "personal preferences," like a preferred flavor profile and texture profile, and use those to estimate whether apples would match the preferences and give its reasoning.
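Just to make that concrete, here's a toy sketch of the idea: represent an "apple" as a bag of properties and relations, and score it against a preference profile while keeping track of which properties matched (the "reasoning"). All the names, values, and the scoring rule are made up for illustration; this isn't how any real model stores knowledge.

```python
# Hypothetical item: properties plus relations to other concepts.
apple = {
    "properties": {"flavor": "sweet-tart", "texture": "crisp",
                   "weight_g": 180, "firmness": "firm"},
    "relations": {"grows_on": "tree", "categories": ["food", "fruit"]},
}

# Hypothetical "personal preferences": desired property values.
preferences = {"flavor": "sweet-tart", "texture": "crisp"}

def preference_score(item, prefs):
    """Return the fraction of preferred properties the item matches,
    plus the list of matching properties (the 'reasons')."""
    props = item["properties"]
    matches = [key for key, wanted in prefs.items()
               if props.get(key) == wanted]
    return len(matches) / len(prefs), matches

score, reasons = preference_score(apple, preferences)
print(f"score={score:.2f}, matched on: {reasons}")
# → score=1.00, matched on: ['flavor', 'texture']
```

The point is only that "relationships plus properties plus a matching rule" already gets you estimates with attached reasons, without anything we'd call thought.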
Unique thought is hard to define, and there is probably a way to have a computer do something similar enough to be indistinguishable, though probably not through simple LLMs. Maybe you could use an LLM as a way to convert internal "ideas" to external words and external words to internal "ideas," which would then be processed logically, probably using massive amounts of reference material, simulation, computer algebra, music theory, internal hypervisors, or some combination of other models.
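The architecture being hand-waved at could be skeletonized like this: one layer parses words into a structured "idea," a separate core reasons over it against reference material, and another layer verbalizes the result. Every function here is a trivial stand-in for a component that would really be a model or a solver; it only shows the shape of the loop, not an implementation.

```python
def words_to_idea(text):
    # Stand-in for an LLM parsing text into a structured internal form.
    subject, _, prop = text.partition(" is ")
    return {"subject": subject, "property": prop}

def reason(idea, knowledge):
    # Stand-in for the logical core: look up what the property implies
    # in some (here, tiny and hypothetical) body of reference material.
    return knowledge.get(idea["property"], "unknown")

def idea_to_words(idea, conclusion):
    # Stand-in for an LLM verbalizing the internal result back to words.
    return f"{idea['subject']} is probably {conclusion}."

knowledge = {"a fruit": "edible"}
idea = words_to_idea("an apple is a fruit")
print(idea_to_words(idea, reason(idea, knowledge)))
# → an apple is probably edible.
```

The interesting part isn't any of these toy functions but the separation itself: the language layers only translate, and whatever "understanding" there is would live in the middle.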