There's an optimistic and a cynical perspective there.
Optimism says yes.
Cynicism says these LLMs are just statistical generation models that produce outputs statistically similar to their training data, conditioned on a prompt.
But they’re really good at spitting out JavaScript code that works the first time you run it. Of all the languages I’ve tried an LLM assistant with, the JavaScript output is the best. I’m guessing that’s because it had almost every working webpage on the internet to learn from.
I mention this because how is being able to construct working code from a plain-language description not a type of intelligence? Perhaps a narrow form, but the proof is in the pudding: it outputs working code that fits an arbitrary purpose.
Just bringing that up for discussion. I don’t really care whether LLMs are ‘intelligent’ or not, but the utility is obvious. Even if the LLM isn’t smart, it still speeds progress by acting as an extension of my own so-called intelligence.