I was lucky enough to get in on my company's beta test for Copilot.
When I hear people say it's bad, all that tells me is that they are either completely ignorant and have never really used it, or they aren't good at learning how to use new tools.
The example shown is setting a timer, where Copilot suggests the right value. Contextually, it's just bad autocomplete.
In practice, ChatGPT-4 is incapable of producing code to my coding standards. Edit: to clarify, it's incapable of doing that quickly enough to actually save me any time.
The example shown was specifically selected because it's funny, not because it's representative.
The fact that you called the tool "ChatGPT-4" suggests you're not experienced with Copilot. They're not the same thing, even if they use similar LLMs as a component.
That paragraph is on its own because it's a different topic. In this case I was drawing on my own experience experimenting with ChatGPT-4 to explain why I won't be using it any time soon.
That's what I need most of the time, though. I don't see these AI things as replacing programmers or writing large chunks of code. I just see them as an improvement over the autocompletion/IntelliSense features we're all using already.
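To illustrate the "better autocomplete" use case being described: given a type definition, the assistant can often fill in routine boilerplate from the surrounding context. This is a hypothetical sketch of that kind of completion, not actual Copilot output:

```python
from dataclasses import dataclass


@dataclass
class Timer:
    label: str
    seconds: int

    # A completion engine can plausibly infer this whole method body
    # from the field names above -- the kind of suggestion the comment
    # describes as an improvement over IntelliSense, not as a
    # replacement for the programmer.
    def as_dict(self) -> dict:
        return {"label": self.label, "seconds": self.seconds}


t = Timer("tea", 180)
print(t.as_dict())
```

The point is that the human still writes the design (the class and its fields); the tool just saves keystrokes on the mechanical parts.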