Developer survey shows trust in AI coding tools is falling as usage rises
Usage is rising because corporate executives started getting kickbacks and thinking they could cut staff by implementing it. But developers who have actually had to use it have realized that it can be useful in a few scenarios but requires a ton of review of anything it writes, because it rarely understands context and often makes subtle mistakes that are really hard to debug. So anyone trying to use it for a language or system they don't understand well is going to have a hard time.
because it rarely understands context
It never understands context.
And it cannot understand context because it does not think; it's just an expensive prediction tool.
Counterpoint: they want number go up.
Pro Tip: it doesn't even matter if number go up, when they know how to suck up to even higher-ups.
Executives are getting kickbacks? I've gotta do some research here.
That's not true. If you give it context, it understands and retains context quite well. The thing is that you can't just say "write code for me" and expect it to work.
Also, certain models are better at certain tasks than others.
This is a true statement.
There's relatively little debate among developers that the tools are or ought to be useful,
Yes there is. No one wants to listen to us. I've had 3 levels of people above me ask me how I've incorporated AI into my workflow. I don't get any pushback because my effectiveness is well known, yet the top down edict that everyone else use these shitty tools continues unabated.
Where I work, my skip-levels have started debating on whether they want to consider if an engineer uses AI as a factor for reviews, pay raises and incentives, and are tracking who uses it by way of licenses.
It's a bit ridiculous IMO because they're essentially saying "we don't care if slop makes it into the code base, so long as you are using AI, you will remain gainfully employed."
I've seen a lot of stupid shit over my career but this AI zealotry just takes the cake.
Everyone is so convinced these tools will make software get made faster, but I'm not even convinced that it gives even a modest benefit. For me personally they definitely don't, and it seems to lead junior devs horribly astray as often as it helps speed them up.
It feels like I'm not even looking at the same reality as everyone else at this point.
I mean "ought to be useful," sure that would be nice. They ain't, but perhaps "ought to be."
AI coding tools are a great way to generate boilerplate, blat out repetitive structures and help with blank page syndrome. None of those mean you can ignore checking what they generate. They fit into the same niche as Stackoverflow - you can yoink all the 3rd party code snippets you want, but it's going to be some work to get them running well even if you understand what they're doing, and if you neglect this step hoo boy can it come back to bite you!
Great, a ridiculously expensive lorem ipsum generator.
LLMs will always fail to help developers because reviewing is harder than writing. To review code effectively you must know the how and why of the implementation in front of you, and LLMs will fail to provide you with the necessary context. On top of that, a good review evaluates the code in relation to the project and other goals the LLM will not be able to see.
The only use for LLM in coding is as an alternative search bar for stackoverflow
The only use for LLM in coding is as an alternative search bar for stackoverflow
I'd argue it can also be useful as a form of autocomplete, or writing whatever boilerplate code; that still isn't outsourcing your thinking to the text predictor.
There was trust?
I just see this as future job security.
Oh, AI fucked your codebase? Well, it'll take twice as much time to undo it all and fix it, my rate is $150/hr. Thanks.
That's an awfully long time to do git checkout HEAD~10
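For what it's worth, the one-command rollback above really does work in a scratch repo, though `git checkout HEAD~10` only detaches HEAD; `git reset --hard HEAD~10` is what actually moves the branch back. A minimal sketch (throwaway repo, hypothetical file and commit names, assuming the bad commits haven't been pushed):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name dev

# One good commit, then ten "AI" commits on top of it.
echo good > app.txt
git add app.txt
git commit -qm "working version"
for i in $(seq 1 10); do
  echo "slop $i" >> app.txt
  git commit -qam "ai commit $i"
done

# Rewind the branch ten commits; the working tree returns
# to the last good state. (Destructive: uncommitted and
# rewound work is gone, so only do this on unpushed history.)
git reset --hard -q HEAD~10
cat app.txt
```

`git revert` would be the safer choice on shared history, since it adds new commits that undo the old ones instead of rewriting the branch.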
Shhhh, they don't need to know this.
The miracle slop machine miraculously produces garbage.
Shocker.
Geez louise, maybe it's not intelligence after all. I think you'd need sentience to apply the word intelligence. Those wacko marketing people.
There is an amazing quirk of the LLM: whenever I don't know about the topic and refuse to google, it gives me some useful answers, but if I ask it something I know about, the answers are always stupid and wrong. I asked a computer about it, but it said that everything is normal and I should buy a better subscription, so there's that.
It’s not a quirk of LLMs, it’s a quirk of human cognitive biases.
See: Gell-Mann amnesia effect.
That's the joke, yes.