Where's the Shovelware? Why AI Coding Claims Don't Add Up

People are spending all this time trying to get good at prompting and feeling bad because they’re failing.
This whole thing is bullshit.
So if you're a developer feeling pressured to adopt these tools — by your manager, your peers, or the general industry hysteria — trust your gut. If these tools feel clunky, if they're slowing you down, if you're confused how other people can be so productive, you're not broken. The data backs up what you're experiencing. You're not falling behind by sticking with what you know works.
AI is not the first technology to do this to people. I've been a software engineer for nearly 20 years now, and I've seen this happen with other technologies: people convinced it's making them super productive, others not getting the same gains and internalizing it, thinking they're to blame rather than the software. The Java ecosystem has been full of shitty technologies like that for most of the time Java has existed. Spring is probably one of the most harmful examples.
The best use I've found for "AI" in coding is making/updating readme files. If it can save me 30 minutes on tech debt I don't have time for anyway, that's one positive use case.
I've given it old and new code (along with the existing readme, if one exists), then asked it to provide markdown for an updated readme with a changelog. Using that as a jumping-off point, I'd proofread it and add a few sentences or delete one or two things.
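Roughly what I do, as a sketch. The file paths and prompt wording here are made-up examples; I just paste the printed prompt into the chatbot by hand:

    # Sketch: assemble a README-update prompt from old code, new code,
    # and the existing readme. File paths are hypothetical examples.
    from pathlib import Path

    old_code = Path("old/app.py").read_text()
    new_code = Path("app.py").read_text()
    readme = Path("README.md").read_text() if Path("README.md").exists() else ""

    prompt = (
        "Here is the old version of the code:\n" + old_code +
        "\n\nHere is the new version:\n" + new_code +
        "\n\nHere is the existing readme (may be empty):\n" + readme +
        "\n\nProvide markdown for an updated readme with a changelog."
    )
    print(prompt)  # paste this into the chatbot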
Who'd've thunk large language models would be good at language and not coding.
(I know, programming has languages too but there is far less available data on those languages used "talking" to each other.)
I'm learning coding right now in college and my professors are making sure we know how to effectively work with LLMs when coding.
It should never be used for generating code for anything other than examples. One should always remember that it will answer what is asked and nothing more, and account for this when adapting generated code to your own uses. It's best to treat it like a frontend for parsing documentation and translating it into simpler language, or for translating a function from one language to another.
Fantastic approach.
“Well, it’s all website-driven, and people don’t really care about domain names these days; it’s all subdomains on sites like Vercel.” Shut up. People love their ego domains.
Based
I like the article, but that one almost seems like a strawman. Is anyone actually arguing that?
Probably not. Seems more like he's pre-emptively anticipating counter-arguments. Which is fair. He's the one who put up a chart of TLD registrations as evidence of his point.
I have no doubt it 10xs developers who could produce 0 code without it
10×s developers who could produce 0 code without it
Let me see; ten times nothin', add nothin', carry the nothin'…
AI has been good at auto-completing things for me, but it almost always suggests things I already knew without even web searching. If I try to get advice about things I know nothing about (code-wise), it's a really bad teacher: it skips steps and makes suggestions that don't work at all.
I'm guessing there's been no software explosion because AI is really only good for the "last 20%" of effort and can't really breach 51% where it's doing the majority of the driving.
Apropos to use the term "driving" I feel. Autonomous vehicles have largely been successful because the goal is clear (i.e. "take me to the grocery store") and there's a finite number of paths to reach the goal (no off-roading allowed). In programming, even if the goal is crystal clear, there really are an infinite number of solutions. If the driver (i.e. developer) doesn't have a clear path and vision for the solution then AI will only add noise and throw you off track.
Also it's just wrong a lot when I ask things I don't know.
"Use private_key
parameter instead of pkey
."
Alright, cool, good tip.
"Unknown parameter private_key
."
jim face
That’s a great read. I’ve used AI a few times, and I’ve never used any of the code it’s spat out. The only thing it’s helped me with was telling me how to solve a bug when I knew what the problem was, but didn’t know the right RFC the solution was in (RFC 2047, about encoding non ASCII text in email headers, if you’re interested).
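For the curious, RFC 2047 "encoded-words" are what let you put non-ASCII text in headers; a quick sketch using Python's standard email.header module (the sample string is just an example):

    # RFC 2047 encoding of a non-ASCII header value.
    from email.header import Header

    encoded = Header("Grüße", "utf-8").encode()
    print(encoded)  # an encoded-word, something like =?utf-8?b?...?=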
One big issue I have with AI is how utterly reliant on RegEx it is. Everything can be solved with a RegEx! Even if it’s a terrible solution with horrendous performance implications, just throw more RegEx at it! Need to parse HTML? Guess what! RegEx!
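For what it's worth, a real parser is barely more code than the regex approach; a minimal sketch using Python's built-in html.parser (extracting links is just my example task):

    # Parse HTML properly instead of regexing it.
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            # Collect href attributes from anchor tags.
            if tag == "a":
                self.links.extend(value for name, value in attrs if name == "href")

    parser = LinkExtractor()
    parser.feed('<p>See <a href="https://example.com">this</a>.</p>')
    print(parser.links)  # ['https://example.com']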
I've been using the AI to help me with some beginner-level Rust compilation checks recently.
I never once got an accurate solution, but half the time it gave me a decent enough keyword to google or a broken pattern to fix myself. The other half of the time it kept giving me back my own code, proudly telling me it had fixed it.
Don't worry though, AGI is right around the corner. Just one more trillion dollars bro. One trillion and we'll provide untold value to the shareholders bro. Trust me bro.
The RegEx thing is so true in my experience. I started working on a Neovim plugin to make editing injected code easier, and instead of suggesting Treesitter integration it wanted to create its own parser using RegEx...
it's great at interpreting stack traces/logs/crash reports
Have you tried telling it not to suggest a regex solution?
I’ve never seen it recommend a solution using regex. And I’ve had it provide a lot of useful code. Perhaps you need to look into prompt engineering training?
AI coding assistants have made my life a lot easier. I've created multiple personal projects in a day that would've taken me multiple days of figuring out frontend stuff.
It's also helped me in my work, especially in refactoring. I don't know how y'all are using them, but I get a lot of efficient use out of them.
How dare you! Can't you see there's a circle jerk in progress?
I've had "success" with using them for small one-off projects where I don't care too much about correctness, efficiency, or maintainability. I've tried using various AI tools (Copilot, Cursor agents, etc) for more serious projects where I do care about those things, and it was counter-productive (as studies have shown).
Hmm, I was curious if ChatGPT still gives inefficient code when asking it to write quicksort in Python, and it still does:
    def quicksort(arr):
        if len(arr) <= 1:  # Base case
            return arr
        pivot = arr[len(arr) // 2]  # Choose middle element as pivot
        left = [x for x in arr if x < pivot]  # Elements less than pivot
        middle = [x for x in arr if x == pivot]  # Elements equal to pivot
        right = [x for x in arr if x > pivot]  # Elements greater than pivot
        return quicksort(left) + middle + quicksort(right)
That's not really quicksort. Because it allocates new lists at every level of recursion instead of partitioning in place, I believe it has a memory complexity of O(n log n) in the average case and O(n^2) in the worst case. If AI does stuff like this on basic, well-known algorithms, it's likely going to do inefficient or wrong stuff in other places. If it's writing something someone is not familiar with, they may not catch the problems/errors. If it's writing something someone is familiar with, it's likely faster for them to write it themselves rather than carefully review the code it generates.
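For comparison, a minimal sketch of an actual in-place quicksort (Lomuto partition); only the recursion stack uses extra memory, O(log n) on average:

    def quicksort_inplace(arr, lo=0, hi=None):
        # Sorts arr in place; no new lists are allocated.
        if hi is None:
            hi = len(arr) - 1
        if lo >= hi:
            return
        pivot = arr[hi]  # Lomuto partition: last element as pivot
        i = lo
        for j in range(lo, hi):
            if arr[j] < pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[hi] = arr[hi], arr[i]  # Move pivot to its final position
        quicksort_inplace(arr, lo, i - 1)
        quicksort_inplace(arr, i + 1, hi)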
Important metrics.
Good to have some hard data!
I think it's both true that you can't really write an entire app with just AI... At least not easily.
But also I don't buy that AI doesn't make me more productive. I'm not allowed to use it on my actual code but I have used it several times to generate one-off scripts and visualisations and for those it can easily save hours. They aren't software I need to edit myself though.
Pretty much everyone I've talked to about this says the same thing. LLMs are useful for one-off scripts or quickly generating boilerplate. It just turns out that those tasks don't make up the majority of programming work unless you are in a bullshit job anyway.
We aren't yet great at knowing when an LLM will save time and when it will inflate it.
How dare you!
Appreciate you had the awareness to delete the comment before we got around to the report. It was still a breach of the instance's Code of Conduct (1.1, 3.2) and repeated breaches may result in a temporary ban.
ELI5? What's going on here? Also, the substack author is Mike Judge?
If AI coding is causing such a revolution, why are we not seeing a sharp increase in code production?
(Backed up with data)
It’s too neck-and-neck.
Do you feel that turducken-style programming and differences in the current models' ability to handle declarative vs imperative impacts realization of productivity gains?
A great read. I have recently given up on Copilot completion completely, and I must say I'm not slower? Like, the energy I invested in proofreading the completed code could just as well be spent learning more editor-fu to be more proficient at editing code.
Also, even the completion sucks; it introduced a lot of bugs in my code that I only noticed much later, because the code looked "okay-ish", and then I had to spend a lot of time manually testing.
And as for prompting, I don't understand why people use chatbots to ask them anything? I have never received a correct answer, on any topic.