You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers. AI code is absolutely up to production quality! Also, you’re all…
I got an AI PR in one of my projects once. It re-implemented a feature that already existed. It had a bug that did not exist in the already-existing feature. It placed the setting for activating that new feature right after the setting for activating the already-existing feature.
Where is the good AI written code? Where is the good AI written writing? Where is the good AI art?
None of it exists because Generative Transformers are not AI, and they are not suited to these tasks. It has been almost a fucking decade of this wave of nonsense. The credulity people have for this garbage makes my eyes bleed.
Coding is hard, and it's also intimidating for non-coders. I always used to look at coders as a different kind of human, a special breed. Just like some people glaze over when you bring up math concepts — they're otherwise very intelligent and artistic, but they can't bridge that gap when you bring up even algebra. Well, if you are one of those people who wants to learn coding, it's a huge gap, and the LLMs can literally explain everything to you step by step like you are 5. Learning to code is so much easier now; talking to an always-helpful LLM is so much better than forums or Stack Overflow. Maybe it will create millions of crappy coders, but some of them will get better, some will get great. But the LLMs will make it possible for more people to learn, which means that my crypto scam now has the chance to flourish.
The general comments that Ben received were that experienced developers can use AI for coding with positive results because they know what they’re doing. But AI coding gives awful results when it’s used by an inexperienced developer. Which is what we knew already.
That should be a big warning sign that the next generation of developers are not going to be very good. If they're waist deep in AI slop, they're only going to learn how to deal with AI slop.
As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).
What I'm feeling after reading that must be what artists feel like when AI slop proponents tell them "we're making art accessible".
The headlines said that 30% of code at Microsoft was AI now! Huge if true!
Something like MS word has like 20-50 million lines of code. MS altogether probably has like a billion lines of code. 30% of that being AI generated is infeasible given the timeframe. People just ate this shit up. AI grifting is so fucking easy.
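To put numbers on that claim: here's a back-of-envelope sketch. Every figure is an assumption (the codebase size, the timeframe — here a guessed two years of Copilot-era development), not a measured value:

```python
# Rough sanity check on the "30% of Microsoft's code is AI" headline.
# All figures below are assumptions, not measured values.

total_lines = 1_000_000_000        # assumed ~1 billion lines across Microsoft
ai_share = 0.30                    # the claimed AI-generated share
ai_lines = int(total_lines * ai_share)

# Assume the claim covers roughly two years of Copilot-era development.
days = 2 * 365
lines_per_day = ai_lines // days

print(f"AI lines implied: {ai_lines:,}")
print(f"That's roughly {lines_per_day:,} AI-written lines merged per day, every day")
```

Even with these generous assumptions, that's hundreds of thousands of AI-written lines landing in production every single day — which is the sense in which the headline number is infeasible.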
My current theory is that the main difference between open source and closed source when it comes to the adoption of “AI” tools is that open source projects generally have to ship working code, whereas closed source only needs to ship code that runs.
I’ve heard so many examples of closed source projects that get shipped but don’t actually work for the business. And too many examples of broken closed source projects that are replacing legacy code that was both working just fine and genuinely secure. Pure novelty-seeking.
I use GPT to give me snippets of code (not in my IDE — I use Neovim, btw), check my stuff for typos/logical errors, suggest solutions to some problems, and help with debugging, and honestly I kinda love it. I was learning programming on my own in the 2010s, and this is so much better than crawling over wikis/Stack Overflow. At least for me, now, when I already have an intuition for what good code is.
Anyone who says LLMs will replace programmers in 1-2 years is either stupid or a grifter.
Had a presentation where they told us they were going to show us how AI can automate project creation. In the demo, after several attempts at using different prompts, failing and trying to fix it manually, they gave up.
I don't think it's entirely useless as it is. It's just that people have created a hammer they know gives something useful, and have stuck with iterative improvements that have a lot of compensation beneath the engine. It's artificial because it is being developed to artificially fulfill prompts, which it does succeed at.
When people do develop true intelligence-on-demand, you'll know because you will lose your job, not simply have another tool at your disposal. The prompts and flow of conversations people pay to submit to the training is really helping advance the research into their replacements.
I'm a pretty big proponent of FOSS AI, but none of the models I've ever used are good enough to work without a human treating it like a tool to automate small tasks. In my workflow there is no difference between LLMs and fucking grep for me.
People who think AI codes well are shit at their job
This is the most entertaining thing I've read this month.
I treat AI as a new intern who doesn't know how to code well. You need to code-review everything, but it's good for fast generation. Just don't trust it for more than a couple of lines at a time.
If AI code was great, and empowered non-programmers, then open source projects should have already committed hundreds of thousands of updates. We should have new software releases daily.
We submit copilot assisted code all the time. Like every day. I'm not even sure how you'd identify ours. Looks almost exactly the same. Just less work.