I’m not a developer, and I don’t work in a technology field anymore, but I used to. I know Linux sysadmin work, security, and a very small amount of Python.
ChatGPT has allowed me to “write” code that I use every day in my business. So, I’m not a developer, but it lets me do things I otherwise would not be able to do.
My business is still too small to even consider hiring a developer, so it’s allowing me to grow my business.
I’m just writing this to point out that “devs” are not the only people using ChatGPT to write code.
ChatGPT and other LLMs are fantastic technical task assistants but, and this is a big but, you need to treat their work the same way you'd treat work from a new intern. Verify the output before you trust it.
Jumping on the new shiny thing and relying on it over all the other tried and tested tools is the core recurring mistake in development.
What? This fantastical scenario has never happened. Name one other new development tool that has led to the sort of issues you seem to think will happen. Debuggers? Auto-complete? Syntax highlighting? Better build tooling?
I never said a single tool causes issues. I said abandoning existing tools to only use the new thing is a problem.
And I said - when the hell has that ever happened? Ever?
See people who insist on only using the newest frameworks, to the point of rebuilding projects whenever one comes out.
See people who fixate on a single design pattern and insist on using it in every application.
I'm talking about development tools, not platforms and libraries. An LLM is not replacing a framework. It's not replacing... anything, really.