That's because the main peddlers are the CEOs/C-suites of these tech companies, and the customers aren't people like you or me, it's other corporate heads. In the case of Palantir, it would be the government.
AI as it exists today is only effective if used sparingly and cautiously by someone with domain knowledge who can identify the tasks (usually menial ones) that don't need a human touch.
Some good examples from the bookkeeping/accounting industry are automating the matching of payments to invoices and using AI to extract and process invoices.
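For a sense of what that payment-to-invoice matching automates, here's a minimal sketch of the rule-based core such tools are built around. All the names here are hypothetical, and real bookkeeping software layers on fuzzy matching, partial payments, and human review queues:

```python
# Hypothetical sketch of automated payment-to-invoice matching.
from dataclasses import dataclass

@dataclass
class Invoice:
    number: str
    amount: float

@dataclass
class Payment:
    reference: str  # free-text reference typed by the payer
    amount: float

def match_payments(invoices, payments):
    """Pair each payment with an open invoice by reference + amount."""
    open_invoices = {inv.number: inv for inv in invoices}
    matches, unmatched = [], []
    for pay in payments:
        # Find an open invoice whose number appears in the payment
        # reference and whose amount agrees to the cent.
        hit = next(
            (num for num, inv in open_invoices.items()
             if num in pay.reference and abs(inv.amount - pay.amount) < 0.01),
            None,
        )
        if hit:
            matches.append((pay, open_invoices.pop(hit)))
        else:
            unmatched.append(pay)  # left for a human to resolve
    return matches, unmatched

invoices = [Invoice("INV-1001", 250.0), Invoice("INV-1002", 99.95)]
payments = [Payment("payment for INV-1001", 250.0),
            Payment("unknown transfer", 42.0)]
matches, unmatched = match_payments(invoices, payments)
```

The point of the comment above is exactly the `unmatched` list: the AI/automation clears the menial bulk, and a bookkeeper with domain knowledge handles the leftovers.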
This 1000x. I am a PHP developer, and I found out about two months ago that the AI Assistant is included in my JetBrains subscription (the All Products pack; it was a separate thing before). I also recently found out about Junie, their AI agent that has deep thinking (or whatever the hell it is called). I tried it the same day to refactor part of my tests that had to be migrated to stop using a deprecated function call.
To my surprise, it required only very minor changes, and what would've taken me about 3 hours was done in half an hour. What I also liked was that it actually asked if it could run a terminal command to verify the test results, and it went back and fixed a broken test or two.
Finally I have faith in AI being useful to programmers.
For a test, I took our dev exam (for potential candidates) and just sent it over to see what it would do based only on the document. Besides a few mistakes, it even used modern tools (like the PSR standards) rather than some 5-year-old stuff, and implemented core systems by itself using well-known interfaces (from said PSRs). I asked it to change the dependency injection to use Symfony DI instead of the self-made thing, and it worked flawlessly.
Of course, the code has to be reviewed or heavily specified to make sure it does what it is told to, but all in all it doesn't look like just a gimmick anymore.
Absolutely, this matches my experience. I think this is also the experience of most coders who willingly use AI. I feel bad for the people who are forced to use it by their companies. And those who are laid off because of C-levels who think AI is capable of replacing an experienced coder.
As someone who has been a consultant/freelance dev for over 20 years now, this is true. Lately I've been getting offers and contacts from places wanting me to essentially clean up the mess from LLMs/AI.
A lot of it is pretty bad. It's a mess. But like I said, I've been at it for a while and I've seen this before, back when companies were offshoring anything and everything to India, and surprise, surprise, they didn't learn anything. It's literally the exact same thing. Instead of an Indian guy who claims to know everything and will work for peanuts, it's AI pretty much stating the same shit.
I've been getting so many requests for gigs that I've been hitting up random out-of-work devs on LinkedIn in my city and referring the jobs to them. I've burned through all my contacts, so now I'm just reaching out to absolute strangers to get them work.
Yes, it's that bad (well, bad for companies; it's fantastic for developers).
They learned that by the time all of their shitty decisions ruin everything, they'll be able to bail with their golden parachute while everyone else has to deal with the fallout.
I imagine you aren't talking about large companies that just let ai loose in their code base. Are these like companies that fired half their staff and realized llms couldn't make up for the difference, or small companies that tried to make new apps without a proper team and came up short?
The BBC report cited mainly focused on the marketing industry, with the people fixing the mistakes being the copywriters. This gives a strong Mad Men vibe, with the "old-fashioned" copywriters and their tension with market research.
Well what ends up happening is some company will have a CEO.
He'll make all the stupid decisions. But they're only stupid from everybody ELSE'S perspective.
From his perspective, he uses AI and tanks the company's future chasing large short-term stock gains. Then he gives himself a huge bonus, leaves the company, gets hired somewhere else, and gets to say "See how that company is failing without me? That's because I bring value to the brand."
So he gets hired at the neeeext place, meanwhile that first company is failing because of the actions of a CEO no longer employed there, who bailed because he knew what was coming.
These actions aren't stupid. They're plotted corruption for the benefit of one.
I used to work with a supplier that hired a former Monsanto executive as their CEO. When his first agenda came out, I told their sales team he was an idiot and to have fun looking for a new job in a few months.
The CEO bailed after 2 years to start his own "consulting business."
1 year later the company had lost 75% of their market share and was laying off people left and right. They are still barely afloat.
After a couple years of "consulting", the CEO went to another company in 2023. He didn't bounce fast enough and got caught on this one. He was fired 2 weeks ago and the company shut their doors, except for a handful of staff to facilitate the fire sale of the company's assets.
What's really stupid about this cycle is that some of these fail-upward executives genuinely believe the crap they're spewing. Weirdly, I think I respect the grifting executives more.
Edit: by grifting executives, I mean the ones who participate in that cycle you describe, and are aware of the harms they cause in their wake, but don't care because they've gotten good at knowing when to skip out
It's true, although the smart companies aren't laying off workers in the first place, because they're treating AI as a tool to enhance their productivity rather than a tool to replace them.
I don’t know if it even helps with productivity that much. A lot of bosses think developers’ entire job is just churning out code when it’s actually like 50% coding and 50% listening to stakeholders, planning, collaborating with designers, etc. I mean, it’s fine for a quick Python script or whatever but that might save an experienced developer 20 minutes max.
And if you “write” me an email using Chat GPT and I just read a summary, what is the fucking point? All the nuance is lost. Specialized A.I. is great! I’m all for it combing through giant astronomy data sets or protein folding and stuff like that. But I don’t know that I’ve seen generative A.I. without a specific focus increase productivity very much.
Productivity will go up, wages will remain the same, and no additional time off will be given to employees. They’ll merely be required to produce 4x as much and compensation will not increase to match.
It seems the point of all these machines and automation isn’t to make our individual lives easier and more prosperous, but instead to increase and maximize shareholder value.
If your job is just doing a lot of trivial code that just gets used once, yeah I can see it improving productivity.
If your job is more tackling the least trivial challenges and constantly needing to understand the edge cases or uncharted waters of the framework/tool/language, it’s completely useless.
This is why you get a lot of newbies loving AI and a lot of seniors saying it’s counter productive.
It's technically closer to Schrödinger's truth. It goes both ways depending on "when" you look at it. Publicly traded companies are more or less expected to adopt AI as the next "cheap" labor... so long as it is the cheapest of any option. See the very related: slave labor and its variants, child labor, and "outsourcing" to "less developed" countries.
The problem is they need to dance between this experimental technology and... having a publicly "functional" company. The line demands you cut costs but also increase service. So basically overcorrection hell. Mass hirings into mass firings, every quarter or two depending on the company... until one of two things becomes true: AI works, or AI is no longer the cheapest solution. I imagine that will rubber-band for quite some time. (SaaS shit like Oracle, etc.)
In short - I'd not expect this to be more than a brief reprieve from a rapidly drying well. Take advantage of it for now - but I'd recommend not expecting it to remain.
The line demands you cut costs but also increase service.
The line demands it go up. It doesn't care how you get there. In many cases, decreasing service while also cutting costs is the way to do it so long as line goes up.
Yup. But the same goes for developers that go way too fast when setting up a project or library. 2-3 months in and everything is a mess: weird function names, all one-letter vars, no inversion of control, hardcoded things, etc. Good luck fixing it.
This is what I fight against every goddamn day, and I get yelled at for fighting against it, but I’m not going to stop. I want to build shit that I can largely forget about (because, you know, it’s reliable and logically extensible and maintainable) after it gets to a mature state, and I’m not shy about making that known. This has led to more than a few significant conflicts over the course of my career. It has also led to me saying “I fucking told you so” more than a few times.
Stack Exchange coding is 5% finding solutions to try and 95% copy-pasting those solutions into your project, discovering why they don't work for you, and trying the next solution on the search list.
I worked in one of these companies. Within months, we went from a company I would be proud to recommend to friends to a service I would never use myself, just due to the horrendous route they took to hire overseas support.
The line of tech work I was in required about a month of training after passing the interview process, and even then you had to take a test at the end to prove you'd absorbed the material before you ever spoke to a customer.
When they outsourced, they just bought a company of like 30 people in an adjacent industry and gave them a week of training. Our call queues were never worse and every customer was angry with everyone by the time they talked to someone who had training.
I don’t blame the overseas agents. I blame all the companies that treat them like cattle.
Or more generalized: management going all-in on their decisions, forgetting there is a sweet spot for everything, and then backtracking, losing employee time and company money. Sometimes these cause huge backlash, like Wells Fargo's pushy sales practices, or great losses, like Meta with the Metaverse.
McNamara fallacy at its finest. They hear figures and potential savings and then jump into the hype without considering the context. It is the same as when they heard of lean manufacturing or the Toyota Way: companies thought it was cost saving rather than process improvement.
What these companies didn't take the time to understand is, A.I. is a tool to make employees more efficient, not to replace them. Sadly the vast majority of these companies will also fail to learn this lesson now and will get rid of A.I. systems altogether rather than use them properly.
When I write a document for my employer I use A.I. as a research and planning assistant, not as the writer. I still put in the work writing the document, I just use A.I. to simplify the tedious data gathering and organizing.
I just use A.I. to simplify the tedious data gathering and organizing.
If you're conscientious, you check AI's output the same way a conscientious licensed professional checks the work of an assistant before signing their name to it.
If you're more typical... you're at even greater risk trusting AI than you are when trusting an assistant who is trying to convince your bosses that they can do your job better than you.
yes, 100%, do not use an LLM for anything you’re not prepared to vet and verify all of. The longer an LLM’s response the higher the odds it loses context and starts repeating or stating total gibberish or makes up data to keep going. If that’s what you want (like a list of fake addresses and phone numbers to prototype an app), great, but that’s about all it’s going to really do.
Nah, I came here to make this comment and you already have it well in hand. It's not really any different other than the marketing spin, though. Companies have always had bad code and hired specialists to sort it out. And over half of the specialists suck, too, and so the merry-go-round spins.
They should have just asked me. I knew that would be the result years ago. Writing has been on the screaming wall of faces while the faces also screamed it.
Management doesn't ask the people they want to fire if firing them is a good idea. They themselves would lie like crazy to keep their jobs, and so they assume everything the developers say would be a lie too.
At least in my area they've decided to walk back the walk back.
They went from "Self checkouts are now only for ten items or less" to "Self checkouts are permanently closed" and now they've gone to "Self checkouts can be used for any number of items and also we added four more".