Short term yes; long term probably not. All the dipshit c-suites pushing the “AI” worker replacement initiatives are going to destroy their workforces and then realize that LLMs can’t actually reliably replace any of the workers they fired. And I love that for management.
You're referring to something that is changing and improving constantly. In the long term LLMs are going to be even better than they are now. It's ridiculous to think they won't be able to replace any of the workers who were fired. LLMs are going to allow 1 person to do the job of multiple people. Will they replace all people? No. But even if they just allow 1 person to do the job of 2 people, that's 50% of the workforce unemployed. This isn't even mentioning how good robotics has gotten over the past 10 years.
It can potentially allow 1 worker to do the job of 10. As far as the other 9 are concerned, they've been replaced. I don't think they'll care much for the nuance that they technically weren't replaced by AI, but by 1 co-worker who is using AI to be more efficient.
That doesn't necessarily mean that we won't have enough jobs any more, because when in human history have we ever become more efficient and said "ok, good enough, let's just coast now"? We will just increase the ambition and scope of what we will build, which will require more workers working more efficiently.
But that still really sucks because it's not going to be the same exact jobs and it will require re-training. These disruptions are becoming more frequent in human history and it is exhausting.
We still need to spread these gains so we can all do less and also help those whose lives have been disrupted. Unfortunately that doesn't come for free. When workers got the 40 hour work week it was taken by force.
My colleagues are starting to use AI, and it just makes their code worse and harder to review. I honestly can't imagine that changing; AI doesn't actually understand anything.
There's actually someone I work with who's basically that. Essentially, their job is to communicate across teams to coordinate product releases. They're not involved in deciding what's in those releases; they just need to know what's going into each one and when it's happening. They also do something with budget approvals.
Basically, they just look at Jira and spreadsheets, and I don't think they really make changes to either. We could 100% do without that role, at least from my perspective.
Then again, we're not worried about job security here. We have good funding, we're expanding our team, and my boss is generally against AI. But if I had to suggest someone to let go, it would be that person. It's a really weird role and they don't really have any unique skills, but whatever.
Meanwhile, I'm at my job trying to get an instance of a machine that can automatically SFTP somewhere as part of a script like it's 1998 and I need a shell account from my dialup connection.
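For reference, the thing I'm asking for is roughly this much code. A minimal sketch assuming the paramiko library, with a made-up host, key path, and filenames; the real setup would obviously differ:

```python
# Minimal scripted SFTP upload sketch (hypothetical host, key path, and paths).
import paramiko

HOST = "sftp.example.com"          # made-up host
KEY_PATH = "/home/me/.ssh/id_rsa"  # made-up key location

# Open an SSH transport and authenticate with a private key.
key = paramiko.RSAKey.from_private_key_file(KEY_PATH)
transport = paramiko.Transport((HOST, 22))
transport.connect(username="deploy", pkey=key)

# Upload one file over SFTP, then clean up.
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put("report.csv", "/incoming/report.csv")
sftp.close()
transport.close()
```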
That is very true, especially when it comes to any administrative task.
However, I'd argue that these jobs are less likely to be replaced, as they're born out of a system that favors bureaucracy for its own sake over efficiency.
Challenging that system would shift the power dynamics, often towards subordinates, which, of course, wouldn't really be accepted by those in leadership positions.
Microsoft will soon allow businesses and developers to build AI-powered Copilots that can work like virtual employees and perform tasks automatically.
Instead of Copilot sitting idle waiting for queries, it will be able to do things like monitor email inboxes and automate a series of tasks or data entry that employees normally have to do manually.
It’s a big change in the behavior of Copilot, moving it into what the industry commonly calls AI agents: chatbots that can perform complex tasks autonomously.
Microsoft’s argument that it only wants to reduce the boring bits of your job sounds idealistic for now, but with the constant fight for AI dominance between tech companies, it feels like we’re increasingly on the verge of more than basic automation.
You can build Microsoft’s Copilot agents with the ability to flag certain scenarios for humans to review, which will be useful for more complex queries and data.
We constantly see AI fail on basic text prompts, provide incorrect answers to queries, or add extra fingers to images, so do businesses and consumers really trust it enough to automate tasks in the background?
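To make the "flag certain scenarios for humans to review" part concrete, here's a generic sketch of that escalation pattern. This is not Copilot's actual API; every name and threshold below is invented purely for illustration:

```python
# Generic human-in-the-loop escalation sketch (not Copilot's API; all names invented).
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    confidence: float  # how sure the agent is about its proposed action

REVIEW_THRESHOLD = 0.8  # arbitrary cutoff for this illustration

def handle(task: Task) -> str:
    """Automate high-confidence tasks; flag everything else for a human."""
    if task.confidence >= REVIEW_THRESHOLD:
        return f"automated: {task.description}"
    return f"flagged for human review: {task.description}"

print(handle(Task("file routine expense report", 0.95)))
print(handle(Task("approve unusual budget request", 0.40)))
```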
The original article contains 842 words, the summary contains 188 words. Saved 78%. I'm a bot and I'm open source!
Probably worth noting that this bot uses LSA (latent semantic analysis) and, at least as I understand it, that's quite different from GPTs and the current wave of "AI" discussed in this article.
Incoming data stream! This one is from a scientist! Ok Chat-GPT, get ready to learn how to be a scientist!
Incoming data stream! From a mom... oh shit Chat-GPT... run! It's a single mom! Oh hold on! She found someone to love again! Nope, he was just using her for her beauty again. Well, maybe just learn to do all the chores, feed the kids, go to work, get sexually abused, fall asleep, do all the chores and make-up! And figure out if OnlyFans really does work! And more chores.