Model Evaluation and Threat Research is an AI research charity that looks into the threat of AI agents! That sounds a bit AI doomsday cult, and they take funding from the AI doomsday cult organisat…
The point isn’t to increase employee productivity (or employee happiness, obviously).
The point is to replace most employees.
In order for that to happen, LLM-assisted entry-level developers merely need to be half as good as expert human unassisted developers, at scale, at a lower aggregate cost.
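To put rough numbers on that premise (all figures made up purely to show the arithmetic; nothing here comes from the study):

    #include <stdio.h>

    int main(void)
    {
        /* Assumed, illustrative figures only. */
        double expert_cost   = 200000.0; /* expert salary + overhead, per year */
        double expert_output = 1.0;      /* normalize: one unit of work per year */

        double junior_cost   = 90000.0;  /* LLM-assisted junior, salary + overhead */
        double junior_output = 0.5;      /* the "half as good" premise */

        /* The swap only pencils out if cost per unit of work actually drops. */
        printf("expert:     $%.0f per unit of work\n", expert_cost / expert_output);
        printf("junior+LLM: $%.0f per unit of work\n", junior_cost / junior_output);
        return 0;
    }

Under those made-up numbers the replacement saves about 10% per unit of work, and only if the "half as good" premise actually holds at scale.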
LLM-assisted entry-level developers merely need to be half as good as expert human unassisted developers
This isn't even close to existing.
The theoretical cyborg developer at that skill level would surely be introducing horrible security bugs or brittle features that don't stand up to change.
Sadly, I think this is exactly what many CEOs expect to happen, because they've been sold on the OpenAI and Anthropic line that it's just around the corner.
Point 3 is what I'm getting at. They want to unload expensive developers so badly. They'll try it, and keep trying it, even as products suffer, in the hopes of driving down labor costs.
That's a fool's errand. Only highly skilled, experienced devs are going to spot the subtle logic or architectural issues that will bite everyone the moment the product grows larger.
As one of the people in the "hero group" (for lack of a better term) your comment references: eh. I didn't start out with all this knowledge and experience. It built up over time.
It's more about the mode of thinking and how you engage with a problem than it is about specific "highly skilled" stuff. The skill and experience help: they refine, they assist in filtering.
The reason I make this comment is that I think it's valuable for anyone who can do the job well to get to do it, and that it's never good to gatekeep people out. Let's not unnecessarily contribute to impostor syndrome.
But when a mid-tier or entry-level dev can do 60% of what a senior can do
This simply isn't how software development skill levels work. You can't give a new dev a tool and have them do the things experienced devs can do and new devs can't. You can maybe get faster low-tier output (though low-tier output demands more review work from experienced devs, so its utility is questionable). I'm sorry, but you clearly don't understand the topic you're making these bold claims about.
Even pre-AI I had to deal with a project where they shoved testing and compliance onto juniors for a long time. What a fucking mess it was. I had to go through every commit mentioning Coverity, because they had a junior fixing Coverity-flagged "issues". I spent at least two days debugging a memory-corruption crash caused by one such "fix", and then I had to spend who knows how long reviewing every other one.
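To make that concrete, here's a contrived C sketch of the failure mode (hypothetical names and code, not the actual project): Coverity flags a leak on the error path, and the "fix" sprinkles free() around until the warning goes away, converting a leak report into a use-after-free.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CONF_SIZE 256

    /* Stand-in for whatever actually fills the buffer. */
    static int read_file(const char *path, char *buf, size_t n)
    {
        (void)path;
        strncpy(buf, "key=value", n - 1);
        buf[n - 1] = '\0';
        return 0;
    }

    static char *load_config(const char *path)
    {
        char *buf = malloc(CONF_SIZE);
        if (buf == NULL)
            return NULL;
        if (read_file(path, buf, CONF_SIZE) != 0) {
            free(buf);  /* the legitimate fix: Coverity flagged a leak here */
            return NULL;
        }
        free(buf);      /* the blind "fix": freed on the success path too,
                           just to make the tool shut up */
        return buf;     /* every caller now reads freed memory */
    }

    int main(void)
    {
        char *cfg = load_config("app.conf");
        if (cfg != NULL)
            printf("%s\n", cfg); /* use-after-free: may "work", may crash */
        free(cfg);               /* and a double free on top */
        return 0;
    }

The nasty part is that this usually passes a smoke test and only corrupts memory later, which is exactly why it takes days to debug.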
And don't get me started on tests. 200+ tests, and none of them caught several regressions in the handling of parameters that are shown early in the frigging how-to. Not some obscure corner case: the stuff you immediately run into if you just follow the documentation.
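A hypothetical sketch (invented names, not the real suite) of the kind of "test" I mean, next to what it should have asserted:

    #include <assert.h>
    #include <stddef.h>
    #include <string.h>

    /* Unit under test (stubbed for illustration). Suppose the how-to says
       parse_output_flag("--output=log.txt") must return "log.txt". */
    static const char *parse_output_flag(const char *arg)
    {
        static char value[64];
        const char *eq = strchr(arg, '=');
        if (eq == NULL)
            return NULL;
        strncpy(value, eq + 1, sizeof(value) - 1);
        value[sizeof(value) - 1] = '\0';
        return value;
    }

    /* The kind of test I kept finding: it only proves nothing crashed. */
    static void test_parse_runs(void)
    {
        parse_output_flag("--output=log.txt"); /* no assertion at all */
    }

    /* What it should check: the behavior the how-to actually documents. */
    static void test_parse_returns_value(void)
    {
        assert(strcmp(parse_output_flag("--output=log.txt"), "log.txt") == 0);
        assert(parse_output_flag("--output") == NULL);
    }

    int main(void)
    {
        test_parse_runs();
        test_parse_returns_value();
        return 0;
    }

The first test pads the "200+ tests" number while catching nothing; regressions sail straight through it.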
With AI, all the numbers would be much larger: more commits "fixing Coverity issues" (and, worse yet, fixing "issues" that the LLM sees in the code), more so-called "tests" that don't actually flag any real regressions, and so on.
Yeah, the glorious future where every half-as-good-as-expert developer is now only 25% as good as an expert (a level of performance also known as being "completely shit at it"), but he's writing 10x the amount of unusable shitcode.
You scoff, but this is exactly the future CEOs and upper management absolutely want.
Why? Because labor is too expensive and “entitled” (wanting things like time off, health insurance, remote work, and so on).
They will do everything they can to squeeze labor and disempower the bargaining capability that knowledge workers have.
Why do you think Microsoft has been trying to screengrab everything that knowledge workers do? To train their models to do that work instead. Why did they just lay off thousands of workers and direct something like $80 billion towards AI investments?
You know why.
Quality doesn't matter. Income minus expenses matters. You are an expense. They will do everything they can to replace you as soon as they catch even a whiff of economic viability (or even before), because even if it's more costly right now, it can drive down labor costs by putting the squeeze on employees.
This is very much a "nine women can make a baby in one month" argument.
The idea that there can even be "two half-as-good developers" is a misunderstanding of how anything works. Productivity doesn't add up like that; if it did, the study would be a dud, because people could just run two AI-assisted devs side by side and call it 160% productivity.