I know this is just my tiny view on things, but I've been testing code from a very experienced dev who was recently instructed to use AI coding tools (Cursor) in their work.
Functionality in our product is now breaking in weird and wonderful ways. Completely new 'WTF!' moments. It's hard to describe.
Core behavior that I've taken for granted - things I didn't realise could go wrong - is now going wrong.
It reminds me of how, after a particular iOS update many years back, Apple's own calculator wasn't doing addition correctly in certain scenarios.
For me, it's both fascinating and unnerving. Like some unfathomable cosmic horror.
I don't use AI tools when I code (my work IDE is way too old & I prefer it that way), but elsewhere at my workplace they ran a pilot where people tried Cursor for several months.
What they found was that it was useful as a first step in the process, but the output almost always needed checking by hand afterwards. Measured "code efficiency" changes ranged from 10% faster to 30% slower, averaging roughly 20% slower overall. Yet almost all participants reported feeling like they'd gotten about 20% faster. It made them feel like they were working faster than they were, even though it was actively hindering them.
Busywork fills time and can feel productive. I found it a constant temptation as an eng and pm.
I could spend a couple of hours thinking hard about an actual problem that needs solving, orrrrrr I could fuck around with the bug database doing stuff that gets counted by my boss...
And bosses need to be on alert that they aren't handing out busywork and feeling good that their employees are no longer staring into space, doodling, or chatting (which is often what thinking looks like).
The whole LLM thing needs to be studied for all of the cognitive dark patterns it exploits. It's like a grift encyclopedia.