When I’m using AI for coding, I find myself constantly making little risk assessments about whether to trust the AI, how much to trust it, and how much work I need to put into verifying the results. And the more experience I get with using AI, the more honed and intuitive these assessments become.
For a system with such high costs (to the environment, to the vendor, to the end user in the form of a subscription), that's a damningly low level of reliability.
If my traditional code editor's code completion feature is even 0.001% unreliable – say it emits a name that just isn't in my code base – that feature is broken and needs to be fixed. If I have to start doubting whether the feature works every time I use it, it's no longer a tool I can rely on.
Why would we accept far worse reliability in a tool that consumes gargantuan amounts of power, water, and political effort, and comes with a high subscription fee?