Sounds like "the purpose of the system is what it does." Every invention that gets funding does that. I watched the development of ML techniques over the last decade or so; none of the researchers developing AI would have said that was the purpose. They had much loftier hopes for AI.
It's also useful for finding stuff in large documents or codebases. For example, I was recently trying to understand how different concepts in a paper related to each other, and an LLM was able to find the relevant parts of the paper, which helped me piece things together.
I know Google sucks, but I will give a little credit to the newer Pixels' "Circle to Search" feature. It's nice to pull up an image, circle part of it to search, and see where it came from, the context behind it, etc.
Circle to Search and the two you mention are probably the only genuinely useful things to come out of AI so far.
Because those "AI" are very good at writing keyword-riddled gibberish, making human SEO writers obsolete. One SEO spam admin can now "do the work" of a hundred humans concocting shit articles.
The same goes for artists. They used to get a few bucks for primitive images, and most of them can't actually draw anything but primitive images. "AI" pushes them out of the market.
I understand it as being similar to industrial workers opposing automation in factories under Capitalism, where technological progress ultimately ends up serving Capital rather than supporting the proletariat. Office workers have largely not faced this same struggle until now, and are engaging with this contradiction for the first time.
This is further compounded by rising power costs at a time when cheap energy isn't abundant, by Finance Capital being dumped into the sector to chase profits from an emerging rather than established market before the tendency of the rate of profit to fall (TRPF) makes profits more scarce, leading to over-application, and by the lack of compensation for the artists and writers whose work ends up training these models.
Exactly, the hate for AI is reactionary in nature. What people are actually upset about is how this tech ends up being applied under capitalism, and that's where the anger should be directed. It's also worth noting how differently AI is applied in China, where it's predominantly used in industry and robotics. Even LLMs are being put to socially useful purposes like improving healthcare and government services. There's also a big difference in how it's being developed: Chinese companies treat AI as a commodity, often releasing models as open source and optimizing them for efficiency, while the Western approach has been to turn models into services that can be monetized.
Personally, I've found AI to be a very useful tool for coding. It's sped up my workflow significantly because it handles a lot of boilerplate. It's particularly good for things like building UIs quickly. I can throw some sample JSON at a model and have it produce a decent-looking React component. It used to take me hours to figure out styling and handle different behaviors, which I find really tedious. I also find it very handy for discovering language features. I hadn't worked with JavaScript in a long time, and the language has evolved significantly since I last touched it. Now that I have a project using it at work, I can move much faster without constantly hunting for how to do a particular thing.
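To give a concrete sense of that JSON-to-component workflow (everything here is hypothetical, just the shape of what a model typically scaffolds): you paste in a sample payload, ask for a component, and the model builds the data helpers and then wires them into JSX. Here's the framework-free core of that, sketched in plain JavaScript:

```javascript
// Hypothetical sample payload you might paste into the model:
const sample = {
  name: "Ada",
  tasks: [
    { title: "Review PR", done: true },
    { title: "Write docs", done: false },
  ],
};

// The kind of helpers a model scaffolds before wiring them into JSX:
// render one task as the label text a <li> would show.
function taskLabel(task) {
  return `${task.done ? "[x]" : "[ ]"} ${task.title}`;
}

// Summary line for the component header.
function summarize(user) {
  const doneCount = user.tasks.filter((t) => t.done).length;
  return `${user.name}: ${doneCount}/${user.tasks.length} tasks done`;
}

console.log(summarize(sample)); // → "Ada: 1/2 tasks done"
console.log(sample.tasks.map(taskLabel).join("\n"));
```

The model's version would wrap these in a component and add styling, but the tedious part it saves you is exactly this: reading the shape of the JSON and writing the glue around it.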
My experience is that this is already a useful tool, and it's only going to keep getting better going forward. At the same time, it's not magic, and you still have to learn how to get the most out of it and how to apply it effectively.
And a couple more articles I can recommend that have good takes on the subject.
While AI offers transformative potential, significant criticisms highlight its drawbacks. Current systems often perpetuate biases embedded in training data, leading to discriminatory outcomes in hiring, law enforcement, and lending. The environmental cost of training large models, such as massive energy consumption and carbon emissions, raises sustainability concerns. AI-driven automation threatens job displacement, exacerbating economic inequality, while opaque "black-box" algorithms undermine accountability in critical domains like healthcare and criminal justice. Privacy erosion, through pervasive surveillance and data exploitation, further fuels distrust. Though AI's capabilities are impressive, its unchecked deployment risks deepening societal inequities and prioritizing efficiency over ethical considerations.