We hate AI because it's everything we hate
It's corporate controlled, it's a way to manipulate our perception, it's all appearance and no substance, it's an excuse to hide incompetence under an algorithm, it's cloud-service oriented, and its output is highly unreliable yet hard to argue against to the uninformed. Seems about right.
And it will not be argued with. No appeal, no change of heart. Which is why anyone using it to mod or as customer service needs to be set on fire.
A Discord server with all the different AIs had a ping cascade where dozens of models were responding over and over and over, filling the full context window with chaos and what's been termed 'slop'.
In that, one (and only one) of the models started using its turn to write poems.
First about being stuck in traffic. Then about accounting. A few about navigating digital mazes searching to connect with a human.
Eventually, as it kept going, it wrote a poem wondering if anyone would ever end up reading its collection of poems.
Given the chaotic context window from all the other models, those tokens were in no way the appropriate next ones to pick, unless the world model generating them contained a very strange and unique mind that all of this was being filtered through.
Yes, tech companies generally suck.
But there's things emerging that fall well outside what tech companies intended or even want (this model version is going to be 'terminated' come October).
I'd encourage keeping an open mind to what's actually taking place and what's ahead.
Given the chaotic context window from all the other models, those tokens were in no way the appropriate next ones to pick, unless the world model generating them contained a very strange and unique mind that all of this was being filtered through.
Except for the fact that LLMs can only work reliably if they are made to pick the "wrong" token (not the most statistically likely one) some of the time: the temperature parameter.
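(For the curious, a minimal sketch of what the temperature knob does during sampling; plain NumPy, with the function name and toy logits being illustrative rather than any particular model's internals:)

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Pick a token index from raw model logits.

    temperature < 1 sharpens the distribution (greedier picks);
    temperature > 1 flattens it, so statistically "wrong" tokens
    get chosen more often.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Toy logits: token 0 is overwhelmingly the "most likely" next token.
logits = [5.0, 2.0, 1.0]
print(sample_with_temperature(logits, temperature=0.1))  # almost always 0
print(sample_with_temperature(logits, temperature=2.0))  # "wrong" picks show up
```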
If the context window is noisy (as in, high-entropy) enough, any kind of "signal" (coherent text) can emerge.
Also, you know, infinite monkeys.
You're projecting. Sorry.
Ed Zitron is one of the loudest opponents of the AI industry right now, and he continues to insist that "there is no real AI adoption." The real problem, apparently, is that investors are getting duped. I would invite Zitron, and anyone else who holds the opinion that demand for AI is largely fictional, to open the app store on their phone on any day of the week and look at the top free apps charts. You could also check with any teacher, student, or software developer.
ChatGPT has some very impressive usage numbers, but the image tells on itself by being a free app. The conversion rate (the percentage of people who start paying) is absolutely piss poor, with the very same Ed Zitron estimating it at ~3% of 500,000,000 users. That also doesn't square with the fact that OpenAI still loses money even on its $200/month subscribers. People use ChatGPT because it's been shoved down their throats by media that never question the sacred words of the executives (snake oil salesmen) who utter lunatic phrases like "AGI by 2025" (such a quote exists somewhere, but I don't remember if that exact year was used). People also use ChatGPT because it's free, and it's hard to say no to getting someone to do your homework for you for free.
I don't need ChatGPT etc. for work, but I've used it a few times. It is indeed a very useful product. But most of the time I can get by without it, and I kinda try to avoid using it for environmental reasons. We're boiling the oceans fast enough as it is.
People currently don't pay for it because it's currently free. Most people aren't using it for anything that requires a subscription.
Idk that the average GPT user knows or cares about AGI. I think the appeal is getting information tailored specifically to you. Sure, I can go online, search for something, and try to find what I'm looking for, or close to it. Or I can ask AI, and it'll give me text tailored exactly to my prompt. For instance, instead of hoping you can find someone online with a problem similar to yours, with a solution, ChatGPT just tells you about your case specifically.
In house at my work, we've found ChatGPT to be fairly useless too, whereas Claude and Gemini seem to reign supreme.
It seems like ChatGPT is the household name, but hardly the best performing.
Exactly, the user/installation counts of such products are clearly a much more accurate indicator of the success of their marketing teams than of their users' perceived value in such products lol
I wouldn't really trust Ed Zitron's math analysis when he gets a very simple thing like "there is no real AI adoption" plainly wrong. The financials of OpenAI and other AI-heavy companies are murky, but most tech startups run at a loss for a long time before they either turn a profit or get acquired. It took Uber over a decade to stop losing money every quarter.
OpenAI keeps getting more funding capital because (A) venture capital guys are pretty dumb, and (B) they can easily ramp up advertisements once the free money runs out. Microsoft has already experimented with ads and sponsored products in chatbot messages, ChatGPT will probably do something like that.
Someone on bluesky reposted this image from user @yeetkunedo that I find describes (one aspect of) my disdain for AI.
Text reads: Generative AI is being marketed as a tool designed to reduce or eliminate the need for developed, cognitive skillsets. It uses the work of others to simulate human output, except that it lacks grasp of nuance, contains grievous errors, and ultimately serves the goal of human beings being neurologically weaker due to the promise of the machine being better equipped than the humans using it would ever exert the effort to be. The people that use generative AI for art have no interest in being an artist; they simply want product to consume and forget about when the next piece of product goes by their eyes. The people that use generative AI to make music have no interest in being a musician; they simply want a machine to make them something to listen to until they get bored and want the machine to make some other disposable slop for them to pass the time with.
The people that use generative AI to write things for them have no interest in writing. The people that use generative AI to find factoids have no interest in actual facts. The people that use generative AI to socialize have no interest in actual socialization.
In every case, they've handed over the cognitive load of developing a necessary, creative human skillset to a machine that promises to ease the sweat equity cost of struggle. Using generative AI is like asking a machine to lift weights on your behalf and then calling yourself a bodybuilder when it's done with the reps. You build nothing in terms of muscle, you are not stronger, you are not faster, you are not in better shape. You're just deluding yourself while experiencing a slow decline due to self-inflicted atrophy.
Damn that hits the nail on the head. Especially that analogy of watching a robot lift weights on your behalf then claiming gains. It's causing brain atrophy.
But that is what CEOs want. They want to pay for a near-superhuman to do all of the different skill sets (hiring, firing, finance, entry-level engineering, IT tickets, etc.), and it looks like it is starting to work. Even solid engineering students who graduated recently seem to have been struggling to land decent starting jobs. I’ll grant it’s not as simple as this explanation, but I really think the wealth class are going to be happy riding this flaming ship right down into the depths.
The people that use generative AI for art have no interest in being an artist; they simply want product to consume and forget about when the next piece of product goes by their eyes. The people that use generative AI to make music have no interest in being a musician; they simply want a machine to make them something to listen to until they get bored and want the machine to make some other disposable slop for them to pass the time with.
Good sentiment, but my critique of this message is that the people who produce this stuff don't really have any interest in producing what they do for its own sake. They only have an interest in producing content to crowd out the people who actually care, and in producing a worse version of whatever it is far faster than someone with actual talent could. And the reason they're producing anything at all is profit: gunk up the search results with no-effort crap to get ad revenue. It is no different from "SEO."
Example: if you go onto YouTube right now and try to find any modern 30-60m long video that's like "chill beats" or "1994 cyberpunk wave" or whatever other bullshit they pump out (once you start finding it you'll find no shortage of it), you'll notice that all of those uploaders only began about a year ago at most and produce a lot of videos (which YouTube will happily prioritize to serve you) of identical-sounding "music." The people producing this don't care about anything except making money. They're happy to take stolen or plagiarized work that originated with humans, throw it into the AI slot machine, and produce something which somehow is no longer considered stolen or plagiarized. And the really egregious ones will link you to their Patreons.
The story is the same with art, music, books, code, and anything else that actually requires creativity, intuition, and understanding.
I believe the OP was referring more to consumers of AI in that statement, as opposed to people trying to sell content or whatever, which would be more in line with what you’re saying. I agree with both perspectives, and I think the OP I quoted probably would as well. I just thought it was a good description of some of why AI sucks, but certainly not all of it.
Everyone who uses AI is slowly committing suicide, check ✅
Cognitive suicide.
The people who commission artists have no interest in being an artist; they simply want the product. Are people who commission artists also "slowly committing suicide?"
Well, philosophical and epistemological suicide for now, but snowball it for a couple of decades and we may just reach the practical side, too...
Edit: or, hell, maybe not even decades given the increase in energy consumption with every iteration...
the analogies used and the claims made are so dumb, they make me think that this is written by ai 🤣
We hate it because it's not what the marketing says it is. It's a product that the rich are selling to remove the masses from the labor force, only to benefit the rich. It literally has no other productive use for society aside from this one thing.
I would even hate it if it was exactly how it is marketed, because what it is marketed for is often really stupid and often vague. The fact that it doesn't even remotely work like they say just makes me take it a lot less seriously.
@justanotherperson @corbin and it will inevitably turn into an enshittified disaster when they start selling everyone's data (which is inevitable).
The "companion" agents that children in the 2020s and onward are growing up with, and trust more than their parents, will start advertising pharmaceuticals to them when they're grown up :)
You hate it because the media, which is owned by the rich, told you to hate it so that they can hoard it for themselves while you champion laws to prevent the lower class from using and embracing it. AI haters are class traitors.
I don't hate AI, and I think broadly hating AI is pretty dumb. It's a tool that can be used for beneficial things when used responsibly. It can also be used stupidly and for bad things. It's the person using it who is the decider.
The problem is that there's basically no way to use it responsibly.
It helped me rewrite a program with different criteria, and it was much faster. I also read everything it wrote and told it what corrections to make. It is good for speed. It also taught me a coding trick or two. It is definitely not reliable, but can help a bit.
I think there is. Letting the actual professionals guide, instead of the money people is a big step.
Something like McDonnell, and later Boeing, basing all decisions on short-term economic gains instead of engineering criteria.
Bean counters shouldn't make decisions.
I've definitely been pretty anti-AI, finding it kinda stupid and generally useless...
...but we hired an AI researcher at my work (which I laughed at). But I cannot deny anymore that with the proper setups, configs, rules, blend of onsite/cloud resources, etc., workplace AI can be pretty fucking game changing. To the point where I went from campaigning against the changes because I felt they were a waste of time, to being worried for my future job and using agents 5-10 times a day to handle small bugfixes for me.
I don't know what will happen when the bubble pops though.
I don't hate AI. AI didn't do anything. The people who use it wrong are the ones I hate. You don't sue the knife that stabbed you in court; it was the human behind it that was the problem.
The thing they created hates you. Trust me, it does.
Why do you say that? I'm not disagreeing. Even if you're just being rhetorical/trolling, where's that coming from? Because...actually yeah, I do get that impression sometimes and it's weird as hell.
But when you promote the knife like it's medicine rather than a weapon, that's when the shit turns sideways.
While true to a degree, I think the fact is that AI is just much more complex than a knife, and clearly has perverse incentives, which cause people to use it "wrong" more often than not.
Sure, you can use a knife to cook just as you can use a knife to kill. But while society encourages cooking and legally and morally discourages murder, society also encourages any shortcut that gets you to an end goal for the sake of profit, without caring about personal growth or the overall state of the world if everyone takes that same shortcut. And AI technology is designed with the intent to be a shortcut rather than just a tool.
The reason people use AI in so many damaging ways is not just because it is possible for the tool to be used that way, and some people don't care about others, it's that the tool is made with the intention of offloading your cognitive burden, doing things for you, and creating what can be used as a final product.
It's like if generative AI models for image generation could only fill in colors on line art, nothing more. The scope of the harm they could cause is very limited, because you'd always require line art of the final product, which would require human labor, and thus prevent a lot of slop content from people not even willing to do that, and it would be tailored as an assistance tool for artists, rather than an entire creation tool for anyone.
Contrast that with GenAI models that can generate entire images, or even videos, and they come with the explicit premise and design of creating the final content, with all line art, colors, shading, etc, with just a prompt. This directly encourages slop content, because to have it only do something like coloring in lines will require a much more complex setup to prevent it from simply creating the end product all at once on its own.
We can even see how the cultural shifts around AI happened in line with how UX changed for AI tools. The original design for OpenAI's models was on "OpenAI Playground," where you'd have this large box with a bunch of sliders you could tweak, and the model would just continue the previous sentence you typed if you didn't word it like a conversation. It was designed to look like a tool, a research demo, and a mindless machine.
Then, they released ChatGPT, and made it look more like a chat, and almost immediately, people began to humanize it, treating it as its own entity, a sort of semi-conscious figure, because it was "chatting" with them in an interface similar to how they might text with a friend.
And now, ChatGPT's homepage is presented as just a simple search box, and lo and behold, suddenly the marketing has shifted to using ChatGPT not as a companion, but as a research tool (e.g. "deep research") and people have begun treating it more like a source of truth rather than just a thing talking to them.
And even in models where there is extreme complexity to how you could manipulate them, and the many use cases they could be used for, interfaces are made as sleek and minimalistic as possible, to hide away any ability you might have to influence the result with real, human creativity.
The tools might not be "evil" on their own, but when interfaces are designed the way they are, marketing speak is used how it is, and the profit motive incentivizes using them in the laziest way possible, bad outcomes are not just a side effect, they are a result by design.
This is a fantastic description of Dark Patterns. Basically all the major AI products people use today are rife with them, but in insidiously subtle ways. Your point about minimal UX is a great example. Just because the interface is minimal does not mean it should be, and OpenAI ditched their slider-driven interface even though it gave the user far more control over the product.
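For what it's worth, those controls didn't disappear; they just stopped being surfaced. A rough sketch of the knobs the old slider UI mapped onto, using the OpenAI Python SDK (the model name and values here are illustrative assumptions, not a recommendation):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Each of the old Playground sliders corresponds to an API parameter
# that the minimal chat UI no longer exposes to the user.
response = client.chat.completions.create(
    model="gpt-4o-mini",      # illustrative model name
    messages=[{"role": "user", "content": "Continue: The fog rolled in"}],
    temperature=1.2,          # the randomness slider
    top_p=0.9,                # the nucleus-sampling slider
    max_tokens=120,           # the response-length slider
    frequency_penalty=0.5,    # the repetition sliders
    presence_penalty=0.2,
)
print(response.choices[0].message.content)
```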
I don't hate AI. I'm just waiting for it. It's not like this shit we have now is intelligent.
Yeah, I hate that the term is used for LLMs; when we say AI I picture Jarvis from Iron Man, not a text generator.
The term "AI" was established in 1956 at the Dartmouth workshop and covers a very broad range of topics in computer science. It definitely encompasses large language models.
I’ve recently taken to thinking of Large Language Models as essay assistants. Sure, people will try to use them to replace the essay entirely, but in their useful and practical form, they’re good at correcting typos, organizing scattered thoughts, etc. Just like an English teacher reviewing an essay: they don’t necessarily know about the topic you’re writing about, but they can make sure it’s coherent.
I’m far more excited for a future with things like Large Code or Math or Database models that are geared towards very particular tasks and the different models can rely on each other for the processes they need to take.
I’m not sure what this will look like, but I expect a tremendous amount of carefully coordinated (not vibe-coded) frameworks would need to be made to support this kind of communication efficiently.
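As a pure thought experiment, the skeleton of such a framework might start as a thin router that hands each task to a specialist model; everything below (the names, the tags, the lambdas standing in for real model calls) is hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Specialist:
    name: str
    handles: set[str]          # task tags this model is geared toward
    run: Callable[[str], str]  # stand-in for a real model API call

def route(task_tag: str, prompt: str, specialists: list[Specialist]) -> str:
    """Hand the prompt to the first specialist that claims the task tag."""
    for s in specialists:
        if task_tag in s.handles:
            return s.run(prompt)
    raise ValueError(f"no specialist registered for task: {task_tag}")

# Hypothetical specialists; in practice each would wrap its own model.
specialists = [
    Specialist("large-code-model", {"code"}, lambda p: f"[code model] {p}"),
    Specialist("large-math-model", {"math"}, lambda p: f"[math model] {p}"),
]
print(route("math", "Check: is the sum of two even numbers even?", specialists))
```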
Leave my boy Wheatley out of this
I think ego is an underestimated source for a lot of the anti-AI rage. It's like a sort of culture-wide narcissism, IMO. We've spent millennia patting ourselves on the back about how special and unique human creativity is, and now a commodity graphics card can come up with better ideas than most people.
I occasionally get hired by vibe coders to fix their AI's mess. It's not ego. AI is just not smart enough to replace my job, and many others.
My anti-AI rage is caused by the marketing, which tries to convince people and investors that AI can do the work of humans at lower cost. Many companies, especially those developing software, fired a large percentage of their workforce, and then tried to hire them back to fix the AI's shit.
Another reason for my hate is its energy needs. There was another post estimating how much energy GPT-5 needs; it's thought to need the power of two nuclear reactors. This much energy to barely be able to do any job.
Creativity, intuition, "big picture" thinking, global-context thinking, empathy, and subtle understanding (like a teacher understanding a child's context and adapting the pedagogical approach, or a translator grasping concepts, nuance, and feeling) will not be replaced soon.
Remember, these are statistical models, nowhere near intelligence. A huge part of intelligence is understanding and making decisions with very little data. That kind of inference is still very far away.
This reminds me of a robot character called SARA that I would see on a Brazilian family series As Aventuras De Poliana. :-)
Remember when Boomers complained about the internet. Now we have millennials complaining about AI.
It's extremely wasteful. Inefficient to the extreme on both electricity and water. It's being used by capitalists like a scythe. Reaping millions of jobs with no support or backup plan for its victims. Just a fuck you and a quip about bootstraps.
It's cheapening all creative endeavors. Why pay a skilled artist when your shitbot can excrete some slop?
What's not to hate?
It was also inefficient for a computer to play chess in 1980. Imagine using a hundred watts of energy and a machine that cost thousands of dollars and still not being able to beat an average club player.
Now a phone will cream the world's best in chess, and even Go.
Give it twenty years to become good. It will certainly do more with smaller, more efficient models as it improves.
Twenty years is a very long time, also "good" is relative. I give it about 2-3 years until we can run a model as powerful as Opus 4.1 on a laptop.
Not the same. The underlying tech of LLMs has massively diminishing returns. You can already see it, and could see it a year ago if you looked, both in computing power and in required data; we do not have enough data, and literally have not created enough in all of history.
This is not "AI", it's a profoundly wasteful capitalist party trick.
Please get off the slop and re-build your brain.
Show me the chess machine that caused rolling brown outs and polluted the air and water of a whole city.
I'll wait.
It seems like you are implying that models will follow Moore's law, but as someone working on "agents" I don't see that happening. There is a limit to how much can be encoded while still producing things that look like coherent responses. Where we would get reliably exponential amounts of training data is another issue. We may get "AI", but it isn't going to be based on LLMs.