Lemmy be like
Whether intentional or not, this is gaslighting. "Here's the trendy reaction those wacky lemmings are currently upvoting!"
Getting to the core issue, of course we're sick of AI, and have a negative opinion of it! It's being forced into every product, whether it makes sense or not. It's literally taking developer jobs, then doing worse. It's burning fossil fuels and VC money and then hallucinating nonsense, but still it's being jammed down our throats when the vast majority of us see no use-case or benefit from it. But feel free to roll your eyes at those acknowledging the truth...
It's literally making its users nuts, or exacerbating their existing mental illness. Not hyperbole, according to psychologists, and this isn't conjecture:
https://futurism.com/openai-investor-chatgpt-mental-health
https://futurism.com/chatgpt-psychosis-antichrist-aliens
What is the gaslighting here? A trend, or the act of pointing out a trend, do not seem like gaslighting to me. At most it seems like bandwagon propaganda or the satire thereof.
For the second paragraph, I agree we (Lemmings) are all pretty against it and we can be echo-chambery about it. You know, like Linux!
But I would also DISagree that we (population of earth) are all against it.
It seems like the most immature and toxic thing to me to invoke terms like "gaslighting," ironically "toxic," and all the other terms you associate with these folks, defensively and for any reason, whether it aligns with what the word actually means or not. Like a magic phrase that instantly makes the person you use it against evil, manipulative, and abusive, and the person who uses it a moral saint and vulnerable victim, while indirectly muting all those who have genuine uses for the terms. Or I'm just exaggerating madly, and it's just the typical over- and mis-use of words.
Anyhow, sadly necessary disclaimer: I agree with almost all of the current criticism raised against AI, and my disagreements are purely with mischaracterizations of the underlying technology.
EDIT: I just reminded myself of when a teacher went ballistic at class for misusing the term "antisocial," saying we're eroding and polluting all genuine and very serious uses of the term. Hm, yeah it's probably just that same old thing. Not wrong for going ballistic over it, though.
Are you honestly claiming a shitpost is gaslighting?
What a world we live in.
It's just a joke bro.
Wouldn't the opposite of artificial intelligence be natural stupidity?
Much love
The currently hot LLM technology is very interesting, and I believe it has legitimate use cases, if we develop them into tools that assist work. (For example, I'm very intrigued by the stuff that's happening in the accessibility field.)
I mostly have a problem with the AI business. Ludicrous use cases (shoving AI into places where it has no business being). Sheer arrogance about the sociopolitics in general. Environmental impact. LLMs aren't good enough for "real" work, but snake oil salesmen keep saying they can do it, and uncritical people keep falling for it.
And of course, the social impact was just not what we were ready for. "Move fast and break things" may be a good mantra for developing tech, but not for releasing stuff that has vast social impact.
I believe the AI business and the tech hype cycle are ultimately harming the field. Usually, AI technologies just got gradually developed and integrated into software where they served a purpose. Now, the field is marred with controversy for decades to come.
If we develop them into tools that assist work.
Spoilers: We will not
I believe the AI business and the tech hype cycle is ultimately harming the field.
I think this is just an American way of doing business. And it's awful, but at the end of the day people will adopt technology if it makes them greater profit (or at least screws over the correct group of people).
But where the Americanized AI seems to suffer most is in their marketing fully eclipsing their R&D. People seem to have forgotten how DeepSeek spiked the football on OpenAI less than a year ago by making some marginal optimizations to their algorithm.
The field isn't suffering from the hype cycle nearly so much as it suffers from malinvestment. Huge efforts to make the platform marketable. Huge efforts to shoehorn clumsy chat bots into every nook and cranny of the OS interface. Vanishingly little effort to optimize material consumption or effectively process data or to segregate AI content from the human data it needs to improve.
Spoilers: We will not
Generative inpainting/fill is enormously helpful in media production.
Wait until power tripper db0 sees this, crying that the AI photos in all their comms are cringe.
The reason most web forum posters hate AI is that AI is ruining web forums by polluting them with inauthentic garbage. Don't treat it like it's some sort of irrational bandwagon.
For those who know
I need to watch that video. I saw the first post but haven’t caught up yet.
It's just slacktivism, no different from all the other Facebook profile picture campaigns.
Now see, I like the idea of AI.
What I don't like are the implications, and the current reality of AI.
I see businesses embracing AI without fully understanding its limits: stopping the hiring of junior developers, and often firing large numbers of seniors, because they think AI, a group of cheap post-grad vibe programmers, and a handful of seasoned seniors will equal the workforce they got rid of, when AI, while very good, is not ready to sustain this. It is destroying career progression for the industry, and even if/when they realise it was a mistake, it might already have devastated the industry by then.
I see the large tech companies tearing through the web, illegally sucking up anything they can access to pull into their ever more costly models, with zero regard for the effects on the economy, the cost to the servers they are hitting, or the environmental toll from the huge power draw that creating these models requires.
It's a nice idea, but private business cannot be trusted to do this right, we're seeing how to do it wrong, live before our eyes.
And the whole AI industry is holding up the stock market, while AI has historically always run the hype cycle and crashed into an AI winter. Stock markets do crash after the billions pumped into a sector suddenly turn out to be not worth as much. Almost none of these AI companies turn a profit, and they have no prospect of becoming profitable. It's when everybody starts yelling that this time it's different that things really become dangerous.
have no prospect of becoming profitable
There's a real twist here in regards to OpenAI.
They have some kind of weird corporate structure where OpenAI is a non-profit that owns a for-profit arm. But the deal they have with SoftBank is that they have to transition to a for-profit by the end of the year or they lose out on the $40 billion SoftBank invested. If they don't manage to do that, SoftBank can withhold something like $20B of the $40B, which would be catastrophic for OpenAI. Transitioning to a for-profit is not something that can realistically be done by the end of the year, even if everybody agreed on the transition, and key people don't agree on it.
The whole bubble is going to pop soon, IMO.
Yep, exactly.
They knew the housing/real estate bubble would pop, as it currently is...
... So, they made one final gambit on AI as the last bubble, one that would magically become superintelligent and solve literally all problems.
This was never going to work, and is not working, because the underlying tech of LLMs has no real mechanism by which it could develop the kind of complex, critical, logical analysis / theorization / metacognition that isn't just a schizophrenic manic episode.
LLMs are fancy, inefficient autocomplete algos.
That's it.
They achieve a simulation of knowledge via consensus, not analytic review.
They can never be more intelligent than an average human with access to all the data they've ... mostly illegally stolen.
The entire bet was 'maybe superintelligence will somehow be an emergent property; just give it more data and compute power'.
And then they did that, and it didn't work.
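For what it's worth, the "autocomplete" framing is roughly accurate at the mechanical level: an LLM repeatedly predicts a likely next token and appends it. A toy sketch (my own illustrative example, using word bigrams instead of a learned neural model) shows the same generation loop:

```python
from collections import Counter, defaultdict

# Toy "autocomplete": predict the most frequent next word seen in training
# text, append it, and repeat. Real LLMs do this over subword tokens with a
# learned neural distribution, but the generation loop has the same shape.
corpus = "the cat sat on the mat and the cat sat".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(word, steps=3):
    out = [word]
    for _ in range(steps):
        options = bigrams[out[-1]]
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # greedy: take the likeliest
    return " ".join(out)

print(complete("the"))  # -> "the cat sat on"
```

Everything beyond this loop (attention, billions of parameters, sampling tricks) is about making that next-word guess better, not about adding a separate reasoning mechanism.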
It's a nice idea, but private business cannot be trusted to do this right, we're seeing how to do it wrong, live before our eyes.
You're right. It's the business model driving technological advancement in the 21st century that's flawed.
i see a silver lining.
i love IT but hate IT jobs; here's hoping the techbros just fucking destroy themselves.
tbf, now I think AI is just a tool... in 3 years it will be a really impactful problem
I 100% agree with you
I have to disagree that it's even a nice idea. The "idea" behind AI appears to be wanting a machine that thinks or works for you with (at least) the intelligence of a human being and no will or desires of its own. At its root, this is the same drive behind chattel slavery, which leads to a pretty inescapable conundrum: either AI is illusory marketing BS or it's the rebirth of one of the worst atrocities history has ever seen. Personally, hard pass on either one.
You nailed it, IMO. However, I would like a real artificial sentience of some sort, just to add to the beautiful variety of the universe. It does seem that many of my fellow humans just want chattel slaves, though. Which is saddening.
The problem isn't AI. The problem is Capitalism.
The problem is always Capitalism.
AI, Climate Change, rising fascism, all our problems are because of capitalism.
Wrong.
The problem is humans; the same things that happen under capitalism can (and would) happen under any other system, because humans are the ones who make these things happen or allow them to happen.
Problems would exist in any system, but not the same problems. Each system has its set of problems and challenges. Just look at history: problems change. Of course you can find analogies between problems, but their nature changes with our systems. Hunger, child mortality, pollution, having no free time, war, censorship, mass surveillance... these are not constant through history. They happen more or less depending on the social systems in place, which vary constantly.
While you aren't wrong about human nature, I'd say you're wrong about systems. How would the same thing happen under an anarchist system? Or under an actual communist (not Marxist-Leninist) system? Those account for human nature and aim to turn it against itself.
Can, would... and did. The list of environmental disasters in the Soviet Union is long and intense.
Rather, our problem is that we live in a world where the strongest survive, and the strongest does not mean the smartest... So, alas, we will always be in complete shit until we disappear.
The fittest survive. The problem is creating systems where the best fit are people who lack empathy and a moral code.
A better solution would be selecting world leaders from the population at random.
Lots of AI is technologically interesting and has tons of potential, but this kind of chatbot and image/video generation stuff we got now is just dumb.
I firmly believe we won't get most of the interesting, "good" AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work with potential, by throwing more and more hardware and money at LLMs and generative AI models, because they don't understand the technology and see it as a way to get rich and powerful quickly.
AI is good and cheap now because businesses are funding it at a loss, so not sure what you mean here.
The problem is that it's cheap, so that anyone can make whatever they want and most people make low quality slop, hence why it's not "good" in your eyes.
Making a cheap or efficient AI doesn't help the end user in any way.
I don't know if the current AI phase is a bubble, but I agree with you that if it were a bubble and burst, it wouldn't somehow stop or end AI; it would cause a new wave of innovation instead.
I've seen many AI opponents imply otherwise. When the dotcom bubble burst, the internet didn't exactly die.
I find it very funny how just a mere mention of the two letters A and I will cause some people to seethe and fume, and go on rants about how much they hate AI, like a conservative upon seeing the word "pronouns."
One of these topics is about class consciousness; the other is about human rights.
An AI is not a person.
Someone with they/them pronouns is a person.
They have no business being compared to one another!
It's a comparison of people, not of subjects. In becoming blind with rage upon seeing the letters A and I you act the same as a conservative person seeing the word "pronouns."
Distributed platform owned by no one founded by people who support individual control of data and content access
Majority of users are proponents of owning what one makes and supporting those who create art and entertainment
AI industry shits on above comments by harvesting private data and creative work without consent or compensation, along with being a money, energy, and attention tar pit
Buddy, do you know what you're here for?
EDIT: removed bot accusation, forgot to check user history
Or are you yet another bot lost in the shuffle?
Yes, good job, anybody with opinions you don't like is a bot.
This wasn't even a pro-AI post; it was just pointing out that even the most facile "AI bad, applause please" stuff will get massively upvoted.
Yes, good job, anybody with opinions you don't like is a bot.
I fucking knew it!
Yeah, I guess that was a bit too far, posted before I checked the user history or really gave it time to sit in my head.
Still, this kind of meme is usually used to imply that the comment is just a trend rather than a legitimate statement.
Maybe there's some truth to it then. Have you considered that possibility?
It's true. We can have a nuanced view. I'm just so fucking sick of the paid-off media hyping this shit, and normies thinking it's the best thing ever when they know NOTHING about it. And the absolute blind trust and corpo worship make me physically ill.
How dare people not like the automatic bullshit machine pushed down their throats...
Seriously, generative AI's accomplishments are:
One could have said many of the same things about a lot of new technologies.
The Internet, Nuclear, Rockets, Airplanes etc.
Any new disruptive technology comes with drawbacks and can be used for evil.
But that doesn't mean it's all bad, or that it doesn't have its uses.
Give me one real world use that is worth the downside.
As a dev, I can already tell you it's not coding or anything around code. Projects get spammed with low-quality, nonsensical bug reports; AI-generated code rarely works and doesn't integrate well (on top of pushing all the work onto the reviewer, which is already the hardest part of coding); and AI-written documentation is riddled with errors and is not legible.
And even if AI were remotely good at something, it's still the equivalent of a microwave trying to replace the entire restaurant kitchen.
Of those, only the internet was turned loose on an unsuspecting public, and they had decades of the faucet slowly being opened, to prepare.
Can you imagine if, after WW2, Wernher von Braun came to the USA and then just, like... gave every man, woman, and child a rocket, with no training? Good and evil wouldn't even come into it; it'd be chaos and destruction.
Imagine if every household got a nuclear reactor to power it, but none of the people in the household got any training in how to care for it.
It's not a matter of good and evil, it's a matter of harm.
We should ban computers since they are making mass surveillance easier. /s
We should allow lead in paint, it's easier to use /s
You are deliberately missing my point, which is: gen AI has an enormous number of downsides and no real-world use.
Not all AI is bad. But there’s enough widespread AI that’s helping cut jobs, spreading misinformation (or in some cases, actual propaganda), creating deepfakes, etc, that in many people’s eyes, it paints a bad picture of AI overall. I also don’t trust AI because it’s almost exclusively owned by far right billionaires.
Machines replacing people is not a bad thing if they can actually perform the same or better; the solution to unemployment would be Universal Basic Income.
Unfortunately, UBI is just one solution to unemployment. Another solution (and the one apparently preferred by the billionaire rulers of this planet) is letting the unemployed rot and die.
Yeah, that would be the solution, but it's never happening.
For labor people don't like doing, sure. I can't imagine replacing a friend of mine with a conversation machine that performs the same or better, though.
I personally think of AI as a tool, what matters is how you use it. I like to think of it like a hammer. You could use a hammer to build a house, or you could smash someone's skull in with it. But no one's putting the hammer in jail.
Seriously, the AI hate gets old fast. Like you said, it's a tool; get over it, people.
Yeah, except it's a tool that most people don't know how to use but everyone can use, leading to environmental harm, a rapid loss of media literacy, and a huge increase in wealth inequality due to turmoil in the job market.
So... It's not a good tool for the average layperson to be using.
Stop drinking the Kool-Aid, bro. Think about these statements critically for a second. Environmental harm? Sure. I hope you're a vegan as well.
Loss of media literacy: what does this even mean? People are doing things the easy way instead of the hard way? Yes, of course cutting corners is bad, but the problem is the conditions that lead to a person choosing to cut corners; the problem is the demand for maximum efficiency at any cost, for top numbers. AI is making a problem evident, not causing it. If you're home on a Friday after your second shift of the day, fuck yeah you want to do things easy and fast. Literacy what? Just let me watch something funny.
Do you feel you've become more stupid? Do you think it's possible? Why would other people, who are just like you, be these puppets to be brainwashed by the evil machine?
Ask yourself: how are people measuring intelligence? Creativity? How many people were in these studies, and who funded them? If we had the measuring instrument needed to actually make categorizations like "people are losing intelligence," psychologists wouldn't still be arguing over the exact definition of intelligence.
Stop thinking of AI as a boogeyman inside people's heads. It is a machine. People use the machine to achieve a mundane goal; that doesn't mean the machine created the goal or is responsible for everything wrong with humanity.
Huge increase in inequality? What? Brother, AI is a machine. It is the robber barons who are exploiting you and all of the working class to get obscenely rich. AI is the tool they're using. AI can't be held accountable. AI has no will. AI is a tool. It is people who are increasing inequality. It is the system held in place by these people that rewards exploitation and encourages us to look at the evil machine instead. And don't even use it; the less you know, the better. If you never engage with AI technology, you'll believe everything I say about how evil it is.
“Guns don’t kill people, people kill people”
Edit:
Controversial reply, apparently, but this is literally part of the script to a Philosophy Tube video (relevant part is 8:40 - 20:10)
We sometimes think that technology is essentially neutral. It can have good or bad effects, and it might be really important who controls it. But a tool, many people like to think, is just a tool. "Guns don't kill people, people do." But some philosophers have argued that technology can have values built into it that we may not realise.
...
The philosopher Don Ihde says tech can open or close possibilities. It's not just about its function or who controls it. He says technology can provide a framework for action.
...
Martin Heidegger was a student of Husserl's, and he wrote about the ways that we experience the world when we use a piece of technology. His most famous example was a hammer. He said when you use one you don't even think about the hammer. You focus on the nail. The hammer almost disappears in your experience. And you just focus on the task that needs to be performed.
Another example might be a keyboard. Once you get proficient at typing, you almost stop experiencing the keyboard. Instead, your primary experience is just of the words that you're typing on the screen. It's only when it breaks or it doesn't do what we want it to do, that it really becomes visible as a piece of technology. The rest of the time it's just the medium through which we experience the world.
Heidegger talks about technology withdrawing from our attention. Others say that technology becomes transparent. We don't experience it. We experience the world through it. Heidegger says that technology comes with its own way of seeing.
...
Now some of you are looking at me like "Bull sh*t. A person using a hammer is just a person using a hammer!" But there might actually be some evidence from neurology to support this.
If you give a monkey a rake that it has to use to reach a piece of food, then the neurons in its brain that fire when there's a visual stimulus near its hand start firing when there's a stimulus near the end of the rake, too! The monkey's brain extends its sense of the monkey body to include the tool!
And now here's the final step. The philosopher Bruno Latour says that when this happens, when the technology becomes transparent enough to get incorporated into our sense of self and our experience of the world, a new compound entity is formed.
A person using a hammer is actually a new subject with its own way of seeing - 'hammerman.' That's how technology provides a framework for action and being. Rake + monkey = rakemonkey. Makeup + girl is makeupgirl, and makeupgirl experiences the world differently, has a different kind of subjectivity because the tech lends us its way of seeing.
You think guns don't kill people, people do? Well, gun + man creates a new entity with new possibilities for experience and action - gunman!
So if we're onto something here with this idea that tech can withdraw from our attention and in so doing create new subjects with new ways of seeing, then it makes sense to ask when a new piece of technology comes along, what kind of people will this turn us into.
I thought that we were pretty solidly past the idea that anything is “just a tool” after seeing Twitler scramble Grok’s innards to advance his personal politics.
Like, if you still had any lingering belief that AI is “like a hammer”, that really should’ve extinguished it.
But I guess some people see that as an aberrant misuse of AI, and not an indication that all AI has an agenda baked into it, even if it’s more subtle.
My skull-crushing hammer that is made to crush skulls and nothing else doesn't crush skulls, people crush skulls
In fact, if more people had skull-crushing hammers in their homes, I'm sure that would lead to a reduction in the number of skull-crushings; the only thing that can stop a bad guy with a skull-crushing hammer is a good guy with a skull-crushing hammer.
Guns don’t kill people. People with guns kill people.
Ftfy
Bad faith comparison.
The reason we can argue for banning guns and not hammers is specifically because guns are meant to hurt people. That's literally their only use. Hammers have a variety of uses and hurting people is definitely not the primary one.
AI is a tool, not a weapon. This is kind of melodramatic.
We once played this game with friends where you get a word stuck on your forehead and you have to guess what you are.
One guy got C4 (as in the explosive) to guess, and he failed. I remember that we had to agree with each other on whether C4 is or is not a weapon. The main idea was that explosives are comparatively rarely used in actual killing, as opposed to other things like mining and such. A parallel question was: is a knife a weapon?
But ultimately we agreed that C4 is not a weapon. It was not invented primarily to kill or injure, as opposed to guns, which are only for killing or injuring.
Take guns away, and people will kill with literally anything else. But give easy access to guns, and people will kill with them. A gun is not a tool; it is a weapon by design.
Yet gun control works.
Same idea.
A hammer doesn't consume exorbitant amounts of power and water.
What about self hosting? I can run a local GenAI on my gaming PC with relative ease. This isn't consuming mass amounts of power.
Do you think hammers grow out of the ground? Or that they magically spawn the building materials they work on?
Everything we do has a cost. We should definitely strive for efficiency and responsible use of resources. But to use this as an excuse, while you read this on a device made of metals mined by children, is pretty hypocritical.
No consumption is ethical under capitalism; take responsibility instead for what you do with that consumption.
Neither does an algorithm.
Extreme oversimplification. Hammers don't kill the planet by simply existing.
And neither does AI? The massive data centers are having negative impacts on local economies, resources and the environment.
Just like a massive hammer factory, mines for the metals, logging for handles and manufacturing for all the chemicals, paints and varnishes have a negative environmental impact.
Saying something kills the planet by existing is an extreme hyperbole.
I'm a lot more sick of the word 'slop' than I am of AI. Please, when you criticize AI, form an original thought next time.
Yes! Will people stop with their sloppy criticisms?
AI is bad and people who use it should feel bad.
When people say this they are usually talking about a very specific sort of generative LLM using unsupervised learning.
AI is a very broad field with great potential, the improvements in cancer screening alone could save millions of lives over the coming decades. At the core it's just math, and the equations have been in use for almost as long as we've had computers. It's no more good or bad than calculus or trigonometry.
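The "at its core it's just math" point can be made concrete with a toy sketch (my own illustrative example, not from the thread): a single artificial neuron is nothing but a weighted sum and a threshold, arithmetic that long predates the hype.

```python
# A single artificial neuron: a weighted sum plus a threshold.
# The same basic math (dot products and simple nonlinearities) runs through
# everything from 1950s perceptrons to modern deep networks.

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Hand-picked weights make this neuron compute logical AND:
weights, bias = [1.0, 1.0], -1.5
print([neuron([a, b], weights, bias) for a in (0, 1) for b in (0, 1)])
# -> [0, 0, 0, 1]
```

Training just automates the choice of `weights` and `bias` from data; the arithmetic itself is no more good or evil than any other equation.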
No hope commenting like this; just get ready to be downvoted for no reason. People use the wrong terms and normalize it.
So cancer cell detection is now bad and those doing it should feel bad?
The world isn't black and white.
Don't be obtuse, you walnut. I'm obviously not equating medical technology with 12-fingered anime girls and plagiarism.
Me to burn victims: "You know, without fire, we couldn't grill meat. Right? You should think more about what you say."
So is eating meat, flying, gaming, going on holiday; basically, if you exist, you should feel bad.
How does one feel bad?
Would love an explanation on how I'm in the wrong on reducing my work week from 40 hours to 15 using AI.
Existing in a predatory capitalist system and putting the blame on those who utilize available tools to reduce its predatory nature is insane.
Because when your employer catches on, they'll bring you back up to 40 anyway.
And probably because those 15 hours now produce shit quality.
You mean a subset of LLMs that are trained on bad human behaviours
Lots of pro AI astroturfing and whataboutisms in these comments... 🤢
"Anyone I disagree with must be a bot or government agent"
Only because most of the people here don't have the faintest idea what AI is or how it works.
You'd also get a ton of upvotes for saying "Trump bad", but you wouldn't be wrong. Shit is just shit.
He's made the world wake up to the fact that they can't trust the US, so that can be seen as good?
AI isn't that black and white, just like any big technology it can be used for good or bad.
Just like Airplanes
It would still be a performative post though.
What we need is a circlejerk@ community
It is true though, AI bad
"B-But you don't understand, AI DESTROYS le epic self employed artists and their bottom line! Art is a sacred thing that we all do for fun and not for work, therefore AI that automates the process is EVIL!"
Actual thought process of some people
AI does do this to a subsection. Claiming that everyone is overreacting is just as stupid, and lacks the same amount of nuance, as claiming AI is going to ruin all self-employed artists.
Also, this ignores AI companies blatantly stealing copyrighted material to feed their AI. As an artist, I'd rather not have some randos steal my stuff so some mid-tier corporation can generate their logos and advertisements without paying for it.
I'm not claiming that everyone is overreacting, but pointing out how stupid a lot of anti-AI arguments are. Artists drawing art for a living get painted not as having a job, but as doing some sort of fun recreational activity, ignoring that artists have to do commissions or draw whatever's popular with the fan base that pays their bills via Patreon, which in other words is the process of commodifying oneself, aka work.
Also, this ignores AI companies blatantly stealing copyrighted material to feed their AI.
I'm not saying that you're necessarily one of those people, but this argument often pops up in leftist spaces that were previously anti-IP, which is a bit hypocritical. One moment people are against intellectual property, calling it abusable, restrictive, etc., but once small artists start getting attacked, then the concept has to be defended.
As an artist
womp womp you'll have to get a real job now /s
Um, have you tried practicing? Just draw a stick figure or hire an artist, this will easily solve all of your problems. You're welcome.
I’m sure AI would be great if we actually had it. LLMs are not AI.
They factually are. ML is AI. I think you mean AGI maybe?
GTFO with your nuanced and engaged understanding. This is Lemmy.
AI > ML > DL > GenAI.
AI is the generic umbrella term; within it sit Machine Learning, then Deep Learning, then Generative AI, which is where LLMs live.
The hate against AI is hilariously misinformed.
Could be, I get confused by the alphabet soup of acronyms. I mean these glorified predictive text machines that for some reason marketers are trying to push as having some sort of ability to "think".
An AI could be demonstrably 30 times more accurate than a human at diagnosing a cancer on a scan, and Lemmy would still shit on it because it's an AI :D.
On Reddit, I knew that the subject of gun control was not allowed to be talked about. Now I've embraced Lemmy, and I can't talk about AI no matter what. It's just a taboo subject. Apparently some people want to reject the tech entirely and think it will somehow just magically stay out of their lives. A very naive dream.
So yeah Lemmy. Refuse the conversation, look away, I'm sure it will be fine.
An AI could be demonstrably 30 times more accurate than a human at diagnosing a cancer on a scan, and Lemmy would still shit on it because it's an AI
I think this is an exaggeration.
Legitimately useful applications, like in the medical field, are actually brought up as examples of the "right kind" of use case for this technology.
Over and over again.
It's kind of annoying, because both the haters of commercial LLMs in All The Things and defenders of the same will bring up these exact same use cases as examples of good AI use.
May I ask for a link? I never saw that in the communities I follow. Never. Or at least not above 5 downvotes.
Think about your argument for a minute.
I know you think this will harm you and everyone you know, but it'll be much better if you just stay quiet instead of vocally opposing it
When has that ever been good advice?
So everything related to AI is negative?
If so, do you understand why we can't have any conversation on the subject?
I'd welcome actual AI. What is peddled everyday as "AI" is just marketing bullshit. There's no intelligence in it. Language shapes perception and we should take those words back and use them according to their original and inherent meaning. LLMs are not AI. Stable diffusion is not AI. Neural networks trained for a singular task are not AI.
Define "intelligence"
https://en.m.wikipedia.org/wiki/Intelligence
Take your pick of any definition that isn't a recent one coined by computer scientists or mathematicians to call stuff intelligent that clearly isn't. By some modern marketing takes, I developed AI 20 years ago (optimizing search problems for agentic systems); it's just that my peers and I weren't stupid enough to call the results intelligent.
There are valid reasons for disliking AI (rather, how it’s being used) and I’ll upvote when a relevant, informed argument is made against it. Otherwise I’ll mentally filter out the low-effort comments that just say “fuck AI” with dozens of upvotes.
+1
lemmycirclejerk ☺️
I was laughing today seeing the same users who have been calling AI a bullshit machine posting articles like "grok claims this happened". Very funny how quickly people switch up when it aligns with them.
That would seem hypocritical if you're completely blind to poetic irony, yes.
It doesn't seem hypocritical. It is.
Wouldn't posting articles about AI making up bullshit support their claim that AI makes up bullshit?
You would be right if they weren't posting the article using grok as the source for the main claim.
The articles were "grok claims it was suspended from X for accusing Israel of genocide". That's fine. It is hypocritical when you post that article to every news, politics, and tech community. There were a few communities where people commented that grok is full of shit, but way too many communities treated it as if it were solid evidence.
Reddit too
Reminder that Lemmy skews older, and older people are usually more conservative about things. Sure, politically, Lemmy leans left, but technologically, Lemmy is very conservative.
Like for example, you see people on Lemmy say they'll switch to a dumbphone, but that's probably even more insecure, and they could've just used Lineage OS or something and it would be far more private.
Why does being progressive and into tech mean being into AI all of a sudden? It has never meant that; it's the conservative mfs pushing AI for a reason. You think any sort of powerful AI is about to be open source and usable by the people? Not expensive af to run, with hella regulations behind who can use it?
I'm progressive and into tech, but I don't like fking generative AI; it's the worst part of tech to me. AI can be great in the medical field. It can be great as a supplementary tool, but mfs don't use it that way. They just wanna sit on their asses and get rich off other people's work.
You think any sort of powerful ai is about to be open source and usable by the ppl?
There's a huge open source community running local models!
Yes, any further questions?
Why is the other arrow also pointing up?
Because I used AI slop to create this shitpost lol. So naturally it would make mistakes.
There are other mistakes in the image too
Makes for a confusing cartoon. I browsed too many of the comments thinking everyone knew what 3251 means except me. I thought a route 3252 road sign fell on him.
You used AI to make a stickfigure comic? Damn.
Apple = bad is also an instant karma farm 😁
I prefer the fine vintage of a M$ = bad post, myself.
Or perhaps even a spicy little Ubuntu = bad post.
WTF is "AI"? You mean LLM?
Edit: lol, lmao even
Yeah. I hate the naming of it too. It's not AI in the sense how science fiction saw it. History repeats itself in the name of marketing. I'm still very annoyed with these marketers destroying the term "hover board".
AI includes a lot of things
The way ghosts in pacman chase you is AI
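For what it's worth, the classic ghost "chase" behaviour really is just a greedy rule, no learning involved. A minimal sketch (the grid, coordinates, and tie-breaking here are illustrative assumptions, not the arcade original):

```python
# Sketch of a Pac-Man-style "chase" rule: at each step the ghost picks
# whichever legal neighbouring tile is closest to Pac-Man's tile.
# No search, no learning -- a greedy heuristic, yet it's called "AI".

def chase_step(ghost, pacman, walls):
    """Return the ghost's next tile: the legal neighbour nearest Pac-Man."""
    gx, gy = ghost
    moves = [(gx + 1, gy), (gx - 1, gy), (gx, gy + 1), (gx, gy - 1)]
    legal = [m for m in moves if m not in walls]
    # Greedy choice: minimise squared Euclidean distance to the target tile.
    return min(legal, key=lambda m: (m[0] - pacman[0]) ** 2 + (m[1] - pacman[1]) ** 2)

if __name__ == "__main__":
    walls = {(1, 1)}
    print(chase_step((0, 0), (3, 0), walls))  # → (1, 0): moves right, toward Pac-Man
```

The arcade game layered a few twists on top (each ghost targets a different tile), but the core decision is this one-liner.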
Either Al Capone or Weird Al.
Peak misunderstanding between AI and LLM
The LLM shills have made "AI" refer exclusively to LLMs. Honestly the best ad campaign ever.
The LLM shills have made "AI" refer exclusively to LLMs.
Yes, I agree, and it's unacceptable to me. Now most people here are falling into the same hole. I'm not here to promote, support, or stand with LLMs or gen-AI; I want to correct what is wrong. You can hate something, but please be objective and rational.
Why the hell are you being downvoted? You are completely right.
People will look back at this and "hover boards" and will think "are they stupid!?"
Mislabeling a product isn't great marketing, it's false advertisement.
IDK LMAO, that's what I really hate about Reddit/Lemmy: the voting system. People downvote but don't say where they think I'm wrong. I mean, at least argue; say your (supposedly harmless) opinion out loud. I even added a disclaimer that I don't promote LLMs and such. I don't really care either way; I stand with correctness and do what I can to correct what is wrong. I totally agree with @sentient_loom@sh.itjust.works tho.
AI is an umbrella term that holds many things. We have been referring to simple pathfinding algorithms in video games as AI for two decades; LLMs are AI.
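The pathfinding that has shipped in games for decades under the "AI" label is plain graph search. A minimal breadth-first-search sketch on a made-up grid (coordinates and walls are invented for illustration):

```python
from collections import deque

# Breadth-first search on a grid: the bread-and-butter "game AI" technique.
# Returns the shortest path as a list of tiles, or None if unreachable.

def bfs_path(start, goal, walls, width, height):
    frontier = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            # Reconstruct the path by walking predecessors back to start.
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls and nxt not in came_from):
                came_from[nxt] = cur
                frontier.append(nxt)
    return None

if __name__ == "__main__":
    # A wall at (1, 0) forces a detour through row 1.
    print(bfs_path((0, 0), (2, 0), {(1, 0)}, 3, 2))
```

Real games usually swap in A* for speed, but the principle is identical: deterministic search, marketed as "intelligence" since long before LLMs.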
Not all that glitters is gold. 🤷
Remember when the internet first dropped and got the same treatment AI does now? People up in arms about the internet's influence on young people and kids.
This all seems really familiar.
Honestly low-key technophobia has always been a major sentiment in otherwise tech-focused parts of the internet and it has always fascinated me.
The new thing about this technology is that it's taking our jobs away without creating any new ones. I fear the day some stupid higher-up decides that a chatbot can do a better job than me.
Nobody is going to libraries anymore, The internet is killing books and jobs 🤬
What kind of selfish, emotionless psychopath do you have to be to legitimately think that libraries being unused, forgotten, and closed is a good thing?
You ever thought about this: maybe if you visited your library in person more often, you'd actually have more friends.
I mean they were right about the internet. The corporations turned most of it into a cluster of competing megamalls that make people anxious while selling them things.
Never mind that even just opening port 443 anywhere in the world will expose you to roving bands of OC scrapers, which feed the next evolution of corporate power.
It's not like a bunch of basement nerds made LLMs and now corps are trying to muscle in (like with the internet). This new technology is theirs, from its R&D to now its aggressive rollout across all sectors of the Internet they control.
I don't have much hope for the commercial AI or LLM services that are available to most consumers. I think they're going to enshittify so bad that we're going to need a new word for it.
Arguably, they may have been right, given the past decade or so.
AI is exactly as bad as mechanised weaving looms.
i’m pro-AI (with huuuuge caveats) but i disagree with this… AI reduces certain jobs in a similar way, but it also enables large scale manipulation and fucks with our thought processes on a large scale
i’d say it’s like if a mechanised weaving loom also invented the concept of disinformation and propaganda
.. but also, mechanised weaving looms affected a single industry; modern ML has the potential to affect the large majority of people. It's on a different scale than the disruption of the textile industry.
Agree it's on a different scale (everything is relative to 200 years ago).
One of the main "benefits" of mechanised factory machinery in the early 1800s was that it shifted the demand side of labour such that capitalists had far more control over it. I reckon that counts as a kind of large-scale manipulation (but yeah, probably not as pervasive across other domains of life).
Where are you seeing more than 200 upvotes on any post?
Does this count? https://sopuli.xyz/post/1138547
ah, mid 2023, the honeymoon times
The front page?
Literally this post lol
Doh. I'm always sorted to new, so things don't have this many votes. I should revisit every so often
It's a response to the overwhelming FOMO "ngmi" shilling on every other platform.
Back in my day, PAC-Man ghosts stayed perfectly still, exhibiting no behaviour at all and that’s how we Lemmy people liked it!
Pfft, as if a post on lemmy would ever get more than 1K upvotes.
True. Now shut up and take my upvote! No need for arguments; all has already been said.
Yes
It feels like the author of the post thinks superficially and doesn’t delve into the essence. Although, considering that killing children can also be funny, I don't even know what to laugh at and what to cry about. It's hard to understand someone else's humor sometimes.
But like... Good.
I love your username. Me too.
Thanks!! You're the first one to notice.
If you don't know about !southafrica@piefed.social yet, go have a look around there. I'm trying to build a bit of a community there.
I'll check it out, thanks for the suggestion! I emigrated from South Africa when I was 4, so I can speak it fairly well, but my reading and writing comprehension is nonexistent, so I use Google Translate to help lol.
No but see, we need machines to do all the art for us, and averaging machines to tell us what is and isn't true!
I mean, it is objectively bad for life. Throwing away millions to billions of gallons of water all so you can get some dubious coding advice.