AI absolutely has the potential to enable great things that people want, but that’s completely outweighed by the way companies are developing it purely for profit and to eliminate jobs
Capitalism can ruin anything, but that doesn’t make the things it ruins intrinsically bad
I used to be so excited about tech announcements. Like...I should be pumped for AI stuff. Now I immediately start thinking about how they are going to use the thing to turn a profit by screwing us over. Can they harvest data with it? Can they charge a subscription for that? I'm getting so jaded.
on the modern internet
when you search for knowledge or try to connect with others on social media or consume content as a form of escapism from this hellhole what do you get?
you get fed ads, sponsored posts, scams, algorithmically curated content to optimize your engagement, and now also AI drivel that's inaccurate and sometimes dangerous.
even off the internet you cannot escape it. there are cameras, microphones, and other sensors everywhere tracking your every movement to anticipate your thoughts and desires to try to sell you something or even manipulate the costs of what you're already planning to buy to optimize profit.
i feel that the modern technological landscape is slowly driving me, a once tech supporter and enthusiast, toward some sort of neo-luddism to get away from it all.
and what's it all boil down to?
money.
it's always about the money.
AI absolutely has the potential to enable great things that people want
The current implementation is not going to enable great things: very large data sets obtained through web scraping, very aggressive marketing of these services such that the results pollute existing online data sets, and the refusal to tag AI-generated content as such, which makes filtering it out virtually impossible.
This is just spam with the dial turned up to 11.
Capitalism can ruin anything
There's definitely an argument that privatization and profit motive have driven the enormous amounts of waste and digital filth being churned out at high speeds. But I might go farther and say there are very specific bad actors - people who go beyond simple profiteering and genuinely have megalomaniacal delusions of AGI ruling the world - who are rushing this shit out in as clumsy and haphazard a manner as possible. We've also got people using AI as a culpability shield (Israel's got Lavender, and I'm sure the US, China, and a few other big military superpowers have their equivalents).
This isn't just capitalism seeking ever-higher rents. It's delusional autocrats believing they can leverage sophisticated algorithms to dictate the life and death of billions of their peers.
These are ticking time bombs. They aren't well regulated or particularly well-understood. They aren't "intelligent", either. Just exceptionally complex. That complexity is both what makes them beguiling and dangerous.
I was gonna say, of all the things to be upset about with AI, "not enabling things that I want" isn't one of them. I use it all the time and find it incredibly useful for boring tasks I don't care about doing myself. Just today I had to write some repetitive unit tests, and it saved me a bunch of time writing syntax so I could focus on the logic. It sounds like OOP either hasn't used it much or doesn't have a use for it.
I think the problem is that, for consumers, it's most often being used for generating spammy ad-revenue sites with plagiarized, rewritten content. Or the ads themselves, in the case of image generation.
I’m glad it’s working for writing unit tests, though it would seem that better build/debug systems could be designed to eliminate repetitive coding like that by now. But I quit web dev about 5 years ago, and don’t intend to go back.
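For what it's worth, there are already non-AI ways to cut down on repetitive test code like the kind mentioned above: table-driven (parameterized) tests collapse many near-identical test functions into one loop over cases. A minimal sketch in plain Python, with a hypothetical `slugify()` function standing in for the code under test:

```python
# Table-driven testing: a single loop over (input, expected) pairs
# replaces many near-identical test functions.

def slugify(title):
    # Hypothetical function under test: lowercase, collapse whitespace to hyphens.
    return "-".join(title.lower().split())

# Each tuple is one test case: (raw input, expected output).
CASES = [
    ("Hello World", "hello-world"),
    ("  spaces   everywhere ", "spaces-everywhere"),
    ("already-slugged", "already-slugged"),
]

def run_cases():
    # Collect failures instead of stopping at the first one,
    # so a single run reports every broken case.
    failures = []
    for raw, expected in CASES:
        got = slugify(raw)
        if got != expected:
            failures.append((raw, got, expected))
    return failures
```

Test frameworks expose the same idea with less ceremony (e.g. pytest's `@pytest.mark.parametrize`), which is the kind of "better build/debug system" support that already exists.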
that doesn’t make the things it ruins intrinsically bad
Hmmm, tricky. See for example https://thenewinquiry.com/super-position/ where capitalism proves very good at transforming anything and everything, including culture in this example, to preserve itself while making more money for the few. It might not change good things to bad once they already exist, but it can gradually replace good things with new bad things while making them look like the good old ones.
"When I was young, they told me that AI would do the menial labor so that we could spend more time doing the things we love, like making music, painting, and writing poetry. Today, the AI makes music, paints pictures, and writes poetry so that I can work longer hours at my menial labor job."
AI bros are like pro-lifers, straw-manning an argument nobody is making.
The world is a wildly different place now, and the teams developing them back then were headed by people motivated by reasons other than extracting as much money out of the world at any cost.
The situations are not nearly comparable.
Beyond that, very few people had an issue with AI as fuzzy logic and machine learning. Those techniques were already in wide use all over the place to great success.
The term has been co-opted by the generative, largely LLM folks to oversell the product they are offering as having some form of intelligence. They then pivot to marketing it as a solution to the problem of having to pay people to talk, write, or create visual or audio media.
Generally, people aren't against using AI to simulate countless permutations of motorcycle frame designs to help discover the optimal one. They're against the wholesale reduction of soft-skill and art/content-creation jobs by replacing people with tools that are definitively not fit for the task.
Pushback against non-generative AI, such as self-driving cars, stems from general fatigue at being sold something not fit for the task and being told that calling it out is being against a hypothetical future.
Yeah, and people said the same thing about NFTs, and now they barely exist. If there were uses for AI outside of very specific things, I'd agree with you. But the uses for AI are very basic compared to the Internet.
Friend Computer, I just want you to know that I actually love my 24/7/365 integrated surveillance state. The internet is an unmitigated good and anyone who says otherwise should be flagged as such and disposed of.
Absolutely! I hate cool and useful shit like OCR (optical character recognition, for anyone who doesn't know) getting mixed in with content-theft generative AI.
AI tools aren’t inherently unethical, and even the ones that use models with data provenance concerns (e.g., a tool that uses Stable Diffusion models) aren’t any less ethical than many other things that we never think twice about. They certainly aren’t any less ethical than tools that use Google services (Google Analytics, Firebase, etc).
There are ethical concerns with many AI tools and with the creation of AI models. More importantly, there are ethical concerns with certain uses of AI tools. For example, I think that it is unethical for a company to reduce the number of artists they hire / commission because of AI. It’s unethical to create nonconsensual deepfakes, whether for pornography, propaganda, or fraud.
That said, while AI does have real energy costs, a lot of the comments I’ve read about AI’s energy usage are flat-out wrong.
Great things
Depends on whom you ask, but “Great” is such a subjective adjective here that it doesn’t make sense to consider it one way or the other.
things that people want
Obviously people want the things that AI tools create. If they didn’t, they wouldn’t use them.
well-meaning
Excuse me, Sam Altman is a stand-up guy and I will not have you besmirching his name /s
Honestly my main complaint with this line is the implication that the people behind non-AI tools are any more well-meaning. I’m sure some are, but I can say the same with regard to AI. And in either case, the engineers and testers and project managers and everyone actually implementing the technology and trying to earn a paycheck? They’re well-meaning, for the most part.
On the environmental question, AI tools are energy agnostic. If humans using electricity can be environmentally sustainable at all, so can AI. I suspect the energy requirements are going to drop drastically as more specialized hardware is developed.
There are already a lot of great things they can do in the category of generating content for enjoyment, both art-based and text-based. One of the most popular Twitch streamers uses a language model. Games and interactive experiences can be much more realistic and responsive now. As far as I can tell, a lot of people are benefiting from the ability to ask a question and get an expert-level answer on most topics that is correct at least half the time. And this is just the infancy of the technology.
As for well-meaning people: a lot of the people working on the technology are researchers and computer scientists who legitimately believe in its potential for good. Of course, there are also people who don't care and only want profit, but that's true of basically every company. So you could accurately say it's being worked on by every type of person in that spectrum, but you can't deny that well-meaning people are creating it.
but it allows some people to type out one-liners and generate massive blobs of text in the same time they could be doing their jobs. /codecraft