
"AI" Is Not a Problem and Is a Good Thing, but Is Being Abused by the Actual Problem

“AI” can’t replace people in any way. The most it could possibly do is assist a person in performing a task, because it is a tool which does not actually have intelligence or awareness. This may be up for debate, but I am firmly on the side of punting Wilson the volleyball and cautioning people not to anthropomorphize a program whose complexity is minuscule compared to even what little we yet know of an actual human brain. What the program actually does is use a number of algorithms to decode mathematically what the user meant by their request and encode a response by referencing values in a way calculated to be most likely understood by the average person, or to combine a number of images according to the parameters it has interpreted. It’s no more intelligent than a search engine. It is a search engine.

Writers can’t be replaced by “AI.” Have you, the person reading these words, ever read a story written by an “AI” program and been genuinely interested in reading more because of how engaging it was? Neither have I. The “AI” can combine input in such a way that it’s coherent, and it can even be unintentionally funny in the way it mashes things together without understanding them, but the unthinking, unfeeling machine does not know and could never guess what a person would actually be interested in reading or want to watch performed. Even the laziest bestseller or TV script composed of as many cliches as the author can think of until they hit the word limit to submit a manuscript contains more artistry, as the author understands why cliches are appealing and is able to apply them in such a way as to entertain a casual audience not yet familiar enough with them to be tired of them. An “AI” program can also combine cliches, but meaninglessly and without intention. It’s trash and no one actually likes it. Reading at least 1000 words in a row of any “AI” story makes a better argument against its capabilities than I ever could.

Plastic artists can’t be replaced. Painters, sculptors, digital artists, doodlers, etc., no matter their skill level, can produce an image infinitely more interesting than anything an “AI” could possibly produce. When I read about plastic artists being intimidated by “AI” art or despairing about having a skill which is no longer relevant, I truly can’t imagine why. If the goal is to see an image briefly and walk or scroll past it, one image is probably as good as any other, but is this really what artists hope for from an audience? When you actually look at an image with the understanding that every component part of it was applied by an artist whose methodology and inspiration you may or may not know, a human mind provides fertile content to react to while an “AI” can’t. Why did the AI compose the image in this way? Why did the AI include these elements? Why did the AI reference this historical style? What did the AI intend to say through this image? The only answer to these questions and to all other questions is that it scraped existing images and mindlessly created one from that data. Compare this to a child’s drawing of a cat. You can tell a lot about the child, the details they find most significant about the cat, the attitude of the child toward the cat, and perhaps even learn something about how a cat exists in the world from another perspective that enriches your own. An artist constantly creating and developing their ability over a number of years packs all that experience and themselves into what they produce, so everything in what they produce has a depth of meaning whether they intend for it to or not. An artist who creates what they feel like without putting much deliberate thought into it is still manifesting a pure expression of themselves informed by their nature and experiences, which is relevant to every other human nature and experience viewing it. An “AI” can put something together that may look cool, but there is just nothing to it.

There are tons of other things I could get into which “AI” could never competently replace due to its nature, but I wanted to use art as an example of what I feel the actual concern about “AI” replacing people is. The problem with “AI” and all labor-saving technology of the last few centuries is not the technology itself but is actually Capitalism. Capitalism is when Capitalists, aka investors, utilize their Capital, aka money, as an investment with the sole purpose of yielding a return on that investment, aka making their money work for them to make more money. When a Capitalist invests in an enterprise, their only concern is that they will receive more money from investing in that enterprise than the money they invested. Capitalists care less about every other quality of an enterprise than whether it yields a return on their investment, because that is literally the only reason they would ever invest in a project; otherwise it would be a charitable donation and not capitalism. A business run by capitalists themselves, or hoping to appeal to capitalists for investment, is most concerned with maximizing profit through some combination of minimizing expenses and maximizing revenue.

“AI” can’t possibly replace a writer or a painter, but “AI” is replacing writers and painters in the market. This is not because “AI” can produce a better or even similar quality product, because it clearly can’t, but because it can produce an “acceptable” product at minimal labor cost, which contributes to profit on the expense side. The reason “AI” products are “acceptable” is not because consumers are rational actors in the marketplace, as the myth goes. It’s because of myriad factors of consumer behavior that I can’t reliably list here; despite over a century of direct study and numerous ancillary fields of study, our understanding of what actually drives consumer behavior remains limited, even with a massive amount of evidence for its many drivers.

When I was a teenager, fresh off of taking my AP Art History classes and really getting into the world of art which had opened up to me, I was really into going to art markets and talking to the artists about their work. I had many enjoyable conversations but one of them really stuck with me. He regretted his lot as a professional artist because it was clear that he loved art. There were two sides to his stall: one side was “Florida crap” and the other side was “Texas crap,” both of which he had a severe distaste for. He recounted to me that as a young man he painted billboards in the ’60s while getting into the scene more seriously. At that time he could have conversations with his patrons about what they wanted and produce meaningful work, which set him up with false expectations for where his career would head. He had glimpses of the business side at that time and described watching Andy Warhol, who was and is notorious for both criticizing and vastly benefiting from consumerism, at work. He and a number of art students waited in anticipation until the very late Warhol stumbled into the studio, too high to even stand up, while his assistants literally held him upright and printed the art with the machine as he slurred words and gestured. Almost literally printing money. From that experience and many others he slowly started to come to terms with the reality of art as a business. By the time I was speaking to him, most of his customers’ only concern, as he described it, was whether his paintings would match their couches. He painted for local decorating sensibilities, which themselves were not expressions of individual decorators but local magazine-derived design aesthetics which he could expect his Florida-style or Texas-style paintings to match. This was from the perspective of one non-capitalist just trying to survive in a market, but the same forces apply at the highest level. For this kind of consumer, interacting with several levels of market-manipulated consumer taste, an AI-generated painting would probably be suitable to complete their room the way it looks on Instagram.

Quality of work is only one of many factors involved in marketplace success, and its importance to profit is relative. Due to the quarterly nature of planning at the highest levels of the economy, that which is most profitable now is by orders of magnitude more valuable than what may be profitable in 10 years. Low-quality “AI” products could crap up a variety of fields as non-expert Capitalists and CEOs shift their money around in the hopes of minimizing expenses within the next few quarters. As far as the most well-known products backed by billions in investment and marketed so aggressively that everyone has to deal with them in one way or another, “AI” contributions being abused to replace entire professions could be a big problem. This, and enshittification in general caused by the same market forces, is part of the reason I’m getting back into FOSS and Open Source stuff made only by people who care about the project rather than the bottom line.

All that being said, “AI” can be a valuable tool to assist people in making higher quality work than they were capable of creating without it, and in creating work which is unprecedented. It’s interesting to go to museums and see the fairly mundane furniture of monarchs and nobility, a better quality version of which I could now find at an antique store thanks to the well-established advancement of technology. Similarly, I have used “AI” to cut a lot of grunt work out of my professional and artistic endeavors that would have slowed my progress to a halt, since the time and resources it demanded were often more than I had to devote. Even though “AI” gives bad info too often, it has vastly reduced the time it takes me to find quick answers to simple questions which I would otherwise have had to dig for. I’ve learned a lot about the specific things I want to know about spreadsheet formulas, GIMP tools, and programming syntax that I would otherwise have needed a second lifetime of time and energy to take a class on and practice, or else had to dig through piles of derisive input from individuals who are personally insulted that I don’t know better but still don’t answer my specific question. As a writing companion, I’ve used it to brainstorm and explore a number of sci-fi hypotheticals that no one other than myself is interested in considering or discussing (YET), giving me specific terms which I can then go on to research and get a real basis in. I’ve been generating image assets which I’ve been cutting up and re-contextualizing to create images meaningful to me, unlike anything I’ve ever seen. It’s an interesting tool whose capabilities are only now being explored, but it’s just a tool. It can’t do what we can do, but we can use it to do what we want to do.

26 comments
  • I have two criticisms of this view.

    The first is the distinction between "replacing humans" and "making humans more productive". I feel like there's a misunderstanding on why companies hire people. I don't hire 15 people to do one job because 15 is a magic number of people I have to hit. I hire 15 people because 14 people weren't keeping up and it was worth more to my business to hire another expensive human to get more work done. So if suddenly 5 people could do the work of 15, because people became 3x more efficient, I'd probably fire 10 people. I no longer need them, because these 5 get the job done. I made the humans more effective, but given that humans are a replacement for humans, I now don't need as many of those because I've replaced them with superhumans instead.

    If I'm lucky as a company I could possibly keep the same number of people and do 3x as much business overall, but this assumes all parts of my business, or at least the core part, increases at the same time. If my accounting department becomes 3x as efficient but I still have the same amount of work for them to do because accounting isn't the purpose of my business, then I'm probably going to let go some accountants because they're all sitting around idle most of the time.

    It used to be that a gang of 20 people would dig up a hole in the road, but now it's one dude with an excavator.

    The second thing is the assumption that AI art is being evaluated as art. We have this notion in our culture that artists all produce only the best novels and screenplays, and all art hangs in a gallery and people look at it and think about what the artist could have meant by this expression, etc. But that's virtually no one in the grand scheme of things. The fact that most people only know the names of a handful of "the most famous artists of all time", and that it's like 30 people on the whole earth, some of them dead, should mean something.

    Most writers write stuff like the text on an ad in a fishing magazine. Or fully internal corporate documents that are only seen by employees of that one company. Most visual artists draw icons for apps that never launch. Or the swoopy background for an article. Or did the book jacket for a book that sells 8 copies at a local tradeshow. If there's a commercial for chips, someone had to write it, someone had to direct it, someone had to storyboard it. And no one put it in a museum and pondered its expression of the human experience. Some people make their whole living on those terrible stock photographs of a diverse set of people all laughing and putting their hands into the middle to show they're a team.

    Even if every artist with a name that anyone knows is unaffected by this, that can still represent a massive loss of work for basically all creative professionals.

    You touched on some of these things but I think glossed over them too much. AI art may not replace "Art", but virtually no one makes money from "Art", and so it doesn't have to replace it for people to have no job left.

    • Thanks for giving me the chance to get radically philosophical, because I think we are coming at this thing from fundamentally different perspectives. I completely get addressing this practically considering the way things are in our world currently, but my argument here is more of a total system critique than a critique of one consequence of the system. The negative effect on workers of labor saving technology is not due to the nature of technology which makes things easier, but due to the nature of capitalism which punishes workers because technology makes things easier. To be perfectly clear, this is not due to malice or bad behavior, but people behaving rationally according to the system which they exist in. You are absolutely right that since labor is a huge line item on the P&L it is in the interest of the business to reduce that expense as much as possible in the interest of profit and the interest of the capitalist receiving a return on their investment. What I’m critiquing here is that this is so normalized that it can be seen as completely natural when it very much isn’t in an historical context.

      As a point of comparison, I'm going to set your examples one by one against rough equivalents in an agricultural village in 5000 BC, long before capitalism, when we as people were much the same as we are now in many ways but had less of a system to deal with.

      In a modern accounting department, what does an accountant want? Do they want to do accounting for its own sake, do they want the investors of their business to make as much money as possible, or do they want to receive compensation for their labor which can afford them a decent quality of life? If accounting software advances and puts them out of a job, why would they be upset about that according to the answer to the question above? Are they really upset that they have fewer opportunities to do manual accounting?

      In our ancient village, the villagers need to keep track of the amount of grain they have on hand so that everyone can be adequately fed, there is grain on hand for weather changes, and they don't waste their time and effort growing so much grain that it goes to waste. One villager is in charge of organizing the grain records, and since they can't know everything, they assign a number of villagers to keep track of specific trends. One day a villager invents a rudimentary writing system to keep track of the grain, eliminating the need for anyone but the record keeper to keep track of anything. Who is upset by this intrusive technology?

      In the nineteenth century a team of 20 people spend their days digging ditches for a living. Do they love to dig ditches, do they want the boss to make as much money as possible, or do they want to receive compensation for their labor which can afford them a decent quality of life? If the excavator pushes them out of a job, why would they be upset about that according to the answer to the question above? Are they really upset about not being able to do back-breaking manual labor anymore?

      Back in the ancient village, they need to dig a deep trench around the village to keep dangerous predators out at night. It's hard work but necessary, so the villagers take time away from their families and farms to contribute. One villager arrives with a contraption that lets them dig the entire trench by themselves easily in half the time. Who is upset by this intrusive technology?

      In the advertising department of a golf franchise a writer spends their days wording text so that it is succinct, clear, and emotionally manipulative. Is it their aspiration as a writer to manipulate people into making purchases they don’t need, do they want the investors of their business to make as much money as possible, or do they want to receive compensation for their labor which can afford them a decent quality of life? If a text-generating algorithm designed to prey on peoples’ latent desires pushes them out of a job, why would they be upset about that according to the answer to the question above? Are they really upset about not being able to write ads which few will notice and some will be fooled by?

      In the ancient village, the evenings around the fire are the main event for socializing with neighbors. To pass the time some play instruments and sing, some swap rumors, and some come up with stories or re-tell established stories in their own way. One day a traveling group of bards come through and play music and tell stories far better than anything the villagers have seen before and are rewarded with hospitality by the village. After they leave, the music and stories of the village are much different and much more engaging with some of the old less interesting things being dropped or completely remade according to the new performance standards. Who is upset by the bards upending how they perform?

      Although you didn’t give a plastic art example, I did in my essay. My artist friend works for themselves but is obligated only to paint what sells, so they spend their time producing paintings that conform to the tastes set by the established decorating industry. Do they like to make paintings they find meaningless, do they want to make as much money as possible, or do they want to receive compensation for their labor which can afford them a decent quality of life? If AI can make 1000 “paintings” a second which will work just as well hanging in a layout designed to maximize likes on Instagram for an influencer’s business, why would they be upset according to the answer above? Are they really upset at not being able to make a living selling that which they consider crap?

      In the ancient village the people have been very successful and have a lot of time on their hands, so they decide to get into making monuments and sculptures to make the village look nicer. A team of people bang away at hunks of stone with hammers, knocking chunks of rock away at a time at great hardship to try to make something that looks like anything with most of their labor wasted when the material shatters. One day someone comes up with the chisel and is able to make a refined statue by themselves, eliminating the need for a whole team. Who is upset by the single artist making better work than the group of villagers could before?

      Commodifying the labor is not what justifies it as having worth. In my opinion it being quality is what justifies it having worth, but in a capitalist market having inherent worth is not as important as maximizing profit which can often be done by using the scale of production and cutting labor expenses to produce a heavily marketed and successful but inferior product. There is quality work outside of market influence being done, especially in the FOSS movement, but this is limited by the nature of capitalism because doing quality work apart from the business world is not something you can survive by doing. For every passion project, there are countless projects being done for business purposes incentivized by maximizing capitalist returns rather than ensuring the highest quality project. Because to survive people necessarily have to sell the majority of their time to Capitalists making compromises necessary to compete in a market, they have less time to work on what they want to work on and to ensure anything in general is as good as it can be. This is a problem with capitalism, not “AI.”

  • People are also waaay overestimating how close we are to the classical AI shown in media. They see ChatGPT and understand that it has problems, but also know we went from dumb phones to super fast smartphones really quickly, so apply the same logic to AI, when it's closer to the 'bird in the picture' xkcd comic. (Ironically that problem can now be solved by 'AI', but the point still stands). End users are bad at estimating the complexity of a given task and taking something like our current AI models to something like Cortana from Halo is a completely unknown amount of time away. Most likely decades if not centuries from now. The current approach to AI will most likely never work like that, because it has no true ability to learn and grow. At least not in the human sense.

  • AI can't replace a person yet*

    Stating that AI's limitations today mean those limitations will exist in the future, despite the accelerating growth of AI complexity & capabilities, is plain wrong.

    History is full of examples just like this, from computers, to the internet, to automation, etc. "Robots will never replace my job because my job is complicated": it's not a matter of if, but when. Would you rather be on the side of history that considered the impacts and tried to mitigate them, or the side that stuck their head in the sand?

    Also, on the point of invalid logic: "AI is not the problem, it's the abuse" assumes AI exists in a void, which it doesn't. The same logic: Biological weapons aren't bad, it's how they are used that's the problem. Misinformation isn't bad, it's how it's spread that's the problem. Guns aren't bad, it's the people shooting them that's the problem. And so on for everything else in the world that is a real problem because humans use and abuse it.

    Current gen AI is a problem because it's a catalyst for abuse. Not because of the nature of existing AI, you are right about that, but that's an argument detached from the reality of the situation.

    Note: General Super Intelligence is a problem purely by its nature. The same goes for partial intelligence, due to alignment issues which are currently paradoxical in nature. There are entire fields of study for this.


    I would suggest learning how current models function. They have a lot of limitations and they are nowhere near actual AI like movies and media suggest.

    Despite this you will find while learning this that the rate of advancement is such that the future dangers posed by AI are real, and must be considered. Ignorantly ignoring the writing on the wall doesn't do us any good.

    • I'll need to edit my post for clarity, because I'm not actually talking about AI, which has never existed and doesn't exist now. The term "artificial intelligence" is misleading, since no intelligence has ever been artificially created. What I'm specifically referring to is our current human-language-equipped database referencers, which we mistakenly know by the misleading name "AI." An actual AI is a completely different thing, and I share your concerns about it.

      While I agree with you in part, I disagree with your categorization of AI as similar to weapons or misinformation, which are inherently destructive. Can biological weapons ever be used in a productive way? Can misinformation promoting non-reality-based political positions, which benefit their propagators at the expense of the communities they scapegoat, ever be a good thing? "AI" (which is not AI) is more similar to a knife. Knives can be used to assault and harm, and they can also be used for a variety of constructive purposes including artistic pursuits. Whether they are used for mugging or whittling depends greatly on the system the person using the tool exists in, what that system has provided them, and what they have to do to survive in it. Although I think "AI" could still be abused if not for Capitalism, my argument is that the only reason it's widely considered problematic is its existence in the context of Capitalism; without Capitalism, "AI" would not be widely seen as a threat.

  • Thank you. I play D&D and our DM was super hyped about some holiday one-shot scenarios they got from ChatGPT. They went on and on about how they would never have to think about plot planning ever again. We didn't end up playing them, but I read the 3 different scenarios and I just wasn't impressed, and I didn't understand why. It was completely predictable in a weird way. After reading your post and thinking back on it, the scenarios felt plagiarized-predictable rather than trope-predictable. Tropes, while predictable, can be funny and charming, while a knock-off reproduction is only enjoyable for its ability to miss the mark.

    Anyway, thank you for helping me figure out how to put into words why I was not on the "this is amazing" train. I thought I just didn't get it.

    • I should have specified "AI" as it exists now, which is not true AI. At the point our tools achieve an actual form of intelligence and are capable of innovating or finding solutions to problems themselves without referencing existing instructions, I'd be saying a much different thing. It's not impossible that we create an intelligence, and if quantum computing ever happens in any real way it's also possible we could create a greater intelligence than our own which would be capable of creating an intelligence more advanced than itself ad infinitum. At that point the ramifications may even be beyond our ability to comprehend, and we might wish it was just converting the universe to paperclips.
