So without capitalism, AI would not be obfuscating the sources of ideas, mischaracterizing the content of works, polluting communication channels with vapid slop, enticing emotionally vulnerable people into self-destructive behavior, accelerating disinformation, enabling scams, profiling thought-crime, producing nonconsensual pornography…?
There’s no denying that capitalism is steering AI (and everything) in a dark direction, but AI is also just hazardous by its very nature. Moving beyond capitalism won’t automatically make humans more careful than we’ve ever been.
Another thing about AI slop: it's usually motivated by some sort of get-rich-quick thinking or plain old labor replacement. Both motivations disappear without capitalism.
You can't dismiss the legitimate harm enabled by these things by pointing to another thing that enables harm...
I think you could make reasonable points here, but you're not engaging in discussion if you just dismiss them. These are legitimately serious issues, and they're worth taking seriously, especially if you actually believe the things you say and want other people to understand your point of view. I'm not going to lie, it's gross to basically just say "well, people get sexually abused anyway, so it's not a concern."
Capitalism enables a lot of terrible stuff, but the world doesn't immediately become sunshine and rainbows if it's gone. There's still a lot of work to be done after the fact.
I think the point is that there's nothing hazardous inherent in its nature, and pointing to the problematic uses under capitalism isn't any more a description of 'its nature' than pointing to an ass is a description of a chair's nature.
AI is a tool, just like any other, and the harm caused by that tool is largely defined by how it's used and by whom.
There's no doubt that LLMs and other generative models are disruptive, but suggesting that they are inherently harmful assumes that the things and systems they are disrupting aren't themselves harmful.
Most of what you're pointing to as harm caused by AI is far more attributable to the systems it exists in (including and especially capitalism) than to the models themselves. The only inherent issue I can see with AI is its energy demand - but if we're looking at energy consumption broadly, then we'd be forced to look at the energy consumption of capitalism and consumerism under capitalism, too.
I imagine the sentiment here would be wildly different if we were scrutinizing the energy demand of gaming on a modern GPU.
Sure, but Abigail wasn't really advocating against transhumanism or technology generally... The critique of that video is that technology isn't really the focus of the disagreement between transhumanism and anti-transhumanism, but rather the 'dressing' around a deeper phenomenological belief (for transhumanists, it's the belief that technology will save us from the inequity and suffering created under capitalism; for anti-transhumanists, it's the belief that technology and progress will subvert the 'natural' order of things, and that we must reject them in favor of tradition). Both arguments distract from what is arguably the more pressing issue - namely, that technology does nothing to correct the contradictions of capital and may even work to accelerate its collapse.
I would really enjoy a discussion about how AI might shape our experience as humans - and how that might be good or bad, depending - but instead we're stuck in this other conversation about how AI might save us from the toils of labor (despite centuries of technological progress never having brought us any closer to liberation) vs. how it might be a Trojan horse requiring a return to a pre-AI existence.
It might be more productive for you to argue the case for why the effects or harm you're pointing to are somehow 'inherent' to AI itself and not symptoms of capitalism exacerbated by AI.