LLMs are a net negative for society as a whole. The underlying technology is fine, but it's far too easy for corporations to manipulate the populace with them, and people are just generally very vulnerable to them. Beyond the extremely common tendency to misunderstand and anthropomorphize them and think they have some real insight, they also delude even otherwise reasonable people into thinking they're benefiting from them when they really... aren't. Instead, people get hooked on the feelings they give them and keep wanting their next hit (tokens).
can we agree that 90% of the problem with LLMs is capitalism and not the actual technology?
after all, the genie is out of the bottle. you can't destroy them, there are open source models. even if you ban them, you'll still have people running them locally.
oh no what will we do, the open source leaded gasoline was released. the genie is out of the bottle, even if you ban it you'll still have people using it locally
can we agree that 90% of the problem with cigarettes is capitalism and not the actual smoking?
after all, the genie is out of the bottle. you can’t destroy them, there are tobacco plants grown at home. even if you ban them, you’ll still have people hand-rolling cigarettes.
it’s fucking weird how I only hear about open source LLMs when someone tries to make this exact point. I’d say it’s because the open source LLMs fucking suck, but that’d imply that the commercial ones don’t. none of this horseshit has a use case.
Frankly, yes. In a better world, art would not be commodified, the economic barriers our capitalist system puts on commissioning work from skilled human artists would not exist, and generative AI recombining existing art would likely be far less problematic and harmful to artists and audiences alike.
But also, that is not the world we live in, so fuck GenAI and its users and promoters lmao stay mad.
i for sure agree that LLMs can be a huge trouble spot for mentally vulnerable people, and something needs to be done about it
my point was more about him using it to make his worst-of-both-worlds arguments, where he's simultaneously declaring that 'alignment is FALSIFIED!' while doing heavy anthropomorphization to confirm his priors (it'd be harder to pull that off with something that leans more towards 'maybe' on the question of whether it should be anthro'd, like claude, since that has a much more robust system), and doing it all off the back of someone's death
@Anomalocaris @visaVisa The attention spent on people who think LLMs are going to evolve into The Machine God will only make good regulation & norms harder to achieve