The researchers' bots generated identities as a sexual assault survivor, a trauma counselor, and a Black man opposed to Black Lives Matter.
A team of researchers who say they are from the University of Zurich ran an “unauthorized,” large-scale experiment in which they secretly deployed AI-powered bots into a popular debate subreddit called r/changemyview in an attempt to research whether AI could be used to change people’s minds about contentious topics.
The bots made more than 1,700 comments over the course of several months and at times pretended to be a “rape victim,” a “Black man” who was opposed to the Black Lives Matter movement, and someone who “work[s] at a domestic violence shelter”; one bot argued that specific types of criminals should not be rehabilitated.
The experiment was revealed over the weekend in a post by moderators of the r/changemyview subreddit, which has more than 3.8 million subscribers. The moderators said they were unaware of the experiment while it was going on and only found out when the researchers disclosed it after the experiment had concluded. In the post, the moderators told users they “have a right to know about this experiment,” and that posters in the subreddit had been subject to “psychological manipulation” by the bots.
Gotta love the hollow morality in telling users they’ve been psychologically manipulated this time, yet doing nothing about the tens of thousands of bots doing the exact same thing 24/7 the rest of the time.
r/changemyview is one of those "fascist gateway" subs, or at least one of the subs that I suspect of being that. The gateway works by introducing "controversial" topics which are really fascist, but they get upvoted because "look at this idiot!". Slowly, though, it moves the Overton window and gives the people who actually do believe in inequality a space to grow in. Opinions that are racist, anti-feminist, anti-trans, ultra-nationalist, authoritarian, or generally against equality and justice.
Reddit was slowly boiled like a frog over the last decade.
And you can be absolutely sure it's not just researchers doing this for science; there are plenty of special interests doing the same thing. It really started with climate change denial.
Feels like AI would really excel at this. It's personalized argumentation that can basically auto-complete to the most statistically likely (i.e. popular) version of an argument. CMV posts largely aren't unique; there are a lot of prior threads to draw from which got deltas.
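The "draw from prior delta threads" idea in the comment above is essentially nearest-neighbor retrieval over an archive of past posts. A minimal sketch, using only the standard library and bag-of-words cosine similarity; the `delta_archive` contents and all function names are hypothetical, for illustration only:

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words: token counts for a lowercased, whitespace-split text."""
    return Counter(text.lower().split())

def cosine_sim(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical archive of prior CMV posts that awarded a delta.
delta_archive = [
    "CMV: remote work is less productive than office work",
    "CMV: college degrees are no longer worth the cost",
    "CMV: working from home hurts team collaboration",
]

def nearest_prior_thread(new_post, archive):
    """Return the archived post most similar to the new one."""
    return max(archive, key=lambda p: cosine_sim(bow(new_post), bow(p)))

print(nearest_prior_thread("CMV: working remotely makes teams less effective",
                           delta_archive))
```

A real system would use embeddings rather than raw token overlap, but the shape is the same: find the closest previously-successful argument and adapt it to the new post.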
I remember when my front page was nothing but r/changemyview for like a week, and I unsubscribed from the subreddit completely because some of the questions, and the sheer number of hits, felt like something fucky was going on. Guess I was right.
The same happened to the relationshipsadvice and aita subreddits: the number of posts suddenly skyrocketed, with incredibly long, overly detailed stories that smacked of LLM-generated content.
To be fair, I can see how keeping it "unauthorized" was necessary for collecting genuine data that isn't poisoned by people intentionally trying to skew the sample.
It'd be pretty trivial to do the same here: 1,700 or so comments over "several months" is fewer than 25 a day. No need even for a posting bot; have the LLM ingest the feed and spit out the replies, and have an intern make accounts and post them.
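The throughput arithmetic above, plus the "LLM drafts, human posts" split, can be sketched in a few lines. Everything here is hypothetical: `generate_reply` stands in for a model call, the 90-day window is an assumed reading of "several months," and the feed is fake:

```python
COMMENTS = 1700
DAYS = 90  # assumption: "several months" ≈ 3 months
print(COMMENTS / DAYS)  # under 19 drafts a day, below the ~25/day estimate

def generate_reply(post_text):
    """Stand-in for an LLM call; a real pipeline would prompt a model here."""
    return f"[draft reply to: {post_text[:40]}]"

def drafts_for_day(feed, per_day_cap=25):
    """Take the day's newest posts and emit at most per_day_cap draft
    replies for a human to review and post by hand."""
    return [generate_reply(p) for p in feed[:per_day_cap]]

feed = [f"CMV post #{i}" for i in range(40)]  # hypothetical day's feed
print(len(drafts_for_day(feed)))
```

The point of the cap is the commenter's: at this volume, no automated posting is needed at all, so platform-side bot detection never sees a machine in the loop.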