‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment.
There's no guarantee anyone on there (or here) is a real person or genuine. I'll bet this experiment has been conducted a dozen times or more but without the reveal at the end.
If anyone wants to know what subreddit, it's r/changemyview. I remember seeing a ton of similar posts about controversial opinions, and even now people are questioning posts on Am I Overreacting and AITAH a lot. AI posts in those kinds of subs seem pretty frequent. I'm not surprised to see it was part of a fucking experiment.
The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely than a real person to persuade people to change their minds. AI has become an overpowered tool in the hands of propagandists.
The reason this is "The Worst Internet-Research Ethics Violation" is that it exposes what Cambridge Analytica's successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts trying to pass themselves off as normal users, and not an f-ing peep. Why do people think they, the ones who enabled Cambridge Analytica, were trying this shit in the first place? The only difference now is that everyone doing it knows to do it as an "unaffiliated" anonymous third party.
Holy shit... This kind of shit is what ultimately broke Tim (very closely related to Ted) Kaczynski... He was part of MKULTRA research while a student at Harvard, but instead of drugging him, they had a debater, a prosecutor pretending to be a student, who would just argue against any point he made to see when he would break...
I’m sure there are individuals doing worse one off shit, or people targeting individuals.
I’m sure Facebook has run multiple algorithm experiments that are worse.
I’m sure YouTube has caused worse real-world outcomes with the rabbit holes their algorithm used to promote. (And they have never found a way to completely fix the rabbit-hole problem without destroying the usefulness of the algorithm entirely.)
The actions described in this article are upsetting and disappointing, but this has been going on for a long time. All in the name of making money.
This is probably the most ethical you'll ever see it. There are definitely organizations committing far worse experiments.
Over the years I've noticed replies that are far too on the nose, probing just the right pressure points, as if they had dropped exactly the right breadcrumbs for me to respond to. I've learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it's a literal psy-op bot. Even in the first case it's not worth engaging with someone more invested than I am myself.
AI is a fucking curse upon humanity. The tiny morsels of good it can do are FAR outweighed by the destruction it causes. Fuck anyone involved with perpetuating this nightmare.
This is a really interesting paragraph to me because I definitely think these results shouldn't be published or we'll only get more of these "whoopsie" experiments.
At the same time, though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later, before they become even more persuasive and natural-sounding. The article mentions that in studies humans already have trouble telling the difference between AI-written sentences and human ones.
When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters.
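For readers curious what that two-step pipeline might look like, here is a minimal sketch of the approach the article describes: one model infers a profile from post history, a second tailors the argument to it. The `call_llm` helper, the function names, and the prompts are all hypothetical; the researchers' actual code and prompts have not been published.

```python
# Hypothetical sketch of the two-step personalization pipeline described above.
# Nothing here is taken from the researchers' code; call_llm is a placeholder
# for whatever model API they actually used.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError("Wire this up to a real LLM API.")

def infer_profile(post_history: list[str]) -> str:
    """Step 1: ask one model to guess gender, age range, and political
    leaning from a redditor's recent posts."""
    prompt = (
        "Based on the following Reddit posts, estimate the author's gender, "
        "age range, and political leaning. Answer in one short sentence.\n\n"
        + "\n---\n".join(post_history)
    )
    return call_llm(prompt)

def personalized_reply(original_post: str, profile: str) -> str:
    """Step 2: ask a second model to write a counterargument tailored
    to the inferred profile."""
    prompt = (
        f"The author appears to be: {profile}\n"
        "Write a persuasive counterargument to the following view, phrased "
        f"in a way likely to resonate with that kind of reader:\n\n{original_post}"
    )
    return call_llm(prompt)
```

The point of the sketch is only how little machinery is involved: two prompts and some scraped post history are enough to produce the kind of targeted replies the study measured.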
I dabble in conversational AI for work, and am currently studying its capabilities for thankfully (imo at least) positive and beneficial interactions with a customer base.
I've been telling friends and family recently that, for a fairly small investment of money and time, I am fairly certain a highly motivated individual could influence at least a local election. Given that, I imagine it would be very easy for nations or political parties to manipulate people on a much larger scale. IMO nearly everything on the Internet should be suspect at this point, and Reddit is atop that list.
When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.
Not since the APIcalypse at least.
Aside from that, this is just reheated news (for clicks, I assume) from a week or two ago.
[...] I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.
The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.
This seems to be the kind of situation where, if the researchers truly believe their study is necessary, they have to:
accept that negative publicity will result
accept that people may stop cooperating with them on this work
accept that their reputation will suffer as a result
ensure that they won't do anything illegal
After that, if they still feel their study is necessary, maybe they should run it and publish the results.
If some eager redditors then start sending death threats, that's unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.
As for the question of whether a tailor-made response considering someone's background can sway opinions better - that's been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways of how they might perceive the proposal, and advance your explanation in a way that relates better with their viewpoint.)
AI bots which take into consideration a person's background will - if implemented right - indeed be more powerful at swaying opinions.
As to whether secrecy was really needed - the article points to other studies which apparently managed to prove the persuasive capability of AI bots without deception and secrecy. So maybe it wasn't needed after all.
Another isolated case for the endlessly growing list of positive impacts of the "GenAI with no accountability" trend. A big shout-out to the people promoting and fueling it; excited to see what pit you lead us into next.
This experiment is also nearly worthless because, as the researchers themselves demonstrated, there's no guarantee the accounts you interact with on Reddit are actual humans. Upvotes are even easier for machines to game, and can be bought for cheap.
You know what Pac stands for? PAC. Program and Control. He’s Program and Control Man. The whole thing’s a metaphor. All he can do is consume. He’s pursued by demons that are probably just in his own head. And even if he does manage to escape by slipping out one side of the maze, what happens? He comes right back in the other side. People think it’s a happy game. It’s not a happy game. It’s a fucking nightmare world. And the worst thing is? It’s real and we live in it.
I think it's a straw-man issue, hyped beyond necessity to avoid the real problem. Moderation has always been hard, and with AI it's only getting worse. Avoiding the research because it's embarrassing just prolongs and deepens the problem.
I do like the short, punchy ones after reviewing many bot comments over the years, but who's to say using LLMs to tidy up your rantings is a "bad thing"?
This just shows how gullible and stupid the average Reddit user is. There's a reason there are so many memes mocking them and calling them beta soyjaks.