To be fair, though, this experiment was stupid as all fuck. It was run on /r/changemyview to see if users would recognize that the comments were created by bots, and the study's authors conclude that they didn't. [EDIT: To clarify, the study was testing whether the bots could persuade OPs, but it did this in a subreddit where you aren't allowed to call out AI. Once an LLM bot gets called out as such, its persuasiveness inherently falls off a cliff.]
Except, you know, Rule 3 of commenting in that subreddit is: "Refrain from accusing OP or anyone else of being unwilling to change their view, of using ChatGPT or other AI to generate text, [emphasis not even mine] or of arguing in bad faith."
It's like creating a poll to find out if women in Afghanistan are okay with having their rights taken away but making sure participants have to fill it out under the supervision of Hibatullah Akhundzada. "Obviously these are all brainwashed sheep who love the regime", happily concludes the dumbest pollster in history.
I don't think so. Yeah, the researchers broke the subreddit's rules, but it's not like the companies using AI for advertising, promotion, propaganda, and misinformation are going to adhere to those rules either.
The mods and community shouldn't assume that just because the rules say no AI, people won't use it for nefarious purposes. And while this study doesn't really tell us anything we didn't already know or assume, it does highlight how vigilant and cautious we should be about what we see on the Internet.
> It’s like creating a poll to find out if women in Afghanistan are okay with having their rights taken away but making sure participants have to fill it out under the supervision of Hibatullah Akhundzada. “Obviously these are all brainwashed sheep who love the regime”, happily concludes the dumbest pollster in history.
I don't particularly like this analogy, because /r/changemyview isn't operating in a country where an occupying army was bombing weddings a few years earlier.
But this goes back to the problem at hand. People have their priors (my bots are so sick nasty that nobody can detect them / my liberal government was so woke and cool that nobody could possibly fail to love it) and then build their biases up around them like armor (any coordinated effort to expose my bots is cheating! / anyone who prefers the new government must be brainwashed!).
And the Bayesian reasoning model fixates on the notion that there is only ever a discrete, predefined set of choices and uniform biases that the participant must navigate within. No real room for nuance or relativism.
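To make that concrete: in textbook discrete Bayesian updating, the posterior can only ever redistribute probability among the hypotheses you enumerated up front. A minimal sketch in Python (the hypotheses and numbers are made up for illustration, not taken from the study):

```python
def bayes_update(prior: dict[str, float], likelihood: dict[str, float]) -> dict[str, float]:
    """Posterior P(h|e) ~ P(h) * P(e|h), normalized over a fixed hypothesis set."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Predefined choices: the commenter is either "human" or "bot". Nothing else exists.
prior = {"human": 0.95, "bot": 0.05}

# P(observed comment style | hypothesis) -- invented numbers for the sketch.
likelihood = {"human": 0.4, "bot": 0.6}

print(bayes_update(prior, likelihood))
# {'human': 0.926..., 'bot': 0.073...}
# A third possibility ("a human lightly editing bot output", say) was never in
# the set, so no amount of evidence can ever give it probability mass.
```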
.world is known (largely due to the Luigi Mangione stuff) to have moderation that's a bit more heavy-handed and closer to the sort of thing you see on the corporate Internet.
No real hate for them, and they've indicated in the past that some of their actions are just to comply with their local laws. But if you're looking for an older-internet experience, you'll wanna move to a different instance.
Belief doesn't even have to factor into it; it's a plain-as-day truth. The sooner we collectively accept this fact, the sooner we change this shit for the better. Get on board, citizen. It's better over here.
I worry that it's only better here right now because we're small and not a target. The worst we seem to get are the occasional spam bots. How are we realistically going to identify LLMs that have been trained on Reddit data?
Seems dangerous; it's a breach of the ToS, I assume, so they're opening themselves up to possible liability if Reddit got pissy. I'm actually surprised this kind of research gets IRB and other approval, given that you're violating the ToS unless you've been given a variance from it. (I used to conduct research on social networks and had to get preapproved accounts for the purpose, and the data I was given was carefully limited.)
So they banned the people who successfully registered a bunch of AI bots and had them fly under the mods' radar. I'm sure they're devastated and will never be able to get on the site again...