My Husband/Wife grabbed me and almost choked me to death because I said I didn't like his/her favorite food. My friends are saying I'm an asshole. AITA?
Since I used to run GPT-2 bots on Reddit (openly declared as such, in a bot-friendly sub, using LLMs so stupid/deranged nobody would mistake them for real accounts), I've been thinking about this problem for a long time. It's honestly thrown me into a state of prolonged anxiety at times and motivated me to attempt to build tools for synthetic content detection, in a vain attempt to save the Internet. I've concluded that we're well past that point, and approaching the point at which we need to reconsider what, exactly, the internet really is: it should no longer be considered a source of any sort of authentic experience. It occupies a sort of truth-adjacent reality, much like historical fiction, except it references an imagined present rather than some time in the dim past. On those grounds it's almost worthwhile to keep engaging with your favorite platforms and websites as a kind of collaborative, technology-mediated creative writing exercise, or perhaps an ARG. It doesn't feel quite so pointless, viewed through that lens.
"Don't feed the trolls" and defaulting to skepticism were part of the old internet. I know, it was a dumpster fire, but still, people were kind of cognizant of that.
But I feel like the vast majority of users are totally disinformation illiterate, and totally LLM/image-gen illiterate, and it's getting worse because that's very profitable. Reddit has no problem with all these bots as long as advertisers keep paying and Spez sells stock at the right moments, since the bots make Reddit money through engagement.
"dead internet theory" is great until the users start acting like it's not a symptom of the platforms they're using, and just the reality for all of the internet.
Reddit, Instagram, and Facebook are all websites struggling to maintain the user counts that bring ad revenue and investment. Since investors and ad platforms can't distinguish between real and fake users, there is MASSIVE incentive to allow bots on your website.
Even more so for sites like Reddit and Twitter that shit all over their user base and had to quickly mask their haemorrhaging support before the shareholders could complain.
Even before current LLM-style AI systems became mainstream, a noticeable portion of the most popular submissions on that and similar/related subs seemed to be "fake" to me. So, I'm not so sure AI alone changed that dynamic that much. One thing that seems to have changed, though, is that people are now more willing to believe a fake post is fake. There was a time when someone would question the authenticity of a submission, and there was a greater than 85% chance someone would call them out by saying "nothing ever happens" or linking to a sub of similar name.
On the other hand, I feel like a lot of people genuinely believe they are much better at detecting AI-generated text than they actually are. I've lost track of how many times people have replied to me with things like "Nice Chat-GPT you got there" or something along those lines. I mean, the typos alone should be a clue.
Maybe we should just completely deanonymise the internet, and have everyone sign up with their passport for every service. What if that's the only way to rescue it now? What if all our options now are to either wade through garbage in private, or interact with real people but without any form of anonymity?
On occasion I access Reddit during my lunch break, and I recently did a bit of ego searching on my profile to gauge the sentiment of my posts.
What I've noticed since the great exodus is a noticeable drop in engagement.
At best I get 2-3 upvotes on posts that don't have that many comments.
Before the exodus I believe I usually got at least 5 or more in the active subs.