Since Meta announced they would stop moderating posts, much of the mainstream discussion around social media has centered on whether a platform is responsible for the content posted on its service. I think that's a fair discussion, though I favor less moderation in almost every instance.
But as I think about it, the problem is not moderation at all: we had very little moderation in the early days of the internet and social media, and yet people didn't believe the nonsense they saw online, unlike nowadays, where even official news outlets have reported on outright bullshit made up on social media. To me the problem is the goddamn algorithm that pushes people into bubbles that reinforce their views, correct or incorrect; and I think anyone with two brain cells and an iota of understanding of how engagement algorithms work can see this. So why is the discussion about moderation and not about banning algorithms?
Algorithms can be useful, and at a certain scale they're necessary. Just look at Lemmy: even as small as it is, there's already some utility in algorithms like "Active", "Hot" and "Scaled", and as the number of communities and instances grows they'll become even more useful. The trouble starts when there are perverse incentives to drive users toward one type of content or another, and the absence of those incentives is, I think, one of the fediverse's key strengths.
But correct me if I'm wrong (I'm not a programmer): Lemmy's algorithm is basically just sorting; it doesn't choose between two pieces of media to decide which one to show me, but rather how to order them. Facebook et al. will simply not show content that I won't engage with or that would make me spend less time on the platform.
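To illustrate the distinction with a hypothetical sketch (all the names here are invented; this isn't Lemmy's or Facebook's actual code): a sort-only feed reorders everything it's given, while an engagement-driven feed also decides what you see at all.

```python
from datetime import datetime, timezone

def sorted_feed(posts):
    # Lemmy-style: every post is shown; only the order changes.
    return sorted(posts, key=lambda p: p["published"], reverse=True)

def engagement_feed(posts, predicted_engagement):
    # Walled-garden style: posts predicted to keep you on the site
    # are kept and ranked; everything else is silently dropped.
    kept = [p for p in posts if predicted_engagement(p) > 0.5]
    return sorted(kept, key=predicted_engagement, reverse=True)

# With the sort-only feed you still see both posts; with the
# engagement feed you may never know post 2 existed.
posts = [
    {"id": 1, "published": datetime(2025, 1, 12, tzinfo=timezone.utc)},
    {"id": 2, "published": datetime(2025, 1, 10, tzinfo=timezone.utc)},
]
print(sorted_feed(posts))
print(engagement_feed(posts, lambda p: 0.9 if p["id"] == 1 else 0.1))
```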
I agree that they're useful, but at a certain point we as a society need to weigh the usefulness of certain technologies against their potential for harm. If the potential for harm is greater than the benefit, then maybe we should curb that potential, or remove the technology altogether.
So maybe we could refine the argument to: we need to limit what signals algorithms can use to push content? Or maybe all social media users should have access to an algorithm-free feed, with the algorithm-driven feed hidden by default and customizable by users?
Algorithm is just a fancy word for rules to sort by. "New" is an algorithm that says "sort by the timestamp of the submissions". That one is pretty innocuous, I think. Likewise "Active", which just says "sort by the last time someone commented" (or whatever).

"Hot" and "Scaled", though, involve business logic: rules that don't have one technically correct solution, but instead encode decisions and preferences made by people to accomplish a certain aim. Even so, in Lemmy's case I don't think either the "Hot" or "Scaled" algorithms should be too controversial, and if they are, you can review the source code, make comments or a PR for changes, or stand up your own Lemmy instance that does it the way you want.

For walled-garden SM sites like TikTok, Facebook and Twitter/X, though, we don't know what the logic behind the algorithm says. We can speculate that it's optimized to keep people using the service for longer, or to encourage them to come back more frequently, but for all intents and purposes those algorithms are black boxes, and we have to assume that they work only for the benefit of the companies, not the users.
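To make that concrete, here's roughly what those kinds of rules might look like written out. This is a simplified sketch with invented field names, not Lemmy's actual implementation (which lives in its Rust codebase and differs in detail):

```python
import math
from datetime import datetime, timezone

def new_rank(post):
    # "New": mechanical, one technically correct answer --
    # sort by submission time.
    return post["published"]

def active_rank(post):
    # "Active": also mechanical -- sort by the latest comment.
    return post["last_comment_at"]

def hot_rank(post, gravity=1.8):
    # "Hot": business logic. Someone chose the log dampening and the
    # gravity exponent; different choices produce a different front page.
    age_hours = (datetime.now(timezone.utc) - post["published"]).total_seconds() / 3600
    return math.log(max(post["score"], 1) + 2) / (age_hours + 2) ** gravity
```

The point is that nothing makes gravity=1.8 "correct": a bigger value buries older posts faster, a smaller one keeps them around longer, and that choice is a human preference baked into the code.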
I think you're making a lot of assumptions here, many of which I take issue with.
we had very little moderation in the early days of the internet and social media
It differed from site to site, but in my experience of the Internet in the '90s and '00s, a lot of forums were heavily moderated, and even Facebook was kept pretty clean when I got on it in ~2006/2007.
and yet people didn’t believe the nonsense they saw online,
I fully dispute this. People have always believed hearsay; they're just exposed to more of it through the web instead of hearing it from their family, friends, and coworkers.
unlike nowadays, where even official news outlets have reported on outright bullshit made up on social media.
We live in a world of 24-hour news cycles and sensationalism, which has escalated over the past few decades. This often encourages ratings over quality.
Mainstream media has always had problems with fact-checking. I'm not trying to attack the news media or anything; I think most reporters do their best and strive to be factual, but they sometimes make mistakes. I can't remember the name of it, but there's some sort of phenomenon where if you watch a news broadcast and they talk about a subject you have expertise in, you're likely to find inaccuracies in it and be more skeptical of the rest of the broadcast.
To me the problem is the goddamn algorithm that pushes people into bubbles that reinforce their views, correct or incorrect
Polarization is not limited to social media. The news media has become more and more tribal over time, and companies that sell products and services have become more likely to present a political worldview.
Overall, I think you're ignoring a lot of other things that have changed over the years. It's not like the only thing that has changed in the world is the algorithmic feed. We are perpetually online now, and that's where most people get their news, so it's only natural that it would also be their source of disinformation. I think algorithmic feeds that push people into their bubbles are a response to this polarization, not the source of it.
How would you identify the kinds of algorithms that should be banned, as opposed to all the other kinds of algorithms? I have a feeling that would be tricky.
The easy answer for me would be to ban algorithms that have the specific intent of maximizing user time spent on the app. I know that's very hard to define legally. Maybe, like I suggested below, we could restrict what kinds of signals algorithms are allowed to use to suggest and push content?
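To picture what restricting signals could look like in practice, here's a purely hypothetical sketch; the field names and the allow-list are invented, not any real regulation or platform API:

```python
# Hypothetical signal allow-list; all names are invented for illustration.
ALLOWED_SIGNALS = {"age_hours", "score", "followed_topic"}
BANNED_SIGNALS = {"watch_time", "ad_click_rate", "session_length"}

def compliant_rank(post: dict, weights: dict) -> float:
    # The ranker may only combine allow-listed fields; per-user
    # engagement telemetry is off limits under this scheme.
    return sum(w * post[signal]
               for signal, w in weights.items()
               if signal in ALLOWED_SIGNALS)
```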
To do it based on intent would create some difficult grey areas: for example, video game creators would have to make their games as compelling as possible without crossing a more or less vague threshold and breaking the law. The second approach, regulating the ways different types of data can be used, sounds more promising.
ban algorithms that have the specific intent of maximizing user time spent on the app.
That just means making the app shitty. You can optimize for engagement without trying to make users angry; it's just that making users angry at each other is an extremely effective way to boost engagement.
Nah. It's just that people, including me, don't want to think too much about the information presented to us. Most just read the headline and jump to a conclusion. It's lazy thinking and emotional reaction that make this whole situation worse.
Algorithms (recommendation engines) are just a catalyst.
Those mega corporations have intentionally misused the term "algorithm", which implies an unbiased method of ranking or sorting. What they're actually using is more like a human-curated list of items to promote that supports their self-serving goals.
It would be really nice if, at the very least, we could get some insight into how these algorithms are tuned. It seems obvious that Facebook and X want users to get pissed off. That does not seem ethical at all, and it should at minimum be examined.
While transparency would be helpful for discussion, I don't think it would do much to stop propaganda, misinformation, and outright bullshit from being disseminated to the masses, because people just don't care. Even if the algorithm were transparently built to push false narratives, people would just shrug and keep using it. The average person doesn't care about the who, what, or why as long as they're entertained. But yes, transparency would be a good first step.