Researchers found CSAM within five minutes of searching Mastodon.
Mastodon, a decentralized alternative to Twitter, has a serious problem with child sexual abuse material, according to researchers from Stanford University. In just two days, the researchers found over 100 instances of known CSAM across more than 325,000 posts on Mastodon, along with hundreds of posts containing CSAM-related hashtags and links pointing to CSAM trading and the grooming of minors. One Mastodon server was even taken down for a period of time because CSAM had been posted to it. The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.
While the study itself is a good read, and I agree with its conclusion that Mastodon and decentralized social media in general need better moderation tools, it's hard not to read the Verge headline as misleading. One of the study authors gives more context here: https://hachyderm.io/@det/110769470058276368. Basically, most of the hits came from a large Japanese instance that no one federates with; the author even points out that the blunt instrument most Mastodon admins reach for is to blanket-defederate from instances hosted in Japan, whose laws around CSAM are laxer than the US's. But the headline seems to imply that there's a giant seedy underbelly to places like mastodon.social[1], rife with abuse material. I suppose that's a marketing problem for federated software in general.
There is a seedy underbelly of mainstream Mastodon instances, but it’s mostly people telling you how you’re supposed to use Mastodon if you previously used Twitter.
Yeah, I recall that the Japanese instances have a big problem with that shit. As for the rest of us, Facebook actually open-sourced some efficient perceptual-hashing algorithms (PDQ for images, TMK+PDQF for video) specifically for dealing with CSAM; Fediverse platforms could implement these, which would just leave the issue of getting a database of known-image hashes to check against. All the big platforms could probably chip in to get access to one of those private databases and then release a public matching service for the ecosystem to use.
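To make that concrete, here's a rough Python sketch of what the upload check might look like on an instance. It uses the generic `imagehash` library's perceptual hash purely as a stand-in for PDQ, and the hash database is just a placeholder, since the real lists (e.g. NCMEC's) are access-controlled.

    # Rough sketch of hash-matching an upload against a list of known-bad hashes.
    # imagehash's perceptual hash stands in here for Facebook's open-sourced PDQ;
    # the hash list is a placeholder, since real databases are access-controlled.
    import io

    import imagehash          # pip install ImageHash
    from PIL import Image     # pip install Pillow

    KNOWN_BAD_HASHES: list[imagehash.ImageHash] = []  # loaded from a vetted source
    MATCH_THRESHOLD = 8       # example Hamming-distance cutoff; tune for the hasher

    def is_known_match(image_bytes: bytes) -> bool:
        """Return True if the upload is perceptually close to a known-bad hash."""
        candidate = imagehash.phash(Image.open(io.BytesIO(image_bytes)))
        # Subtracting two ImageHash values gives their Hamming distance.
        return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_BAD_HASHES)

An instance could call something like is_known_match() in its media-upload pipeline and quarantine the post plus file a report instead of publishing it.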
I'm not fully sure about the logic here, or the conclusion it seems to hint at. The internet itself is a network with major CSAM problems (so maybe we shouldn't use it either?).
One of the problems with the fediverse is that each server keeps its own copy of federated content. It's a real worry that bad actors could push content to federated servers in order to get them taken down over the material they are now storing.
So instances that are actually supporting CSAM can and should be dealt with by law enforcement. That much is simple (and I'm surprised it hasn't been done with certain ... instances, to be honest). But I think the issues that seem less clearly solved already have known, working solutions that apply to other parts of the web as well. No content moderation is perfect, but in general, if admins are acting in good faith, I don't think there should be too much of a problem:
For when federation inadvertently spreads some of the material into other instances' databases: isn't this the same situation as when ISPs used to cache web traffic to save on bandwidth costs? In that situation, too, browsed web pages would end up in the ISP's cache, which could then harbour whatever material the user was looking at. As I recall, the ISP would just ban CSAM and other illegal material in their terms of service and remove anyone reported as violating the rule, and that sufficed.
As for "bad" instances/users: It's impossible to block all instances and all users that might disseminate this material as you'd have to go to a "block everything, then allow known entities" rule which would break the Fediverse model. Again, users or site admins found to be acting in bad faith should be blocked and reported (either automatically or manually). Some may slip through the net, but as long as admins are seen to be doing the best they can, that should be enough.
There seem to be concerns about "surveillance" of material on Mastodon, which strikes me as a bit odd. Mastodon isn't a private platform. People who want private messaging should use an E2EE messaging app like Signal, not a social networking platform like Mastodon (or Twitter, Threads, etc.). Mastodon data is already public and is almost certainly already being surveilled, and it will be regardless of what anyone involved with the network wants, because there's no access control on it anyway. Having Mastodon itself contain code to keep the network clean, even if it only applies to part of the network, lets the admins running that code take some of the responsibility on themselves, reducing the temptation for third parties to do it for them.
> The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.
I agree, but who's going to pay for it? Those aren't just freely available additions to any application that you only need to toggle on.
I for one am all for instances being forcibly taken down by police if they can't moderate CSAM appropriately.
Moderation is a very real challenge. The internet at large aimed to solve it by centralizing everything onto a few mega-corps with AI moderation. The fediverse aims to solve it by keeping instances small and holding both mods and users accountable.
Is there any way Mastodon stands out from other self-hosted websites? Would the CSAM be harder to distribute or easier to prosecute if they ran, say, a self-hosted bulletin board for it instead?
Hi, since Mastodon is no longer acceptable due to the 0.04 percent of instances found to have abusive material, would someone please suggest the alternative social network with 0 percent of these incidents? Companies like Facebook and Twitter are driven by shareholders and greed; Mastodon is a community effort. You'll certainly find bad actors there, but I feel less dirty contributing to a community project than helping billionaires like Zuck and Elon line their pockets by harvesting my data.
And the beauty of Mastodon is that you can block an entire instance, as can your admin, when something awful is posted. The community even has a hashtag (#Fediblock) it uses as an alert for this kind of thing.
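For the admin-level version of that, here's a small hedged sketch of what a server-wide block could look like through Mastodon's admin API. The endpoint reflects recent Mastodon versions (check your server's docs), and the instance URL, token, and target domain are placeholders.

    # Hedged sketch: suspending an entire remote instance via Mastodon's admin
    # API. Endpoint per recent Mastodon versions; the instance URL, access
    # token, and target domain below are placeholders.
    import requests   # pip install requests

    INSTANCE = "https://your.instance"    # your own server
    ADMIN_TOKEN = "REPLACE_ME"            # token with admin write permissions

    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
        data={"domain": "bad.example", "severity": "suspend"},
    )
    resp.raise_for_status()
    print(resp.json())   # the created domain block record

Regular users don't need the API at all; a whole domain can be blocked from the web UI via the menu on any account from that domain.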
Not surprised at all. This is a growing pain here too, because this was previously handled invisibly by the platforms, and federation makes it fall to individual sysadmins and whoever they have on staff. The tools for this stuff are, in general, not here yet, and as people have noted, those tools can conflict with some of the principles of federation in ways that can't be totally handwaved.
I don't trust Stanford not to be working on behalf of the CIA or other three-letter agencies. They kind of turn a blind eye to CSA in churches, but federated media? This sounds like a smear job.
I browsed through an anime instance while trying to convince myself to like Mastodon, and unfortunately I believe I found some of this myself. I wasn't going to confirm it was real; I just reported it and closed the tab. But considering I've never seen such content on other websites and this instance was rife with it, I don't find this article hard to believe at all.
I think some of the problematic instances have been defederated; IIRC there's a large Japanese instance that was defederated a long time ago due to child abuse content. But still, since I've been seeing increases in hate speech and dog-whistling misogyny and homophobia on some instances, I wouldn't be surprised if CSAM has been trading under our noses.
The main issue is that, with so many users nowadays and such small moderation teams, especially on the larger instances, it's hard to moderate and tackle CSAM effectively. I really wish the larger instances would limit user registrations or start splitting off into smaller, more manageable ones.
Also, since the trading happens under certain hashtags, blocking those hashtags might not be a bad idea.
Does this come as a surprise to anyone? This is the case on large social networks too, and decentralised networks are by definition free and open for anyone to moderate how they please. Unfortunately, that includes pedos.