We're aware of the spam attack hitting mastodon.social right now, and our full moderation and DevOps teams are on the case, mitigating it any way we can (incl. switching to approval-mode registrations)
Mastodon (and the Fediverse) has grown, and spam and bots are now going to become a more common and challenging threat to instances
Something to consider with regard to spam: the Fediverse works very similarly to email. Spam has never been solved for email, even after decades of trying all sorts of ever more complicated mechanisms. I expect it to become an equally painful and challenging problem for Fedi.
I don't think it's comparable, since the Fediverse is human-moderated and email is not. Messages and accounts on the Fediverse are public, so moderators can remove spam before other people see it.
Accounts on existing instances can be removed and dedicated spam instances can be blocked. The only thing spam really does is increase the toll on moderators.
I think a big problem specific to mastodon.social is its sheer size, which makes it harder to moderate effectively. If the spam attack were directed at a smaller instance, it would also be easier for other instances to hide or block it until things got sorted out, so the surface area the spam could reach would be smaller.
Spam on Fedi is @-ing people and sending the message to their instances. Unless an automated system rejects these messages before delivery, the user will get a notification. Just like email.
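To make that concrete, here's a minimal sketch (not Mastodon's actual code) of an inbox-side check; the blocklist, activity shape, and helper names are simplified illustrations:

```python
# Sketch of an inbox-side check on incoming ActivityPub deliveries.
# Activity shape follows ActivityStreams conventions; the blocklist and
# the notification logic here are hypothetical, not real Mastodon APIs.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"spam-farm.example"}    # hypothetical instance-level blocklist

def mentions_to_notify(activity: dict, local_users: set) -> list:
    """Return local users who would get a notification for this delivery."""
    sender_domain = urlparse(activity.get("actor", "")).hostname or ""
    if sender_domain in BLOCKED_DOMAINS:
        return []                          # rejected before any notification exists

    note = activity.get("object", {})
    return [tag["href"] for tag in note.get("tag", [])
            if tag.get("type") == "Mention" and tag.get("href") in local_users]

# A Create/Note @-ing a local user: unless something rejects it up front,
# the mention becomes a notification, much like unsolicited email landing in an inbox.
activity = {
    "type": "Create",
    "actor": "https://fresh-domain.example/users/bot42",
    "object": {"type": "Note",
               "tag": [{"type": "Mention",
                        "href": "https://mastodon.social/users/alice"}]},
}
print(mentions_to_notify(activity, {"https://mastodon.social/users/alice"}))
```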
The big difference is that posts on the Fediverse are authenticated: you know exactly which account a message originates from. You don't have that for email; the sender of an email can be trivially forged.
Email does sender verification too (SPF/DKIM). And in the case of Fedi, it just means there must be an account at the sender's domain; it doesn't have to be a human. As long as spammers can keep registering new domains, they can keep sending spam...
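Rough sketch of the point, with made-up helper names: "verification" only proves the message came from the claimed domain, which any throwaway domain can satisfy just as well.

```python
# Sketch: what origin verification actually establishes on the Fediverse.
# The actor-document shape follows ActivityPub conventions; the helper name
# is illustrative, not a real library call.
from urllib.parse import urlparse

def key_matches_sender(actor_url: str, actor_doc: dict) -> bool:
    """True if the actor's published signing key lives on the actor's own domain.

    Roughly the Fedi analogue of SPF/DKIM passing on email: it proves the
    origin is authentic, not that anyone trustworthy (or human) is behind it.
    """
    key_id = actor_doc.get("publicKey", {}).get("id", "")
    return bool(key_id) and urlparse(key_id).hostname == urlparse(actor_url).hostname

# A freshly registered spam domain passes exactly as well as a reputable one:
actor = "https://freshly-registered.example/users/bot"
doc = {"publicKey": {"id": actor + "#main-key",
                     "publicKeyPem": "-----BEGIN PUBLIC KEY-----..."}}
print(key_matches_sender(actor, doc))   # True -- verified origin, still spam
```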
Probably a conspiracy theory, but given the pushback against this, I wouldn't be too surprised if the spam attack were being carried out by a disgruntled person who wants to force other instances to defederate from mastodon.social.
Absolutely. I'd be very happy to see this community grow bigger, but the chances of spam and bot attacks will increase the larger it gets.
The key to keeping malicious exploits from screwing everyone over is to keep the codebase open source (and resources spread out), so that any vulnerability can be identified quickly and patched.
To my knowledge that's how Linux has been able to stay relatively virus-free in the user sphere. Obviously there are shell scripts that can instantly crash or wipe your computer if you run them, and privilege-escalation bugs that have been found and fixed, but Linux has generally been much better in that regard than Windows, which still has some DOS-era quirks.
You don't need an exploit to send spam, however. Anyone can currently write a client that posts spam messages to an ActivityPub instance. It's a weakness of any open, federated service. The alternative would be a closed system where moderators would first have to approve any instance federation, but that'd be a very different and insular Fediverse...
I think ultimately we'll end up with very email-like mitigations: blacklists (Spamhaus-style), message-content heuristics, sender verification, etc.
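Something like the toy sketch below, where the domain list entries, phrases, thresholds, and function names are all invented for illustration:

```python
# Toy sketch of email-style layered filtering: a Spamhaus-style shared domain
# list plus crude content heuristics. All values here are made up.
from urllib.parse import urlparse

DOMAIN_BLOCKLIST = {"spam-farm.example"}           # shared "known bad" list
SPAMMY_PHRASES = ("free crypto", "click here", "limited offer")

def spam_score(sender: str, text: str, mention_count: int) -> int:
    score = 0
    if (urlparse(sender).hostname or "") in DOMAIN_BLOCKLIST:
        score += 10                                # hard signal: known bad instance
    score += sum(2 for phrase in SPAMMY_PHRASES if phrase in text.lower())
    if mention_count > 5:                          # mass @-ing is the main Fedi spam vector
        score += 3
    return score

def should_quarantine(sender: str, text: str, mention_count: int) -> bool:
    return spam_score(sender, text, mention_count) >= 5

print(should_quarantine("https://spam-farm.example/users/x",
                        "FREE CRYPTO, click here!!", mention_count=12))   # True
```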
Almost every social media site (FB, Reddit, YT, Twitter) and online newspaper comment section has its share of spam and harassment anyway; it's up to "an algorithm", moderators, or a verification system to remove as much as feasible.
I'm thinking more of servers unexpectedly going down or being purged, people hijacking or spoofing others' profiles, etc.; the way the Fediverse network is set up should make it a little more resilient to those overall.