The original "Eternal September" (on Usenet) wasn't an influx of abusers. It was an influx of new users who didn't know how to do things properly yet.
Most of the new users came from the America Online (AOL) private service and were known as "AOLers".
The AOLers didn't know which aspects of the service as they saw it were due to the AOL custom client software, which were due to the AOL local server, which were due to the newsgroup (forum) they were looking at, and which were due to the global Usenet consensus. So when they had a problem, they didn't know where to address that problem. They complained on public newsgroups about UI issues with their local client, because they didn't know what was what.
And the existing users didn't have the time or capacity to help them. The AOLers were added to Usenet en masse without preparation. Nobody had signed up to help them. The AOLers were accustomed to AOL chat rooms that had staff helpers and moderators; most of Usenet did not have any — just regularly-posted FAQ documents, which the AOLers did not know to look for, and grouchy users who angrily told them to read the goddamn FAQ before posting.
Another consequence of the influx of new folks was that Usenet suddenly just had a lot more people. This made it a tasty target for commercial spammers and other abusers, which led to the eventual spampocalypse and a lot of people abandoning Usenet for web forums or other services.
It wasn't long into Eternal September that the hardcore abusers showed up, though. That, I think, is the harder problem to deal with.
"Good" Usenet servers did not reliably disconnect themselves from the servers that were accepting and forwarding spam. It was not generally acknowledged that a good server needs to block bad servers: the free-speech ideal was assumed to mean "accept anything from anyone; let the client decide what to filter out" — which meant that new users who had not written any filters necessarily saw all the spam.
And because nothing was secured by strong encryption, forgery was rampant; with a little cleverness, anyone could pretend to be anyone from any server.
There were many, many efforts to fix the spam problem. Unfortunately, as things turned out, none of it was enough. Eventually folks noticed that the NNTP facility offered by their ISPs was a great means for sharing pirated porn ....
I think it's important to enable account portability across instances, like what Mastodon has. It should be easy for people to move to a different community, back up their data so they can re-substantiate their known persona if their instance goes poof, etc.
When the Eternal September comes, which it will, how does a Lemmy instance deal with bad actors?
i'll bully them away >:3 !!!
On the real, I feel like Lemmy/the wider linkagg fediverse will probably be good at self-moderating, somewhat like other fediverse software's communities are. It'll probably be easier for admins to notice bad actors on their instance than it was for site admins on Reddit, because the admin-to-user ratio here will probably be better, even if things are kinda concentrated on lemmy.ml, lemmy.world and beehaw right now (people will probably spread out as they get a grip on how things work). The average user will probably grow a stronger connection with their instance admins for that reason too, which makes it easier to address this kind of thing since more people will be able to comfortably contact their admins directly. And if said bad actor is from another instance, and the admins of that instance refuse to deal with them, there's always community-level bans (I think, anyway? I'm still not familiar with the comm mod tools) and, if more drastic measures are needed, defederation.
Individual instances will have to moderate themselves. If they become chaotic, other instances should unfederate them. But as users, you should also subscribe to communities you think are behaving well and block users/communities that are not.
Also, I have seen some users who are "grabbing" as many communities as possible, namely @Hurts@lemmy.world. Dude is moderating 60 communities on an instance that started a few days ago. He is not building the communities, he is just power tripping, it seems. @ruud@lemmy.world, something might have to be done about that in the future. I suggest some sort of "requestcommunity" process, in which you can apply to become the mod of a community if it is being badly run (or not run at all).
Ban them. Honestly if it's egregious the admin staff takes care of it. If it's just some asshattery then the mods of the communities are left to deal with it.
Hopefully all the assholes are attracted to one shitty instance and then that instance gets defederated.
Srsly tho, the assholes are kind of a part of the whole experience, but I think the people being drawn over here right now are not really the asshole type, at least so far.
I know it is part of the Fediverse, but I wish bots were not a thing, or at least not allowed. I know they are not "assholes", but I just think they take away from having real human connections.
I think we just collectively need to learn how to act better.
Choose not to respond when people are aggressively one-sided; you won't be changing their minds.
We cannot control assholes or trolls, but we can control our behaviors. Stay kind as long as possible, disengage when you can't. Don't let these idiots turn YOU into an asshole.
We'll live and see. Meta is showing interest in Mastodon, so we have reason to worry. But I think Lemmy will adapt to the situation when it actually arises, not before.
Is there an equivalent of "going dark" on Lemmy? Like, if there is some "global" or "fediverse" issue that communities want to protest, is there the same option that communities back on Reddit are using right now?
We still have voting, mods, and admins. Mods can take action for bad content, admins can take action for chronic offenders, or if mods aren't taking care of a community. And worst case scenario, if an instance is causing trouble as a whole, it can be defederated.
Down the line, I think we'll see spam lists to help deal with people creating lots of spam instances, like email has.
idea: let each instance have a prepopulated blocklist
let the admins of each instance have a list of blocked users that gets inherited by members of that instance, but let users remove from that list as well as add to it, to avoid abuse. and don't hide the comments from these users, just collapse them, to let people know a comment has been hidden in case of mistakes
(possibly even allow regex to catch names like RandomWord1234, which were common on reddit; see the sketch below)
this is a rather extreme tactic though, only for if spam becomes overwhelming
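To make the "inherited blocklist plus regex" idea a bit more concrete, here's a rough Python sketch. None of this is Lemmy's actual code or API; the names (AUTOGEN_NAME, INSTANCE_BLOCKLIST, effective_blocklist, render_comment) and the blocklist entries are made up purely for illustration:

```python
import re

# Purely illustrative sketch; not Lemmy's real code or data model.

# Catch autogenerated "RandomWord1234"-style names: two capitalized
# words glued together, followed by at least two digits.
AUTOGEN_NAME = re.compile(r"^[A-Z][a-z]+[A-Z][a-z]+\d{2,}$")

# Instance-wide default blocklist maintained by the admins (made-up entries).
INSTANCE_BLOCKLIST = {"spammer@bad.example", "flooder@bad.example"}

def effective_blocklist(user_removed: set[str], user_added: set[str]) -> set[str]:
    """Each user inherits the instance list, but can remove entries
    (so admins can't abuse it) or add their own on top."""
    return (INSTANCE_BLOCKLIST - user_removed) | user_added

def render_comment(author: str, blocklist: set[str]) -> str:
    """Collapse rather than hide, so mistakes stay visible and reversible."""
    local_name = author.split("@")[0]
    if author in blocklist or AUTOGEN_NAME.match(local_name):
        return f"[comment from {author} collapsed - click to expand]"
    return f"comment from {author}: ..."

# Example: a user who trusts one of the blocked accounts and blocks another.
mine = effective_blocklist(user_removed={"flooder@bad.example"},
                           user_added={"RandomWord1234@spam.example"})
print(render_comment("RandomWord1234@spam.example", mine))
```

The collapse-instead-of-hide part is the important design choice here: anyone caught by mistake stays one click away instead of silently disappearing.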