Captchas are already pretty weak at combating bots; it's why reCAPTCHA and the others were invented. The people who run bots spend lots of money for their bots to... bot. They have access to quite advanced modules for decoding captchas, and they also pay kids in India and Africa pennies to just create accounts on websites.
I am not saying captchas are completely useless; they do block the lowest-hanging fruit right now, that being most of the script kiddies.
Email domain filters.
Issue number one has already been covered above/below by others: you can use a single Gmail account to register a basically unlimited number of accounts (see the sketch below the second issue).
Issue number two: spammers LOVE to use Office 365 for spamming. Most of the spam I find actually comes from *.onmicrosoft.com inboxes. It's quick for them to spin one up on a trial, and by the time the trial is over, they have moved on to another inbox.
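To make issue number one concrete: Gmail ignores dots in the local part and everything after a plus sign, so countless address variants all deliver to the same inbox, and a uniqueness check on the raw address catches none of them. A rough sketch of the normalization a signup filter would need; this is hypothetical, not something Lemmy does today:

```sql
-- Hypothetical pre-check normalization: strip the +tag, drop the dots, lowercase.
-- John.Doe@gmail.com, johndoe+lemmy1@gmail.com and j.o.h.n.doe+42@gmail.com
-- are all the same inbox.
SELECT lower(
         replace(
           split_part(split_part('John.Doe+lemmy42@gmail.com', '@', 1), '+', 1),
           '.', ''
         )
       ) || '@gmail.com' AS canonical_email;
-- => johndoe@gmail.com
```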
Autoblocking federation for servers that don't follow the two broken rules above
This is how you destroy the platform. When you block legitimate users, those users will think the platform is broken, because none of their comments go through and they can't see posts properly.
They don't know it's because admins have defederated from their server; all they see is broken content.
At this time, your best option is admin approvals, combined with keeping tabs on your users.
If you notice an instance is harboring spammers, take my instance as an example: I have my contact information right in the sidebar. If you notice spam, WORK WITH US, and we will help resolve the issue.
I review my reports. I review spam on my instance. None of us are going to be perfect.
There are very intelligent people who make lots of money creating "bots" and "spam". NOBODY is going to stop all of it.
The only way to resolve this is to work together, identify problems, and take action.
Nuking every server that doesn't have captcha enabled is just going to piss off users and ruin this movement.
One possible thing that might help is an easy listing of the registered users on a server. I noticed that isn't actually easily accessible without hitting the REST APIs or querying the database directly.
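For what it's worth, if you have database access, something along these lines gives a quick list. This is a sketch from memory of Lemmy's schema, so table and column names may differ between versions:

```sql
-- Sketch: recently registered local accounts, straight from the Lemmy database.
-- Assumes person / local_user tables as remembered; verify against your version.
SELECT p.name,
       p.published AS registered_at,
       lu.email,
       lu.email_verified
FROM person p
JOIN local_user lu ON lu.person_id = p.id
WHERE p.local = true
ORDER BY p.published DESC
LIMIT 50;
```

A proper admin page would obviously be nicer, but this at least lets you eyeball a sudden wave of signups.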
First of all: I'm posting this from my .ml alt because I can't do it from my .world main. I only found out I couldn't because I was waiting for a response on a comment where I was sure the OP would reply. After searching, I found out that my comment and my DMs were never federated to .ml.
So, that said: I'm all for defederating bad instances, and I'm all for separation where it makes sense. BUT:
If an instance is listed on join-lemmy, things should work the way a normal user would expect
We are not ready for this yet. We are missing features (more details below)
Even instances that officially require applications can be spam instances (admins can do whatever they want), so we would need protection against this anyway. Hell, one could just implement spam bots that speak the federation protocol directly and wouldn't even need Lemmy for this...
Minimal features we need:
Show users that the community they're trying to interact with is on a server that has defederated from the user's instance
Forbid sending DMs to servers that are not fully federated
Currently, all we do is make Lemmy look broken.
And before someone starts with "Then help!": I do, in my field of expertise. I'm a PostgreSQL professional, so I have built a setup to measure Lemmy's SQL performance and usage patterns, and I will contribute everything I can to make Lemmy better.
(I tried Rust, but I'm too much of a C++ guy to bring anything useful to the table beyond database stuff, sorry :( )
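For anyone wanting to try the same kind of measurement themselves, pg_stat_statements is the standard building block and gets you a long way. This is a generic example, not my exact setup; the column names are the PostgreSQL 13+ ones (older versions use total_time / mean_time):

```sql
-- Needs shared_preload_libraries = 'pg_stat_statements' in postgresql.conf,
-- then enable the extension once per database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Which queries eat the most total execution time:
SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       left(query, 80)                    AS query_start
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```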
Everyone is talking about how these things won't work, and they're right: they won't work 100% of the time.
However, they work 80-90% of the time and help keep the numbers under control. Most importantly, they're available now. This keeps Lemmy from being a known easy target. It gives us some time to come up with a better solution.
This will take some time to sort out. Take care of the low hanging fruit first.
Look up the origins of IRC's EFNet, which was created specifically to exclude a server that allowed too-easy federation and thus became an abuse magnet.
Auto-block federation from servers that don't respect the above.
NO! Do NOT defederate based on how an instance chooses to operate internally. It is not your concern. You should only defederate if that instance causes you repeated trouble. Do not issue pre-emptive blanket blocks.
Isn't this what all you lemmy-worlders got mad at Beehaw for doing? I don't think it's unreasonable to ask for a small statement from people as an anti-spam measure (a sort of advanced captcha), though of course the big problem there is reviewing all the applications in a timely manner. Still, I think there's room for both more and less exclusive instances. The tools are there for instance owners to protect their instances however they choose.
Please stop trying to tell me how to run my instance. If I wanted your input or your rules I would have joined your instance.
If you have a problem with my instance, you're within your rights to defederate from or block me. But I do not care about your plea to enable shit that I don't want to enable.
What I will say is: congrats! You've shown that you're willing to bot-manipulate your post. That earns a ban from my instance! That's the glory of the Fediverse.
For larger instances, this makes sense. For us smaller instances, just add a custom application question that isn't about Reddit. Though I'll be adding captcha too if they keep at it (every hour, two bots apply).
I've seen bots trying to create accounts; it's the same boring message about needing a new home because of "random reason about reddit". I'll borrow a quote from Mr. Samuel Jackson: "I don't remember asking you a god damn thing about reddit"... and the application is denied.
Mine got blown up a day or two ago, before I had enabled captcha. About 100 accounts were created before I started getting rate-limited (or something similar) by Google.
Better admin tools are definitely needed to handle the scale. We need a single pane of glass to see signups and other user details. Hopefully it's in the works.
We need a distributed, decentralized, curated whitelist that new servers apply to join, with a quick response (a week max) after some kind of precisely defined anti-spam/anti-bot audit, plus periodic re-checks of existing servers.
Like crypto has a confirmed transaction ledger, there would be some kind of not-a-bot confirmation ledger chain.
The weak side: if bot servers somehow get onto the whitelist in enough numbers, they can poison it.
Mind you, this whitelist chain has nothing to do with the content itself, just whether it comes from an AI or a human.
Just saw a WAVE of bot art flow down the "Top New" feed. It then promptly stopped. And then when I reloaded the page, it was gone. So I think it's working...
['Man vs. Giant' - Dramatic artwork depicting 'Yhorm the Giant' from 'Dark Souls 3' towering over the protagonist from 'Dark Souls 3', 'Ashen One'. The giant figure holds a massive sword that is planted in the ground with both hands, while the comparatively tiny 'Ashen One' holds a regular-sized sword in his right hand and adopts a fighting stance. Text placed over the stomach of the giant character, and over the smaller protagonist figure, reads as follows]
BOTS
LEMMY
^I'm a human volunteer transcribing posts in a format compatible with screen readers, for blind and visually impaired users!^
Is it possible to require an authentication app or something to make an account? Require a specific score on a Flash game like Snake? Or is that stupid? I don't know, I'm not a dev ¯\_(ツ)_/¯