“We have done this in the past for quarantined communities and found that it did help to reduce exposure to bad content, so we are experimenting with this sitewide,” according to the main post. Reddit “may consider” expanding the warnings in the future to cover repeated upvotes of other kinds of actions as well as taking other types of actions in addition to warnings.
Thoughtcrime time.
Bigger picture - what if Xitter, Meta and Reddit (all run by Trump humpers) started centrally compiling this kind of thing to flag up "persons of interest"?
What they're not telling people: an AI indiscriminately scanned your comment for keywords and issued the warning automatically. The same thing happened to me on another sub, except the mods got involved too, so it was a ban. Reddit's rules are vague AF.
The thing is, these recent ban waves have been going after the low-hanging fruit: small accounts and small advertisers, not the problematic ones like state-sponsored political troll accounts, both Russian and American, at least not in large numbers, even though we know those represent a large share of the site's traffic. Many accounts posting articles on Reddit have been flagged as bots too.
Honestly I wouldn't be surprised if this started happening at Lemmy too. It's a lot easier to control what kind of content is on a platform when you do something like this.
Now, I don't think this is a good idea, though I can see the appeal of it. People have the freedom to upvote whatever they choose, even if I think they're dumb for doing it, and they shouldn't have to worry about anyone other than law enforcement or lawyers (in extreme edge cases) using that information against them.