From my experience modding on Reddit, it's generally a good idea not to engage with spam comments at all. Just downvote and report them. The ideal outcome is that they silently disappear once the mods get to them, and if they've been otherwise ignored, removal is trivial. If there's a comment chain underneath, it takes more thought and gets messier: it means either removing fine comments from other users or removing the context for those comments.
Same goes for human trolls. Downvoting is good, but once you engage with them, removal gets more tedious, especially since troll threads tend to spiral out of control. Modding is done by volunteers; make it easy for them, especially since responding to these things usually has very little value. Obvious spam and trolling is obvious to everyone, and the downvotes signal to others not to take it seriously.
If you are concerned, please flag it. We'll look into it as soon as possible. Obviously, we don't have round-the-clock coverage. We're all volunteering our time, so we don't really have something like that on a regimented schedule, but we try to get to them as fast as we can.
based and the correct choice. calling people bots has just become a new way to dehumanize them, and it’s never appropriate. just report it if you are genuinely concerned about bot activity; everything else is just nasty.
I think public call-outs of suspicious behavior are the only real, ongoing way to teach new or under-informed users what bots and disinformation actors (ESPECIALLY these) sound like. I don’t remember the last time I personally called out someone I thought was a paid/malicious account or a bot… maybe never on Lemmy. But despite the incivility, I truly believe the publicity of these comments is good for building a resilient community.
I’ve been on forums and aggregators similar to Lemmy for a long time, and I think I have a pretty good radar for suspicious account behavior. Reading occasional accusations from within your community helps you think critically about what’s being espoused in the thread, what the motivations of different users are, and whether to believe or disbelieve the accuser.
Yes, sometimes it’s used as a personal attack. But it’s better to have it out in the open so that the reality of online discourse (extremely frequent attempted manipulation of opinions) is clear to everyone, and the community can respond positively or negatively to it and organically support users that are likely victims.
You must have missed my point, which was entirely about education of new and under-informed users. Reporting is invisible and does not have that benefit.
Valid point, but leaving things as-is doesn’t seem like the optimal solution. Maybe the mods could occasionally post examples of removed spam/bot content, for transparency and awareness. Leaving this to random users can lead to more mistakes and actual abuse.
Also, the troll/bot comments and the discussion around them will be less disturbing outside their intended context (where they were posted to cause disruption or spread misinformation).
That’s a very interesting suggestion and I’d love to see it done, actually, regardless of what I’m about to write.
The problem is that mods aren’t bot sweepers or disinformation sniffers. They’re just regular people… and there are relatively few of them. They probably have, on average, a better radar than most users, but when it comes to malicious actors they aren’t going to be perfect. More importantly, they have a finite amount of time and effort they can put into moderation. It’s way better to organically crowd-source these kinds of things if it’s possible, and the kind of community Lemmy has makes it possible.
Banning these comments leaves the community susceptible to all kinds of manipulation, especially in the run-up to a US election (let alone this one). The benefit of banning them is comparatively minimal: effectively removing one type of ad hominem attack from arguments that have always featured ad hominem attacks in one form or another.