Yeah I agree but I think AI will soon enough become accessible for community made tools as well. This will be a boon for moderators but may also create new challenges in accountability and democracy in online spaces.
The problem is that community made tools were severely restricted by Reddit pulling API access.
A lot of the old moderators left. Reddit itself has shown that it does not want to create any sort of legitimacy or democracy in the moderation process because that legitimacy could override the admins' legitimacy.
I agree completely but I think this discussion is relevant for Lemmy as well which has to some extent copied the same structure. While we do have more choice in terms of which admins we want to be under, the fundamental structure and tools are not that different.
One thing to mention is that Lemmy separated the roles of admin and developer. You have a set of admins with far greater latitude to act, but the developer level seems ill-equipped to address these issues, and it appears just as cloistered as the Reddit admin level.
AI mods could be much better for cases like this. Imagine the mod responding immediately when you try to post something! If it's a mistake, you could fix it; if not, it may never even see the light of day.
Taking this post at face value, an AI mod could have acted immediately, before the BBC changed their headline, so the OP would not have been banned for this reason.
I'm actually testing this right now. So far I'm just letting it analyze whether swear words in the text are meant in an offensive way; if so, it reports the comment to me.
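For anyone curious, the triage step described above could be sketched roughly like this in Python. Everything here is illustrative: the word list, the `is_offensive` heuristic, and the function names are placeholders, and in a real setup the classification step would presumably be delegated to an LLM rather than a regex heuristic.

```python
# Minimal sketch of a swear-word triage step for an auto-moderator.
# SWEAR_WORDS and is_offensive() are illustrative stand-ins; a real bot
# would likely hand the flagged comment to an LLM for classification.
import re

SWEAR_WORDS = {"damn", "hell"}  # placeholder list

def contains_swearing(text: str) -> list[str]:
    """Return the swear words found in the comment, if any."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if w in SWEAR_WORDS]

def is_offensive(text: str, hits: list[str]) -> bool:
    """Crude stand-in for the LLM call: flag swearing aimed at a person."""
    directed = re.search(r"\byou\b|\byour\b", text.lower())
    return bool(hits) and directed is not None

def triage(comment: str) -> str:
    """Decide whether a comment should be reported to the human mod."""
    hits = contains_swearing(comment)
    if hits and is_offensive(comment, hits):
        return "report"  # swearing aimed at someone: escalate
    return "pass"        # clean, or swearing used non-offensively

print(triage("That game was damn good"))  # prints "pass"
print(triage("You are a damn fool"))      # prints "report"
```

The point of the two-stage shape is cost: the cheap keyword check runs on every comment, and only the small fraction that contains swearing at all would ever reach the expensive classification step.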