If power corrupts, or power attracts the corrupt, why do we have moderators?

71 comments
  • I don't think that the type of power that a janny has is able to meaningfully corrupt the janny. At least, not in most cases, because it's practically no power; like it or not, your online community means nothing in the big picture.

    Instead, I think that bad moderators are the result of people with specific moral flaws (entitlement, assumptiveness, irrationality, lack of self-control, context illiteracy) simply showing those flaws as they interact with other people. They'd do it without the janny position; it's just that being a janny increases the harm that those trashy users cause.

    Why the alternatives to human moderation that you mentioned do not work:

    • Bots - content moderation requires understanding what humans convey through language and/or images within a context. Bots do not.
    • Voting - voting only works when you have crystal-clear rules on who is or isn't allowed to vote; otherwise the community will be subjected to external meddling.
    • Bots - content moderation requires understanding what humans convey through language and/or images within a context. Bots do not.

      so, like. bots are programmed by people. all they really do is put a buffer between the actions of a moderator and the (real) moderators.

      • The origin (being programmed by people) doesn't matter; what matters are the capabilities. Not even current state-of-the-art LLMs understand human language on a discursive level, and yet that is necessary if you want to moderate the content produced by human beings.

        (inb4: a few people don't understand it either. Those should not be moderators.)

        all they really do is put a buffer between the actions of a moderator [user? otherwise the sentence doesn't make sense] and the (real) moderators.

        Using them as a buffer would be fine, but sometimes bots are used to replace the actions of human moderators - this is a shitty practice bound to create a lot of false positives (legit content and users being removed) and false negatives (shitty users and content being left alone). Reddit is a good example of that - there's always some fuckhead mod coding automod to remove posts based on individual keywords, and never checking the mod logs for false positives.
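To illustrate the point above, here is a minimal sketch of keyword-based auto-removal (the keyword list and function are hypothetical examples, not Reddit's actual AutoModerator config), showing how naive substring matching produces both kinds of errors:

```python
# Hypothetical keyword-based automod sketch: remove any post whose
# text contains a banned keyword as a substring.
BANNED_KEYWORDS = ["scam", "spam"]

def automod_remove(post_text: str) -> bool:
    """Return True if the post would be auto-removed."""
    text = post_text.lower()
    return any(word in text for word in BANNED_KEYWORDS)

# False positive: legit content removed because of a substring match
# ("scampi" contains "scam").
print(automod_remove("How do I report a scampi recipe gone wrong?"))  # True

# False negative: obvious rule-breaking phrased without the exact keyword.
print(automod_remove("Send me $5 and I'll triple it back, trust me"))  # False
```

Without a human checking the mod logs, the first post stays removed and the second stays up, which is exactly the failure mode described above.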

        • even if that hypothetical AI could understand human language - and you're right - it's coded by people, and its actions will be predicated on what those people coded it to do.

          Meaning that the AI gets its sense of appropriate from those people. Which means those people might as well be modding it, or be seen as the mods. bots are all-too-frequently used to insulate the people making the decisions as to what should be moderated from those actions. in the case of reddit automod bot yeeting content based on included words... most of that is stupid, I agree, but then it's those mods' community.

          • Now I get your point. You're right - the AI in question will inherit the biases and worldviews of the people coding it, effectively acting as their proxy. IMO, for this reason, the bot's actions should be seen as the moral responsibility of those people (i.e. instead of "the bot did it", it's more like "I did it through the bot").

            in the case of reddit automod bot yeeting content based on included words… most of that is stupid, I agree, but then it's those mods' community.

            Even if we see the comm as belonging to the mod, it's still a shitty approach that IMO should be avoided, for the sake of the health of the community. You don't want people breaking the rules by evading the automod (it's too easy to do), but you also don't want content being needlessly removed.

            Plus, personally, I don't see a community as "the mod's". It's more like "the users'". The mods are there enforcing the rules, sure, but the community belongs as much to them as it belongs to everyone else, you know?
