Dynamic Trust-Level Architecture with Automatic Moderator Allocation

Proof-of-Concept: automatic, arbitrary-count trust levels with moderator level placement

Below I deliver a clear, implementable proof-of-concept for a moderation/trust-level system that:

  • supports an arbitrary number of trust levels (computed automatically),
  • keeps each level approximately the same size (constant users per level),
  • allows you to declare where moderators sit (e.g., second-highest level),
  • lets you target a moderator density (e.g., about one moderator per 5–10 users),
  • assigns users to levels by trust score (users are ranked by score descending; higher-ranked users land in higher levels),
  • and includes practical operational rules (hysteresis, manual locks, small-site adjustments).

I reference Discourse’s trust-levels design as background (for policy ideas and permission structure). (Discourse)


Concepts & parameters

Inputs (configurable):

  • N — number of active users considered for automatic leveling (integer).
  • score(u) — computed trust score for each user u (continuous value). This is the same concept as Discourse’s activity/score that drives TLs; we use it to rank users. (Discourse)
  • mod_density — desired users per moderator (a single target), or a range [min_upm, max_upm] (e.g., [5,10]).
  • mod_level_from_top — which level will hold moderators relative to top (1 = top, 2 = second-highest, etc.). For the user’s example, mod_level_from_top = 2.
  • min_levels — minimum number of levels allowed (safeguard; default 3).
  • reserve_top — number of topmost “staff” slots not auto-assigned (optional).
  • stability_window_days and grace_period_days — for hysteresis to avoid frequent promotion/demotion.
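
For concreteness, here is one way the inputs above could be grouped into a configuration object. This is only an illustrative sketch; the field names and defaults are assumptions, not a fixed API.

python

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TrustLevelConfig:
    # illustrative grouping of the configurable inputs; names and defaults are assumptions
    mod_level_from_top: int = 2                  # 1 = top level, 2 = second-highest, ...
    min_levels: int = 3                          # safeguard: never fewer than this many levels
    reserve_top: int = 0                         # topmost "staff" slots not auto-assigned
    target_upm: Optional[float] = None           # single users-per-moderator target
    upm_range: Optional[Tuple[int, int]] = None  # e.g. (5, 10)
    stability_window_days: int = 14              # score must sit across a boundary this long
    grace_period_days: int = 30                  # no demotion this soon after a promotion

# example: "about one moderator per 5-10 users", moderators in the second-highest level
config = TrustLevelConfig(mod_level_from_top=2, upm_range=(5, 10))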

Design goals:

  • Partition the ranked user list into K buckets (trust levels) so that bucket sizes are as equal as possible. The bucket mod_level_from_top positions from the top (index mod_level_from_top - 1, with bucket 0 being the highest) becomes the moderator bucket; with mod_level_from_top = 2 this is the second-highest bucket.
  • Choose K so that the moderator bucket size produces an actual moderator count consistent with the requested mod_density (or within the provided range).

High-level algorithm

  1. Compute N = number of active users (filter by active flag/time window).
  2. If the user provided a range [min_upm, max_upm], choose a target t_upm = round((min_upm + max_upm)/2) (the midpoint; any other policy works too).
  3. Compute target number of moderators: M_target = ceil(N / t_upm) (this is how many moderators we want).
  4. Compute number of levels K so that bucket size ≈ M_target for the moderator bucket: K = max(min_levels, round(N / M_target)) (rounding chosen to keep K integer; you can prefer floor/ceil depending on bias).
  5. Compute equal bucket sizes: integer division and distribute remainder r = N % K to the topmost r buckets (ensures sizes differ by at most 1). Let S[i] be size of bucket i, i = 0..K-1 (0 = highest).
  6. Sort users descending by score(u). Assign the top S[0] users to bucket 0 (highest), next S[1] to bucket 1, etc.
  7. Determine the moderator bucket index: mod_idx = mod_level_from_top - 1 (bucket 0 is the highest, so mod_level_from_top = 2 gives mod_idx = 1). Users in bucket mod_idx are flagged as moderator candidates (auto-mods). If you want actual moderator privileges granted automatically, grant them; otherwise mark them for review.
  8. Apply operational constraints:
    • Respect manual locks: any user with a manually set trust level is excluded from auto changes.
    • Hysteresis: only change a user’s level if their score crosses the bucket boundary for at least stability_window_days, or apply a grace_period_days after promotion before demotion is allowed.
    • Minimum/maximum counts: if S[mod_idx] is outside the requested [min_upm, max_upm] constraints, adjust K up or down in a loop until the constraint is satisfied (see variant below).
  9. Persist new levels and trigger any permission changes.

This produces approximately equal-sized trust levels and places moderators in the requested position.


Pseudocode (readable, implementable)

 python
    
import math

def compute_levels(users, score_fn, min_levels=3, mod_level_from_top=2,
                   min_upm=None, max_upm=None, target_upm=None,
                   reserve_top=0):
    # users: list of user objects active for consideration
    N = len(users)
    if N == 0:
        return []

    # pick target users-per-moderator
    if target_upm:
        t_upm = target_upm
    elif min_upm and max_upm:
        t_upm = round((min_upm + max_upm) / 2.0)  # midpoint of the range, rounded (step 2)
    elif min_upm:
        t_upm = min_upm
    else:
        t_upm = 10  # default

    M_target = max(1, math.ceil(N / t_upm))   # desired moderator count

    # derive K from target moderators
    K = max(min_levels, max(1, round(N / M_target)))

    # optional: guarantee there are more levels than reserved (manually filled) top slots
    if reserve_top >= K:
        K = reserve_top + 1

    # compute bucket sizes (top buckets get +1 if remainder)
    base = N // K
    rem = N % K
    sizes = [base + (1 if i < rem else 0) for i in range(K)]  # i=0 highest

    # sort users by score descending
    users_sorted = sorted(users, key=score_fn, reverse=True)

    # assign buckets
    buckets = []
    idx = 0
    for size in sizes:
        buckets.append(users_sorted[idx: idx+size])
        idx += size

    mod_idx = min(mod_level_from_top - 1, K - 1)  # bucket 0 = highest, so second-highest is index 1
    moderators = buckets[mod_idx]

    return {
        "K": K,
        "sizes": sizes,
        "buckets": buckets,
        "moderator_bucket_index": mod_idx,
        "moderators": moderators
    }

  

Complexity: sorting dominates at O(N log N); assignment is O(N).
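
A minimal usage sketch of compute_levels, assuming each user is a simple object with a score attribute (SimpleNamespace here is just a stand-in for your user model). The numbers in the comments follow from N = 100 and the 5–10 range, matching the numeric example in the next section.

python

import random
from types import SimpleNamespace

random.seed(1)
# 100 fake active users with arbitrary trust scores (illustrative data only)
users = [SimpleNamespace(id=i, score=random.random()) for i in range(100)]

result = compute_levels(users, score_fn=lambda u: u.score,
                        min_levels=3, mod_level_from_top=2,
                        min_upm=5, max_upm=10)

print(result["K"], result["sizes"])   # 8, [13, 13, 13, 13, 12, 12, 12, 12]
print(len(result["moderators"]))      # 13 moderator candidates (~7.7 users per moderator)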


Numeric example

  • N = 100 active users.
  • User asks “one moderator per 5–10 users” → [min_upm, max_upm] = [5, 10], so t_upm = (5 + 10) / 2 = 7.5, rounded to 8.
  • M_target = ceil(100 / 8) = 13 moderators desired.
  • K = round(100 / 13) = 8 levels.
  • Bucket sizes: 100 // 8 = 12 with remainder 4 → sizes [13, 13, 13, 13, 12, 12, 12, 12] (the topmost 4 buckets get +1).
  • If mod_level_from_top = 2, mod_idx = 2 - 1 = 1 (0-based, counting from the top). Bucket 1 has 13 users ⇒ we’ll have 13 moderators. That yields roughly 100 / 13 ≈ 7.7 users per moderator, inside the 5–10 target.

You can tweak the rounding policy to bias up or down if you prefer more or fewer moderators.


Mapping trust score → level thresholds (percentiles)

Instead of explicitly tagging users by bucket index, you can precompute score thresholds for each bucket so that future new/returning users can be assigned by score without reassigning the whole population every cycle.

  • After sorting, record threshold[i] = the score of the last (lowest-scoring) user assigned to bucket i, i.e., the score at each cumulative bucket boundary.
  • A new user with score(u) is assigned to the highest bucket (smallest index i) such that score(u) >= threshold[i]; users below every threshold go to the lowest bucket.
  • Recompute thresholds periodically (daily/hourly) rather than immediate for each user action.

This is effectively mapping trust scores to percentiles.
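
A sketch of how those thresholds could be derived and applied, reusing the buckets structure and user objects from the usage sketch above (helper names are illustrative):

python

def compute_thresholds(buckets, score_fn):
    # threshold[i] = lowest score currently present in bucket i (bucket 0 = highest level)
    return [min(score_fn(u) for u in bucket) for bucket in buckets if bucket]

def assign_by_threshold(score, thresholds):
    # thresholds are in descending order; return the highest bucket (smallest index)
    # whose threshold the score meets, else the lowest bucket
    for i, t in enumerate(thresholds):
        if score >= t:
            return i
    return len(thresholds) - 1

# example: place a newly active user without re-ranking the whole population
thresholds = compute_thresholds(result["buckets"], lambda u: u.score)
new_user_level = assign_by_threshold(0.42, thresholds)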


Operational considerations, safety and UX

  1. Hysteresis / grace windows: prevent churn by requiring that a user’s score stays on the other side of a boundary for X days before auto-demotion, and enforce a Y-day grace period after promotion during which demotion is suppressed (see the sketch after this list). Discourse uses similar protection for TL3 (you can lose TL3, but there is a grace window). (Discourse)
  2. Manual locks / manual promotions: allow admins to lock certain users to specific levels (e.g., staff, community leaders). Do not override manual locks with automatic assignment. Discourse allows manual promotion to TL4. (Discourse Meta)
  3. Small communities: for small N the system should ensure sensible behavior (e.g., minimum K and not giving moderator powers to too many people). E.g., require N >= threshold before enabling automatic moderator bucket; otherwise require manual appointment.
  4. Permission model: separate the “moderator candidate” bucket from actual moderator privileges if you want human review. Alternatively, map bucket → permission template (e.g., bucket 0 = staff/admins, bucket 1 = moderators, lower buckets = regular users with progressively fewer privileges). Discourse maps TLs to permission sets; reuse that pattern. (Discourse)
  5. Transparency: show users how many people are in each level and the approximate requirements, to avoid confusion. Discourse shows a summarized dashboard describing TLs. (Discourse)
  6. Abuse resistance: ensure trust score is robust to gaming (weight long-term activity, diverse actions, anti-spam heuristics).
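
A minimal sketch of the hysteresis check from point 1, assuming per-user fields such as current_level, promoted_at, below_boundary_since and manual_lock exist in your data model (these field names are assumptions, not an existing schema):

python

from datetime import timedelta

def allow_level_change(user, proposed_level, now,
                       stability_window_days=14, grace_period_days=30):
    # manually locked users are never changed automatically (point 2)
    if user.manual_lock or proposed_level == user.current_level:
        return False
    is_demotion = proposed_level > user.current_level   # higher index = lower trust (0 = top)
    if not is_demotion:
        return True                                     # promotions can apply immediately
    # demotions are suppressed during the post-promotion grace period ...
    if user.promoted_at and now - user.promoted_at < timedelta(days=grace_period_days):
        return False
    # ... and require the score to have stayed below the boundary long enough
    since = user.below_boundary_since
    return since is not None and now - since >= timedelta(days=stability_window_days)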

Variant: strict range enforcement loop

If you require strict enforcement that users per moderator must be within [min_upm, max_upm], loop adjusting K:

 python
    
def compute_sizes(N, K):
    base, rem = divmod(N, K)
    return [base + (1 if i < rem else 0) for i in range(K)]  # topmost buckets get +1

chosen_K = None
for K_candidate in range(min_levels, N + 1):
    sizes = compute_sizes(N, K_candidate)
    mod_size = sizes[mod_level_from_top - 1]      # moderator bucket, counted from the top
    users_per_mod = N / mod_size
    if min_upm <= users_per_mod <= max_upm:
        chosen_K = K_candidate                    # prefer the smallest K meeting the range
        break

  

The loop runs at most N iterations and usually exits early; this is cheap for typical community sizes.


Implementation pointers

  • Data model: add trust_level integer to user table; optionally trust_score float and auto_assigned boolean.
  • Scheduler: run nightly job to recompute levels and thresholds.
  • API / admin UI: allow admins to preview new K and changes before commit; show counts per bucket and target users/mod values.
  • Rollout: start with read-only “candidate” mode where moderator candidates are listed for staff approval.
  • Metrics: monitor moderator workload (flags handled / replies / moderation actions) per moderator to tune mod_density.
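
To tie these pointers together, a preview-first recompute (the “candidate” rollout mode above) could look roughly like this. It reuses compute_levels and the illustrative TrustLevelConfig from earlier sketches, assumes the range form of the moderator-density setting, and leaves persistence and the admin UI out:

python

def preview_level_changes(active_users, config):
    # exclude manually locked users from automatic assignment
    auto_users = [u for u in active_users if not getattr(u, "manual_lock", False)]

    result = compute_levels(auto_users, score_fn=lambda u: u.score,
                            min_levels=config.min_levels,
                            mod_level_from_top=config.mod_level_from_top,
                            min_upm=config.upm_range[0], max_upm=config.upm_range[1])

    # propose (user, old_level, new_level) changes for staff review instead of committing
    changes = []
    for level, bucket in enumerate(result["buckets"]):
        for user in bucket:
            old = getattr(user, "trust_level", None)
            if old != level:
                changes.append((user, old, level))
    return result, changes

# a nightly job would call this, show the diff in the admin UI, and only persist on approval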

Integration notes for Discourse-style systems

  • Discourse uses 5 fixed trust levels and manual TL4. The proposed system generalizes this to K levels and keeps the spirit of promotion-by-activity. See Discourse’s explanation of TL behavior and manual promotions for reference. (Discourse)
  • Consider using the Discourse concept of lockable trust levels (manual lock) for special roles. (Discourse Meta)

Deliverables I can produce next (pick any)

  • Example production-ready code (Python + SQL) to compute K, thresholds, and update the DB.
  • A small test harness that simulates N users with scores and shows bucket assignments and sensitivity to parameters (includes graphs/tables).
  • A concise admin UI wireframe and API contract for preview/commit of changes.

Tell me which of the three you want and I’ll produce it directly (code + test or UI spec).
