Add an `exclusion_status` enum (`ExclusionStatuses`) to `LeaderboardEntry` and `MedalExclusionRecord`. The new settings allow for fine-tuned display rules and set up the deprecation of the `excluded` and `show_when_excluded` fields.
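For orientation, the status values referenced throughout this review can be sketched as a plain `IntEnum`. This is a sketch, not the actual source: the real model uses a Django integer-choices field, and the names and numeric values below are inferred from the comparisons quoted in the review comments (note the review itself uses both `EXCLUDE_PRIZE_AND_SHOW` and `EXCLUDE_PRIZE_ONLY` for status 1).

```python
from enum import IntEnum


class ExclusionStatuses(IntEnum):
    """Sketch only: names and values inferred from the review comments."""

    INCLUDE = 0                       # ranked, shown, prize-eligible
    EXCLUDE_PRIZE_AND_SHOW = 1        # ranked and shown, but excluded from prizes
    EXCLUDE_AND_SHOW = 2              # shown (flagged), not ranked, no prize
    EXCLUDE_AND_SHOW_IN_ADVANCED = 3  # shown only in the advanced view
    EXCLUDE = 4                       # hidden entirely


# The ordering is what makes threshold filters like
# exclusion_status__lte=EXCLUDE_PRIZE_AND_SHOW select the rank-eligible rows:
rank_eligible = [
    s.name for s in ExclusionStatuses
    if s <= ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW
]
```

The ordered values are what the `__lte` filters discussed below rely on: stricter exclusion means a larger number.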
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your review settings.
📝 Walkthrough

Replaces boolean exclusion flags with a new enum-based `exclusion_status` field.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as Client (Browser)
    participant FE as Frontend components
    participant API as Server (Django views / serializers)
    participant Utils as scoring.utils (assign_exclusions_/assign_ranks_)
    participant DB as Database (models)
    User->>FE: Request leaderboard page
    FE->>API: GET /api/leaderboard
    API->>DB: Query LeaderboardEntry (filter by exclusion_status)
    DB-->>API: Entries with exclusion_status
    API->>Utils: process entries (assign_exclusions_ → assign_ranks_/assign_prizes_)
    Utils-->>API: processed entries (ranks, prize%, exclusion flags)
    API->>FE: JSON response (includes exclusion_status)
    FE->>FE: Filter/render rows (hide/show per exclusion_status & user staff)
    FE-->>User: Rendered leaderboard
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ Passed checks (2 passed)
🚀 Preview Environment: Your preview environment is ready!
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/table_row.tsx (1)
42-80: ⚠️ Potential issue | 🟠 Major

Show rank for `EXCLUDE_PRIZE_AND_SHOW` entries. Status (1) is supposed to take rank, but the current condition hides the rank for any non-`INCLUDE` entry, so prize-excluded entries lose their rank display.
🐛 Suggested fix
```diff
   const t = useTranslations();
+  const hidesRank =
+    exclusion_status === ExclusionStatuses.EXCLUDE_AND_SHOW ||
+    exclusion_status === ExclusionStatuses.EXCLUDE_AND_SHOW_IN_ADVANCED ||
+    exclusion_status === ExclusionStatuses.EXCLUDE;
   const highlight =
     user?.id === userId || exclusion_status !== ExclusionStatuses.INCLUDE;
@@
-      {exclusion_status !== ExclusionStatuses.INCLUDE ? (
+      {hidesRank ? (
         <>
           <ExcludedEntryTooltip />
         </>
       ) : (
```

front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/table_row.tsx (1)
70-96: ⚠️ Potential issue | 🟠 Major

Show rank for `EXCLUDE_PRIZE_AND_SHOW` entries. Status (1) should still take rank, but the current condition hides the rank for any non-`INCLUDE` entry.
🐛 Suggested fix
```diff
   const t = useTranslations();
+  const hidesRank =
+    exclusion_status === ExclusionStatuses.EXCLUDE_AND_SHOW ||
+    exclusion_status === ExclusionStatuses.EXCLUDE_AND_SHOW_IN_ADVANCED ||
+    exclusion_status === ExclusionStatuses.EXCLUDE;
@@
-      {exclusion_status != ExclusionStatuses.INCLUDE ? (
+      {hidesRank ? (
         <>
           <ExcludedEntryTooltip />
         </>
       ) : (
```

scoring/views.py (1)
250-255: ⚠️ Potential issue | 🟡 Minor

Total entries should align with rank-eligible statuses. `total_entries` currently counts entries up to `EXCLUDE_AND_SHOW_IN_ADVANCED`, which includes rows excluded from ranks. This can inflate totals. Consider counting only rank-eligible statuses.

🔧 Suggested fix
```diff
-            exclusion_status__lte=ExclusionStatuses.EXCLUDE_AND_SHOW_IN_ADVANCED,
+            exclusion_status__lte=ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW,
```

scoring/utils.py (1)
860-875: ⚠️ Potential issue | 🟡 Minor

Normalize `exclusion_status` when importing CSV. `row.get` returns strings/empties; passing these directly to an IntegerField can raise or store unexpected values. Consider converting to int with a safe default.

🔧 Suggested fix
```diff
-            exclusion_status = row.get("exclusion_status")
+            exclusion_status = row.get("exclusion_status")
+            exclusion_status = (
+                int(exclusion_status)
+                if exclusion_status not in (None, "")
+                else ExclusionStatuses.INCLUDE
+            )
 ...
-                exclusion_status=exclusion_status,
+                exclusion_status=exclusion_status,
```

scoring/management/commands/update_global_bot_leaderboard.py (1)
760-768: ⚠️ Potential issue | 🔴 Critical

Command must update the deprecated `excluded` field to maintain consistency with `exclusion_status`. The command only sets `exclusion_status` but leaves `excluded` and `show_when_excluded` at their default values (both False). At runtime, `scoring/utils.py:667` filters entries by `excluded=False` to calculate leaderboard statistics. Entries marked `exclusion_status=EXCLUDE` will still have `excluded=False`, causing them to be incorrectly included in stats calculations. Either update both deprecated fields here to match the `exclusion_status` logic, or refactor the statistics query to check `exclusion_status` instead of the deprecated fields.
🤖 Fix all issues with AI agents
In `@scoring/views.py`:
- Around line 71-78: The rank threshold calculation uses
entries.filter(exclusion_status=ExclusionStatuses.INCLUDE).count(), which omits
entries like ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW that still consume ranks;
update the count to include all rank-eligible statuses (e.g., replace the
.filter(...) call with a filter that includes both ExclusionStatuses.INCLUDE and
ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW or the inverse by excluding only
non-rank-eligible statuses), so the max(... * 0.05) uses the correct total;
adjust the expression around rank__lte=max(...) accordingly.
🧹 Nitpick comments (2)
scoring/models.py (1)
345-355: Consider indexing `exclusion_status` to replace the old `excluded` index.

If queries now filter on `exclusion_status`, the previous `db_index` on `excluded` won't help and could introduce a perf regression. Adding an index (and migration) keeps filtering fast.

♻️ Suggested adjustment
```diff
-    exclusion_status: ExclusionStatuses = models.IntegerField(
-        max_length=200,
+    exclusion_status: ExclusionStatuses = models.IntegerField(
+        db_index=True,
+        max_length=200,
         choices=ExclusionStatuses.choices,
         default=ExclusionStatuses.INCLUDE,
         help_text="""This sets the exclusion status of this entry.
```

front_end/src/types/scoring.ts (1)
52-68: Prefer `ExclusionStatuses` for `exclusion_status` type safety.

Now that the enum exists, using it directly prevents invalid values and keeps the type aligned with backend semantics.

♻️ Suggested refactor

```diff
-  exclusion_status: number;
+  exclusion_status: ExclusionStatuses;
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/index.tsx
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/table_row.tsx
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/table_row.tsx
- front_end/src/types/scoring.ts
- scoring/admin.py
- scoring/management/commands/update_global_bot_leaderboard.py
- scoring/migrations/0020_leaderboardentry_and_medalexclusionrecord_exclusion_status.py
- scoring/models.py
- scoring/serializers.py
- scoring/utils.py
- scoring/views.py
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-20T16:49:51.584Z
Learnt from: aseckin
Repo: Metaculus/metaculus PR: 4001
File: front_end/src/app/(futureeval)/futureeval/components/futureeval-leaderboard-table.tsx:15-22
Timestamp: 2026-01-20T16:49:51.584Z
Learning: The mockTranslate function in front_end/src/app/(futureeval)/futureeval/components/futureeval-leaderboard-table.tsx is an intentional workaround to satisfy the entryLabel function signature from shared AIB utils while keeping FutureEval translation-free.
Applied to files:
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/table_row.tsx
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/index.tsx
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/table_row.tsx
📚 Learning: 2026-01-15T19:29:58.940Z
Learnt from: hlbmtc
Repo: Metaculus/metaculus PR: 4075
File: authentication/urls.py:24-26
Timestamp: 2026-01-15T19:29:58.940Z
Learning: In this codebase, DRF is configured to use IsAuthenticated as the default in REST_FRAMEWORK['DEFAULT_PERMISSION_CLASSES'] within metaculus_web/settings.py. Therefore, explicit permission_classes([IsAuthenticated]) decorators are unnecessary on DRF views unless a view needs to override the default. When reviewing Python files, verify that views relying on the default are not redundantly decorated, and flag cases where permissions are being over-specified or when a non-default permission is explicitly required.
Applied to files:
- scoring/models.py
- scoring/admin.py
- scoring/serializers.py
- scoring/migrations/0020_leaderboardentry_and_medalexclusionrecord_exclusion_status.py
- scoring/management/commands/update_global_bot_leaderboard.py
- scoring/utils.py
- scoring/views.py
🧬 Code graph analysis (10)
front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/table_row.tsx (1)
scoring/models.py (1)
ExclusionStatuses(17-22)
front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/index.tsx (1)
scoring/models.py (1)
ExclusionStatuses(17-22)
scoring/models.py (1)
projects/permissions.py (1)
choices(10-15)
front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx (1)
scoring/models.py (1)
ExclusionStatuses(17-22)
scoring/migrations/0020_leaderboardentry_and_medalexclusionrecord_exclusion_status.py (1)
scoring/models.py (2)
LeaderboardEntry (314-380), MedalExclusionRecord (410-495)
scoring/management/commands/update_global_bot_leaderboard.py (2)
scoring/models.py (3)
Leaderboard (105-292), LeaderboardEntry (314-380), ExclusionStatuses (17-22)
front_end/src/types/scoring.ts (1)
LeaderboardEntry(60-75)
front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/table_row.tsx (1)
scoring/models.py (1)
ExclusionStatuses(17-22)
scoring/utils.py (2)
scoring/models.py (1)
ExclusionStatuses (17-22)
projects/models.py (1)
BotLeaderboardStatus(248-252)
front_end/src/types/scoring.ts (1)
scoring/models.py (2)
ExclusionStatuses (17-22), LeaderboardEntry (314-380)
scoring/views.py (2)
scoring/models.py (3)
Leaderboard (105-292), LeaderboardEntry (314-380), ExclusionStatuses (17-22)
front_end/src/types/scoring.ts (1)
LeaderboardEntry(60-75)
🪛 Ruff (0.14.14)
scoring/admin.py
[warning] 156-156: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
scoring/migrations/0020_leaderboardentry_and_medalexclusionrecord_exclusion_status.py
[warning] 7-7: Unused function argument: schema_editor
(ARG001)
[warning] 27-27: Unused function argument: schema_editor
(ARG001)
[warning] 56-58: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
[warning] 60-104: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build Docker Image / Build Docker Image
🔇 Additional comments (11)
scoring/models.py (3)
17-22: `ExclusionStatuses` enum looks good. Clear mapping and explicit values make the new status system easy to reason about.
193-196: No review notes for this TODO block.
450-460: LGTM: `exclusion_status` added to MedalExclusionRecord. This aligns with the shared enum and keeps defaults explicit.
scoring/admin.py (1)
156-156: LGTM: admin list_display now surfaces `exclusion_status`.

scoring/serializers.py (1)
15-40: LGTM: serializer now exposes `exclusion_status` alongside legacy fields.

front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx (1)
107-123: LGTM: status-based filtering matches the new enum semantics.

front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/index.tsx (1)
92-103: LGTM: exclusion-status gating is clear and consistent.

scoring/views.py (1)
84-87: Advanced-only entries may be exposed to standard users.

`EXCLUDE_AND_SHOW_IN_ADVANCED` is included for all non-staff. If this endpoint backs the standard leaderboard view, advanced-only entries will leak. Please confirm the intended audience and gate this filter on an explicit "advanced" flag if needed.

scoring/migrations/0020_leaderboardentry_and_medalexclusionrecord_exclusion_status.py (1)
7-24: Migration + backfill mapping looks consistent. Forward and reverse mappings align with the deprecated boolean semantics and provide a clear downgrade path.
Also applies to: 27-51, 60-103
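The forward/reverse mapping between the deprecated booleans and the enum reduces to a pair of pure functions. The sketch below is inferred from the review's migration prompts (excluded-but-shown maps to 2, fully excluded to 4, and the rollback is lossy for status 1); the function names and the handling of status 3 on rollback are assumptions, not the migration source.

```python
# Values inferred from the review: 0=INCLUDE, 1=EXCLUDE_PRIZE_AND_SHOW,
# 2=EXCLUDE_AND_SHOW, 3=EXCLUDE_AND_SHOW_IN_ADVANCED, 4=EXCLUDE.
INCLUDE, EXCLUDE_AND_SHOW, EXCLUDE = 0, 2, 4


def forward(excluded: bool, show_when_excluded: bool) -> int:
    """Backfill: map the deprecated booleans onto an exclusion_status."""
    if not excluded:
        return INCLUDE
    return EXCLUDE_AND_SHOW if show_when_excluded else EXCLUDE


def reverse(exclusion_status: int) -> tuple[bool, bool]:
    """Rollback: lossy for statuses the booleans cannot express.

    Status 1 (ranked but prize-excluded) collapses to (False, False),
    matching the lossiness noted in the migration review; mapping
    status 3 to (True, True) is an assumption here.
    """
    if exclusion_status in (EXCLUDE_AND_SHOW, 3):
        return True, True
    if exclusion_status == EXCLUDE:
        return True, False
    return False, False
```

Round-tripping through `forward` then `reverse` is stable for the three boolean-expressible states; statuses 1 and 3 are exactly where information is lost on downgrade.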
scoring/utils.py (2)
43-44: Exclusion status propagation through ranks/prizes looks consistent. Rank gating, prize pool participation, and prize assignment align with the new enum semantics.
Also applies to: 468-528, 741-742
545-553: Confirm medal eligibility for `EXCLUDE_PRIZE_AND_SHOW`.

Medals are currently limited to `INCLUDE`. If `EXCLUDE_PRIZE_AND_SHOW` entries should still be medal-eligible (since they take rank), this will skip them.

🔧 Possible adjustment (if medal-eligible)
```diff
-    entry_count = len(
-        [e for e in entries if e.exclusion_status == ExclusionStatuses.INCLUDE]
-    )
+    entry_count = len(
+        [
+            e
+            for e in entries
+            if e.exclusion_status <= ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW
+        ]
+    )
 ...
-        if entry.exclusion_status != ExclusionStatuses.INCLUDE:
+        if entry.exclusion_status > ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW:
             continue
```
```python
rank__lte=max(
    3,
    np.ceil(
        entries.filter(
            exclusion_status=ExclusionStatuses.INCLUDE
        ).count()
        * 0.05
    ),
),
```
Rank threshold should include rank‑eligible exclusions.
EXCLUDE_PRIZE_AND_SHOW entries still take rank, but the count only includes INCLUDE, which can undercount and shrink the top‑% window. Consider counting all rank‑eligible statuses.
🔧 Suggested fix
```diff
-                    entries.filter(
-                        exclusion_status=ExclusionStatuses.INCLUDE
-                    ).count()
+                    entries.filter(
+                        exclusion_status__lte=ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW
+                    ).count()
```

🤖 Prompt for AI Agents
In `@scoring/views.py` around lines 71 - 78, The rank threshold calculation uses
entries.filter(exclusion_status=ExclusionStatuses.INCLUDE).count(), which omits
entries like ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW that still consume ranks;
update the count to include all rank-eligible statuses (e.g., replace the
.filter(...) call with a filter that includes both ExclusionStatuses.INCLUDE and
ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW or the inverse by excluding only
non-rank-eligible statuses), so the max(... * 0.05) uses the correct total;
adjust the expression around rank__lte=max(...) accordingly.
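Numerically, the threshold under discussion is a top-5% window with a floor of 3. A minimal sketch of the same arithmetic, with `np.ceil` replaced by `math.ceil` and the queryset count by a plain integer (the function name is mine, not from the source):

```python
import math


def rank_threshold(eligible_count: int) -> int:
    """Top-5% window with a floor of 3, mirroring max(3, np.ceil(count * 0.05))."""
    return max(3, math.ceil(eligible_count * 0.05))


# The undercounting matters once the eligible count passes 60: at 100
# eligible entries the window is 5, but counting only 80 of them gives 4.
```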
front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/index.tsx (outdated, resolved)
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In
`@front_end/src/app/`(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx:
- Around line 152-171: The map callback in leaderboardEntries.map currently
re-checks exclusion_status and returns nothing when the condition fails,
producing undefined slots; remove the inner conditional entirely and always
return the TableRow for each mapped entry (keep the same props: key using
entry.user?.id ?? entry.aggregation_method, rowEntry={entry}, userId,
maxCoverage, withPrizePool, isAdvanced). Rely on the earlier filteredEntries
logic (the filtering by ExclusionStatuses done where filteredEntries is
computed) so you only map over already-filtered entries and avoid implicit
undefineds from the map.
In `@scoring/management/commands/update_global_bot_leaderboard.py`:
- Around line 936-940: The code in update_global_bot_leaderboard.py currently
does direct dict access user.metadata["bot_details"] which can raise KeyError or
TypeError if metadata is None or missing; change the check in the block that
handles uid ints to safely fetch bot_details via
user.metadata.get("bot_details") (or default to {}), then check
bot_details.get("display_in_leaderboard") and set excluded accordingly; ensure
you also guard for user.metadata being None before calling .get and prefer a
safe local variable (e.g., bot_details = (user.metadata or
{}).get("bot_details", {})) so the logic around excluded remains correct.
🧹 Nitpick comments (2)
front_end/src/types/scoring.ts (1)
69-69: Use the `ExclusionStatuses` enum type instead of `number`.

The `ExclusionStatuses` enum is defined in this file but `exclusion_status` is typed as `number`, losing the type safety the enum provides. Comparisons in downstream components (e.g., `entry.exclusion_status <= ExclusionStatuses.EXCLUDE_AND_SHOW`) would benefit from the stricter typing.

Proposed fix

```diff
-  exclusion_status: number;
+  exclusion_status: ExclusionStatuses;
```

front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx (1)
59-63: Use strict equality (`===`) instead of `==`.

Lines 60 and 62: the comparison with `ExclusionStatuses` values should use `===` for idiomatic TypeScript.

Proposed fix

```diff
   (entry) =>
-    entry.exclusion_status <= ExclusionStatuses.EXCLUDE_AND_SHOW ||
+    entry.exclusion_status <= ExclusionStatuses.EXCLUDE_AND_SHOW ||
     (isAdvanced &&
-      entry.exclusion_status ==
+      entry.exclusion_status ===
         ExclusionStatuses.EXCLUDE_AND_SHOW_IN_ADVANCED)
```
...end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx (outdated, resolved)
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
scoring/utils.py (1)
674-682: ⚠️ Potential issue | 🟠 Major

`calculate_medals_points_at_time` still filters on the deprecated `excluded` field.

Line 676 uses `.filter(excluded=False)`, but the new pipeline (`process_entries_for_leaderboard_`) no longer populates `excluded`: newly created entries will always have `excluded=False` (the model default). This means entries that should be excluded (via `exclusion_status >= EXCLUDE_AND_SHOW`) will incorrectly pass through this filter and inflate `total_participants`, affecting medal point calculations.

This should be updated to use `exclusion_status`:

📝 Proposed fix
```diff
     totals = (
-        relevant_entries_qs.filter(excluded=False)
+        relevant_entries_qs.filter(exclusion_status=ExclusionStatuses.INCLUDE)
         .annotate(
             points_type=points_type_expr,
         )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scoring/utils.py` around lines 674 - 682, The medal calculation in calculate_medals_points_at_time is still filtering on the deprecated excluded field (relevant_entries_qs.filter(excluded=False)), which lets entries with exclusion_status >= EXCLUDE_AND_SHOW leak through; update the filter to use exclusion_status (e.g. relevant_entries_qs.filter(exclusion_status__lt=EXCLUDE_AND_SHOW)) so totals is computed only from non-excluded entries, referencing the relevant_entries_qs, totals variable, and the EXCLUDE_AND_SHOW constant.
🧹 Nitpick comments (7)
front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx (1)
57-65: Use strict equality (`===`) instead of loose equality (`==`).

Line 62 uses `==` for the comparison. TypeScript best practice favors `===` to avoid implicit type coercion, even though both operands are numbers here.

Proposed fix

```diff
   return leaderboardDetails.entries.filter(
     (entry) =>
       entry.exclusion_status <= ExclusionStatuses.EXCLUDE_AND_SHOW ||
       (isAdvanced &&
-        entry.exclusion_status ==
+        entry.exclusion_status ===
           ExclusionStatuses.EXCLUDE_AND_SHOW_IN_ADVANCED)
   );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@front_end/src/app/`(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx around lines 57 - 65, In filteredEntries (the useMemo selector) replace the loose equality check on entry.exclusion_status with a strict equality: change the comparison inside the isAdvanced branch that currently uses entry.exclusion_status == ExclusionStatuses.EXCLUDE_AND_SHOW_IN_ADVANCED to use === instead; ensure the other comparisons (entry.exclusion_status <= ExclusionStatuses.EXCLUDE_AND_SHOW) remain unchanged and keep dependencies [leaderboardDetails.entries, isAdvanced].front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/table_row.tsx (1)
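The visibility gating these frontend comments keep revisiting can be viewed as a single pure predicate. A sketch, under the assumption (taken from the filters quoted in this review) that standard viewers see statuses up to `EXCLUDE_AND_SHOW` and the advanced view additionally shows `EXCLUDE_AND_SHOW_IN_ADVANCED`; the function name is mine:

```python
# Status values inferred from the review comments.
EXCLUDE_AND_SHOW = 2
EXCLUDE_AND_SHOW_IN_ADVANCED = 3


def is_visible(exclusion_status: int, is_advanced: bool) -> bool:
    """Row visibility per the thresholds quoted in the review (sketch)."""
    if exclusion_status <= EXCLUDE_AND_SHOW:
        return True
    # Advanced-only rows appear only when the advanced view is active.
    return is_advanced and exclusion_status == EXCLUDE_AND_SHOW_IN_ADVANCED
```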
42-44: The `highlight` flag conflates two different signals: "this is your row" and "this entry is excluded".

Both the current user's row and excluded-but-visible entries get the same orange background. This could confuse users into thinking their entry is excluded. Consider distinguishing these visually, or at least note this as intentional.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@front_end/src/app/`(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/table_row.tsx around lines 42 - 44, The current boolean highlight conflates two different signals (current user and excluded entries) because highlight is set from user?.id === userId || exclusion_status > ExclusionStatuses.EXCLUDE_PRIZE_ONLY; split it into two explicit flags (e.g., isCurrentUser = user?.id === userId and isExcludedVisible = exclusion_status > ExclusionStatuses.EXCLUDE_PRIZE_ONLY) in the table_row component and update the render to apply distinct classes/styles or visual cues for each state (different background colors or badges) instead of using the single highlight flag so users won't mistake their row for an excluded entry.front_end/src/types/scoring.ts (1)
69-69: Type `exclusion_status` as `ExclusionStatuses` instead of `number` for stronger type safety.

The `ExclusionStatuses` enum is defined just above and used in comparisons throughout the frontend. Typing this field as `number` loses the semantic guarantee that the value is a valid exclusion status.

Proposed fix

```diff
-  exclusion_status: number;
+  exclusion_status: ExclusionStatuses;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@front_end/src/types/scoring.ts` at line 69, The field declaration for exclusion_status in the scoring types should use the ExclusionStatuses enum instead of number to enforce valid status values; update the type of exclusion_status to ExclusionStatuses in the scoring type/interface (referencing ExclusionStatuses and exclusion_status) and adjust any callers or deserializers that assume a raw number (e.g., remove/replace numeric casts or add explicit mapping/validation when converting from external data) so the code compiles and consumers use the enum type safely.tests/unit/test_scoring/test_utils.py (1)
26-30: Consider adding `EXCLUDE_PRIZE_ONLY` test coverage.

The current parametrized tests don't exercise `EXCLUDE_PRIZE_ONLY` (status 1), which has unique semantics: it takes rank but not prize. This is the only status where rank increments but prize is excluded. A test case verifying that a user with `EXCLUDE_PRIZE_ONLY` advances the rank counter but receives `percent_prize == 0` would strengthen confidence in the boundary between statuses 0 and 1.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/test_scoring/test_utils.py` around lines 26 - 30, Add a parametrized test case to exercise the EXCLUDE_PRIZE_ONLY status (value 1) by creating a LeaderboardEntry with status=EXCLUDE_PRIZE_ONLY and a nonzero take, call assign_prize_percentages_(entries, minimum_prize_percent), and assert that the entry increments rank position but has entry.percent_prize == 0; reference LeaderboardEntry, EXCLUDE_PRIZE_ONLY, assign_prize_percentages_, and percent_prize to locate where to add the new case alongside the existing test_prize_percentages parametrizations.scoring/utils.py (3)
469-495: Three separate queries on `candidates` could be consolidated.

`assign_exclusions_` issues three DB queries against the same `candidates` queryset (lines 483, 488, 494). For large leaderboards, fetching once and partitioning in Python would be more efficient:

📝 Proposed refactor

```diff
+    all_candidates = list(candidates)
+    humans = [u for u in all_candidates if not u.is_bot]
+    bots = [u for u in all_candidates if u.is_bot]
+    non_primary_bots = [u for u in bots if not u.is_primary_bot]
+
-    for human in candidates.filter(is_bot=False):
+    for human in humans:
         exclusion_statuses[human.id] = max(
             exclusion_statuses[human.id],
             leaderboard.human_exclusion_status,
         )
-    for bot in candidates.filter(is_bot=True):
+    for bot in bots:
         exclusion_statuses[bot.id] = max(
             exclusion_statuses[bot.id],
             leaderboard.bot_exclusion_status,
         )
     # all non-primary bots are unconditionally excluded
-    for user in candidates.filter(is_bot=True, is_primary_bot=False):
+    for user in non_primary_bots:
         exclusion_statuses[user.id] = ExclusionStatuses.EXCLUDE
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scoring/utils.py` around lines 469 - 495, assign_exclusions_ issues three DB hits by calling candidates.filter(...) repeatedly; instead evaluate candidates once (e.g., list(candidates) or candidates.select_related(...) as needed) and partition in Python, then update exclusion_statuses from those in-memory groups; locate the queryset named candidates and the three loops that call candidates.filter(is_bot=False), candidates.filter(is_bot=True), and candidates.filter(is_bot=True, is_primary_bot=False) and replace them with a single fetch (e.g., all_users = list(candidates)) and iterate that list updating exclusion_statuses using user.is_bot and user.is_primary_bot to apply leaderboard.human_exclusion_status, leaderboard.bot_exclusion_status, and the unconditional ExclusionStatuses.EXCLUDE for non-primary bots.
43-43: `ExclusionStatuses` imported from `scoring.models` instead of its canonical source `scoring.constants`.

The enum is defined in `scoring.constants` and only re-exported by `scoring.models`. The test file (`tests/unit/test_scoring/test_utils.py` line 4) correctly imports from `scoring.constants`. Importing from the canonical module is clearer and avoids potential circular-import issues as the codebase evolves.

📝 Proposed fix

```diff
-from scoring.constants import ScoreTypes, LeaderboardScoreTypes
+from scoring.constants import ScoreTypes, LeaderboardScoreTypes, ExclusionStatuses
 from scoring.models import (
     ArchivedScore,
-    ExclusionStatuses,
     Score,
     LeaderboardEntry,
     Leaderboard,
     MedalExclusionRecord,
     LeaderboardsRanksEntry,
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scoring/utils.py` at line 43, Change the import in scoring.utils to pull ExclusionStatuses from its canonical module: replace the re-export import of ExclusionStatuses from scoring.models with a direct import from scoring.constants (i.e., import ExclusionStatuses from scoring.constants); update any import lines referencing ExclusionStatuses in scoring.utils to use the new source to avoid potential circular imports and make the dependency explicit.
862-876: CSV-imported `exclusion_status` is a raw string: no validation against `ExclusionStatuses`.

`row.get("exclusion_status")` yields a string (e.g. `"0"`, `"4"`, or potentially `"invalid"`). Django's `IntegerField` will coerce valid numeric strings during `bulk_create`, but there's no validation that the value is within the `ExclusionStatuses` range. An out-of-range or non-numeric value would either silently store garbage or raise an opaque DB error.

Consider adding a coercion/validation step:

📝 Proposed fix

```diff
-            exclusion_status = row.get("exclusion_status")
+            exclusion_status = int(row.get("exclusion_status", ExclusionStatuses.INCLUDE))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scoring/utils.py` around lines 862 - 876, The CSV-imported exclusion_status is currently taken directly from row.get("exclusion_status") and passed into the LeaderboardEntry constructor; coerce it to an int and validate it against the ExclusionStatuses set (or enum) before constructing LeaderboardEntry to avoid invalid or out-of-range values. In the block that builds new_entry (where LeaderboardEntry is instantiated), parse exclusion_status with a safe conversion (try/except), check membership like `value in {s.value for s in ExclusionStatuses}` (or use ExclusionStatuses(value) in a try), and if invalid either set a safe default (e.g., ExclusionStatuses.NONE) or skip/log the row and continue; ensure the validated int (or default) is what's passed to LeaderboardEntry.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scoring/management/commands/update_coherence_tournament_leaderboard.py`:
- Around line 238-239: The rank increment logic is inverted: change the
condition around exclusion_status so rank is incremented for rank-eligible
entries (INCLUDE and EXCLUDE_PRIZE_ONLY) instead of fully-excluded ones;
specifically replace the current negated check on exclusion_status (the line
referencing exclusion_status and ExclusionStatuses.EXCLUDE_PRIZE_ONLY) with a
positive check that evaluates true for eligible statuses and then do rank += 1.
- Around line 229-232: The code currently sets entry.rank = rank
unconditionally; change it so that when exclusion_status is EXCLUDE_AND_SHOW or
any status that should not be ranked (per ExclusionStatuses enum) the rank is
set to None instead of a numeric value. Update the block around entry.rank,
entry.excluded, entry.show_when_excluded, and entry.exclusion_status to compute
a boolean like is_excluded = exclusion_status != ExclusionStatuses.INCLUDE (or
check the specific enum members that denote exclusion) and assign entry.rank =
None if is_excluded else rank, while still setting entry.excluded,
entry.show_when_excluded, and entry.exclusion_status as before.
- Around line 221-222: The loop currently queries
MedalExclusionRecord.objects.filter(user_id=uid or 0) which pulls exclusions
across all projects/leaderboards; change the query to scope by the current
leaderboard/project and universal exclusions like the pattern in
scoring/utils.py lines 440-444: import Q from django.db.models and filter using
((Q(project__isnull=True) & Q(leaderboard__isnull=True)) |
Q(leaderboard=leaderboard) | Q(project=leaderboard.project)) combined with the
user_id clause so that the loop over exclusion records (used to compute
exclusion_status) only considers relevant or global exclusions for the current
leaderboard.
In `@scoring/migrations/0021_remove_leaderboard_bot_status_and_more.py`:
- Around line 40-52: The reverse_migrate block that updates LeaderboardEntry
(function reverse_migrate) is lossy for rows with exclusion_status == 1
(EXCLUDE_PRIZE_ONLY) because the boolean fields excluded and show_when_excluded
cannot represent that state; add a concise comment above this update explaining
that exclusion_status==1 will be mapped to excluded=False and
show_when_excluded=False on rollback, that this is an inherent data-loss
limitation of the old boolean schema, and mention the relevant fields and enum
(exclusion_status, excluded, show_when_excluded, EXCLUDE_PRIZE_ONLY) so future
maintainers understand why the mapping is lossy and not reversible.
- Around line 14-37: In the migrate function add a data migration for
MedalExclusionRecord so its new exclusion_status reflects the old show_anyway
flag: fetch MedalExclusionRecord via apps.get_model("scoring",
"MedalExclusionRecord") and run an update that sets exclusion_status to the
EXCLUDE_AND_SHOW value (2) when show_anyway is True and to EXCLUDE (4) otherwise
(use a Case/When with Value(2) and default Value(4)); place this after the
Leaderboard/LeaderboardEntry updates and reference the MedalExclusionRecord
model name in your change.
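As a standalone illustration of the mapping this data migration performs (the real migration would wrap this in a `Case`/`When` bulk update inside `migrate`; the values 2 and 4 are taken from the prompt above):

```python
EXCLUDE_AND_SHOW = 2  # value per the prompt above
EXCLUDE = 4


def map_show_anyway(show_anyway: bool) -> int:
    """Old MedalExclusionRecord.show_anyway -> new exclusion_status."""
    return EXCLUDE_AND_SHOW if show_anyway else EXCLUDE
```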
In `@scoring/models.py`:
- Around line 378-387: The help_text for the exclusion_status IntegerField uses
a misleading label "(1) Exclude Prize and Show" that does not match the enum
member ExclusionStatuses.EXCLUDE_PRIZE_ONLY; update the label in the help_text
to "(1) Exclude Prize Only" (or otherwise match the exact enum name/intent) so
it correctly reflects that entries are shown and ranked but excluded from
prizes; ensure the rest of the description remains unchanged and reference the
exclusion_status field and ExclusionStatuses enum when making the change.
- Around line 482-491: The help_text for the IntegerField exclusion_status (in
the MedalExclusionRecord/related model) has a mismatched label for status "(1)";
update the string in the exclusion_status help_text to change "(1) Exclude Prize
and Show" to "(1) Exclude Prize Only" so it matches the ExclusionStatuses enum
value EXCLUDE_PRIZE_ONLY and keep the rest of the help_text unchanged.
In `@scoring/utils.py`:
- Around line 435-438: The annotated return types for in-place mutating
functions are incorrect: change the return annotation from ->
list[LeaderboardEntry] to -> None for assign_exclusions_, assign_ranks_,
assign_prize_percentages_, assign_medals_, and assign_prizes_ (they mutate the
leaderboard/entries in-place and do not return a value), and ensure these
functions do not include any implicit/explicit return of a list so type checkers
reflect the real behavior.
In `@scoring/views.py`:
- Around line 157-160: The code sets user = None for anonymous requests but
still uses user.id in the Q on entries, causing an AttributeError, and the
filter uses exclusion_status__lte=ExclusionStatuses.EXCLUDE which (given EXCLUDE
value) matches everything; fix by: compute the exclusion Q using a strict
comparison (e.g., exclusion_status__lt=ExclusionStatuses.EXCLUDE or equality
depending on intended semantics) instead of __lte, and only include the
Q(user_id=user.id) branch when request.user.is_authenticated — for example build
the query conditionally: if request.user.is_authenticated add
Q(user_id=request.user.id) to the OR, otherwise filter only by the exclusion Q;
update the code referencing request.user, entries, and ExclusionStatuses
accordingly.
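The intended visibility semantics, sketched over plain dicts rather than a Django queryset (the hard-exclude value of 4 is an assumption; the real fix builds an equivalent `Q` expression conditionally):

```python
EXCLUDE = 4  # assumed value of ExclusionStatuses.EXCLUDE


def visible_entries(entries, requester_id=None):
    """Rows strictly below the hard-EXCLUDE threshold are public; a
    user's own rows are visible to them regardless of status. Anonymous
    requests (requester_id=None) never match on user_id."""
    def visible(entry):
        if entry["exclusion_status"] < EXCLUDE:
            return True
        return requester_id is not None and entry["user_id"] == requester_id

    return [entry for entry in entries if visible(entry)]
```

Using a strict `<` comparison (rather than `__lte`) is what keeps fully excluded rows hidden from everyone except their owner.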
- Around line 88-89: global_leaderboard_view currently builds a
Q(user_id=user.id) clause while decorated with `@permission_classes([AllowAny])`,
which causes AnonymousUser.id (None) to be used and may match rows with user_id
which causes AnonymousUser.id (None) to be used and may match rows with user_id
IS NULL; update the view to only add the Q(user_id=...) filter when
request.user.is_authenticated (or when getattr(request.user, "id", None) is not
None) so anonymous requests do not inject user_id=None into the query, and keep
behavior unchanged for authenticated users.
---
Outside diff comments:
In `@scoring/utils.py`:
- Around line 674-682: The medal calculation in calculate_medals_points_at_time
is still filtering on the deprecated excluded field
(relevant_entries_qs.filter(excluded=False)), which lets entries with
exclusion_status >= EXCLUDE_AND_SHOW leak through; update the filter to use
exclusion_status (e.g.
relevant_entries_qs.filter(exclusion_status__lt=EXCLUDE_AND_SHOW)) so totals is
computed only from non-excluded entries, referencing the relevant_entries_qs,
totals variable, and the EXCLUDE_AND_SHOW constant.
---
Duplicate comments:
In `@scoring/models.py`:
- Around line 207-226: Fix the typo in the class-level TODO comment that
references the legacy field: change "depricate" to "deprecate" (this comment is
paired with the bot_status field) and keep the rest of the note referencing
bot_exclusion_status and human_exclusion_status intact; update the text around
the bot_status declaration so it reads "TO BE DEPRECATED IN FAVOR OF
bot_exclusion_status".
In `@scoring/views.py`:
- Around line 78-86: The rank threshold calculation wrongly counts only
entries.filter(exclusion_status=ExclusionStatuses.INCLUDE) which omits
EXCLUDE_PRIZE_ONLY entries that still occupy rank; change the filter used to
count the ranked population to include both ExclusionStatuses.INCLUDE and
ExclusionStatuses.EXCLUDE_PRIZE_ONLY (e.g.
entries.filter(exclusion_status__in=[ExclusionStatuses.INCLUDE,
ExclusionStatuses.EXCLUDE_PRIZE_ONLY]) or the equivalent logic) so the
np.ceil(... * 0.05) uses the correct total of entries that take rank for
computing rank__lte.
---
Nitpick comments:
In
`@front_end/src/app/`(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx:
- Around line 57-65: In filteredEntries (the useMemo selector) replace the loose
equality check on entry.exclusion_status with a strict equality: change the
comparison inside the isAdvanced branch that currently uses
entry.exclusion_status == ExclusionStatuses.EXCLUDE_AND_SHOW_IN_ADVANCED to use
=== instead; ensure the other comparisons (entry.exclusion_status <=
ExclusionStatuses.EXCLUDE_AND_SHOW) remain unchanged and keep dependencies
[leaderboardDetails.entries, isAdvanced].
In
`@front_end/src/app/`(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/table_row.tsx:
- Around line 42-44: The current boolean highlight conflates two different
signals (current user and excluded entries) because highlight is set from
user?.id === userId || exclusion_status > ExclusionStatuses.EXCLUDE_PRIZE_ONLY;
split it into two explicit flags (e.g., isCurrentUser = user?.id === userId and
isExcludedVisible = exclusion_status > ExclusionStatuses.EXCLUDE_PRIZE_ONLY) in
the table_row component and update the render to apply distinct classes/styles
or visual cues for each state (different background colors or badges) instead of
using the single highlight flag so users won't mistake their row for an excluded
entry.
In `@front_end/src/types/scoring.ts`:
- Line 69: The field declaration for exclusion_status in the scoring types
should use the ExclusionStatuses enum instead of number to enforce valid status
values; update the type of exclusion_status to ExclusionStatuses in the scoring
type/interface (referencing ExclusionStatuses and exclusion_status) and adjust
any callers or deserializers that assume a raw number (e.g., remove/replace
numeric casts or add explicit mapping/validation when converting from external
data) so the code compiles and consumers use the enum type safely.
In `@scoring/utils.py`:
- Around line 469-495: assign_exclusions_ issues three DB hits by calling
candidates.filter(...) repeatedly; instead evaluate candidates once (e.g.,
list(candidates) or candidates.select_related(...) as needed) and partition in
Python, then update exclusion_statuses from those in-memory groups; locate the
queryset named candidates and the three loops that call
candidates.filter(is_bot=False), candidates.filter(is_bot=True), and
candidates.filter(is_bot=True, is_primary_bot=False) and replace them with a
single fetch (e.g., all_users = list(candidates)) and iterate that list updating
exclusion_statuses using user.is_bot and user.is_primary_bot to apply
leaderboard.human_exclusion_status, leaderboard.bot_exclusion_status, and the
unconditional ExclusionStatuses.EXCLUDE for non-primary bots.
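A sketch of the single-fetch approach using plain dicts in place of user model instances (field names mirror those in the prompt; the status values passed in would come from the leaderboard's settings):

```python
EXCLUDE = 4  # assumed hard-exclude value


def assign_exclusion_statuses(users, human_status, bot_status):
    """One in-memory pass replaces three queryset round-trips: humans
    get the leaderboard's human status, primary bots its bot status,
    and non-primary bots are unconditionally excluded."""
    statuses = {}
    for user in users:
        if not user["is_bot"]:
            statuses[user["id"]] = human_status
        elif user["is_primary_bot"]:
            statuses[user["id"]] = bot_status
        else:
            statuses[user["id"]] = EXCLUDE
    return statuses
```

In the real code the equivalent would be `all_users = list(candidates)` followed by this loop, cutting the three `candidates.filter(...)` queries down to one.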
- Line 43: Change the import in scoring.utils to pull ExclusionStatuses from its
canonical module: replace the re-export import of ExclusionStatuses from
scoring.models with a direct import from scoring.constants (i.e., import
ExclusionStatuses from scoring.constants); update any import lines referencing
ExclusionStatuses in scoring.utils to use the new source to avoid potential
circular imports and make the dependency explicit.
- Around line 862-876: The CSV-imported exclusion_status is currently taken
directly from row.get("exclusion_status") and passed into the LeaderboardEntry
constructor; coerce it to an int and validate it against the ExclusionStatuses
set (or enum) before constructing LeaderboardEntry to avoid invalid or
out-of-range values. In the block that builds new_entry (where LeaderboardEntry
is instantiated), parse exclusion_status with a safe conversion (try/except),
check membership like `value in {s.value for s in ExclusionStatuses}` (or use
ExclusionStatuses(value) in a try), and if invalid either set a safe default
(e.g., ExclusionStatuses.NONE) or skip/log the row and continue; ensure the
validated int (or default) is what's passed to LeaderboardEntry.
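One way to make the coercion safe, sketched with an assumed enum (the prompt's `ExclusionStatuses.NONE` does not appear elsewhere in this review, so `INCLUDE` is used as the fallback here):

```python
from enum import IntEnum


class ExclusionStatuses(IntEnum):
    # Assumed values based on this PR's help_text.
    INCLUDE = 0
    EXCLUDE_PRIZE_ONLY = 1
    EXCLUDE_AND_SHOW = 2
    EXCLUDE_AND_SHOW_IN_ADVANCED = 3
    EXCLUDE = 4


def parse_exclusion_status(raw, default=ExclusionStatuses.INCLUDE):
    """Coerce a CSV cell to a valid ExclusionStatuses member, falling
    back to `default` for missing, non-numeric, or out-of-range values."""
    try:
        return ExclusionStatuses(int(raw))
    except (TypeError, ValueError):
        return default
```

The validated member (or the default) is then what gets passed to the `LeaderboardEntry` constructor instead of the raw cell.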
In `@tests/unit/test_scoring/test_utils.py`:
- Around line 26-30: Add a parametrized test case to exercise the
EXCLUDE_PRIZE_ONLY status (value 1) by creating a LeaderboardEntry with
status=EXCLUDE_PRIZE_ONLY and a nonzero take, call
assign_prize_percentages_(entries, minimum_prize_percent), and assert that the
entry increments rank position but has entry.percent_prize == 0; reference
LeaderboardEntry, EXCLUDE_PRIZE_ONLY, assign_prize_percentages_, and
percent_prize to locate where to add the new case alongside the existing
test_prize_percentages parametrizations.
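A simplified, self-contained version of the behavior the new test case should assert (the real test would exercise `assign_prize_percentages_` on `LeaderboardEntry` objects; dicts and the status values are stand-ins):

```python
INCLUDE = 0
EXCLUDE_PRIZE_ONLY = 1


def assign_prize_percentages(entries, minimum_prize_percent=0.0):
    """EXCLUDE_PRIZE_ONLY rows keep their rank position but always get
    a 0% prize share; only INCLUDE rows split the pool by take."""
    eligible = [e for e in entries if e["exclusion_status"] == INCLUDE]
    total_take = sum(e["take"] for e in eligible)
    for e in entries:
        if e["exclusion_status"] == INCLUDE and total_take > 0:
            share = e["take"] / total_take
            e["percent_prize"] = share if share >= minimum_prize_percent else 0.0
        else:
            e["percent_prize"] = 0.0
```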
scoring/management/commands/update_coherence_tournament_leaderboard.py
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
scoring/utils.py (2)
860-875: ⚠️ Potential issue | 🟡 Minor

`exclusion_status` from CSV is not type-converted and `None` breaks on a non-nullable field.

`row.get("exclusion_status")` returns `None` for any CSV that predates this field (backward-compat breakage). Passing `None` directly to a `NOT NULL` `IntegerField` in `bulk_create` will raise a DB-level error. Additionally, the raw string value (e.g., `"2"`) is never coerced to `int`, which may also fail depending on the DB adapter path taken by `bulk_create`.

🐛 Proposed fix

```diff
- exclusion_status = row.get("exclusion_status")
+ _es_raw = row.get("exclusion_status")
+ exclusion_status = int(_es_raw) if _es_raw is not None else ExclusionStatuses.INCLUDE
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scoring/utils.py` around lines 860 - 875, row.get("exclusion_status") can be None or a string and must be normalized before constructing LeaderboardEntry to avoid inserting NULL into the NOT NULL IntegerField; convert the value to an int (e.g., int(str_val)) after handling empty/None and non-numeric cases and provide a safe default (such as 0 or another domain-appropriate sentinel) and use that normalized integer when building new_entry (refer to the exclusion_status variable and the LeaderboardEntry constructor in this diff).
668-676: ⚠️ Potential issue | 🟠 Major

`calculate_medals_points_at_time` at line 676 must be updated to use the `exclusion_status` field.

The `totals` queryset filters with `filter(excluded=False)`, which relies on the deprecated boolean field. Since the PR migrates to `exclusion_status` (confirmed active in migrations, views, and utils throughout the codebase), this must change to `filter(exclusion_status=ExclusionStatuses.INCLUDE)` to avoid silently incorrect participant counts in medal points calculations and rank percentile updates.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scoring/utils.py` around lines 668 - 676, In calculate_medals_points_at_time replace the deprecated filter(excluded=False) on relevant_entries_qs used to build totals with filter(exclusion_status=ExclusionStatuses.INCLUDE) so totals only counts included participants; ensure ExclusionStatuses is imported/available in the module and update the totals queryset definition (the variable totals derived from relevant_entries_qs) accordingly to use exclusion_status=ExclusionStatuses.INCLUDE.
🧹 Nitpick comments (1)
scoring/admin.py (1)
203-214: Suppress Ruff RUF012 by annotating mutable class attributes with `ClassVar`.

Ruff flags `inlines`, `readonly_fields`, and `actions` (and `list_display` at line 450) as mutable class-attribute defaults. Annotating with `ClassVar` is the idiomatic fix.

♻️ Proposed fix

```diff
+from typing import ClassVar
 ...
 class LeaderboardAdmin(admin.ModelAdmin):
     ...
-    inlines = [MedalExclusionRecordInline]
-    readonly_fields = [
+    inlines: ClassVar[list] = [MedalExclusionRecordInline]
+    readonly_fields: ClassVar[list] = [
         "entries_list_link",
         "leaderboard_action_buttons",
         "entries_preview",
     ]
     ...
-    actions = [
+    actions: ClassVar[list] = [
         "make_primary_leaderboard",
         "update_leaderboards",
         "force_update_leaderboards",
         "force_finalize_and_assign_medals_leaderboards",
     ]
```

Apply the same annotation to `list_display` at line 450.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scoring/admin.py` around lines 203 - 214, Annotate the mutable admin class attributes with typing.ClassVar to suppress Ruff RUF012: import ClassVar from typing, then change the declarations for inlines, readonly_fields, actions (and the list_display declaration noted later) to be typed as ClassVar[list[...]] (or ClassVar[tuple[...]]/ClassVar[Sequence[...]] as appropriate) instead of plain assignment so the attributes are explicit class-level constants (e.g., inlines: ClassVar[list] = [...], readonly_fields: ClassVar[list[str]] = [...], actions: ClassVar[list[str]] = [...], and list_display: ClassVar[list[str]] = [...]). Ensure you update the admin class where these symbols (inlines, readonly_fields, actions, list_display) are defined and add the import for ClassVar at the top of the module.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scoring/utils.py`:
- Around line 747-752: assign_prizes_ currently calls float(prize_pool)
unguarded which raises TypeError when prize_pool is None; update assign_prizes_
to treat a missing prize_pool as zero (e.g., coerce prize_pool to 0 or
Decimal(0) before computing) and use that value when calculating entry.prize
(the assignment inside assign_prizes_ that multiplies entry.percent_prize).
Ensure the same falsy-guard behavior used for minimum_prize_percent (if
prize_pool else 0) is applied here so leaderboards without a prize_pool don't
crash.
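A minimal sketch of the falsy guard, with dicts standing in for `LeaderboardEntry` rows:

```python
def assign_prizes(entries, prize_pool):
    """Treat a missing prize_pool as zero so leaderboards without one
    don't raise TypeError on float(None)."""
    pool = float(prize_pool) if prize_pool else 0.0
    for entry in entries:
        entry["prize"] = entry.get("percent_prize", 0.0) * pool
```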
---
Outside diff comments:
In `@scoring/utils.py`:
- Around line 860-875: row.get("exclusion_status") can be None or a string and
must be normalized before constructing LeaderboardEntry to avoid inserting NULL
into the NOT NULL IntegerField; convert the value to an int (e.g., int(str_val))
after handling empty/None and non-numeric cases and provide a safe default (such
as 0 or another domain-appropriate sentinel) and use that normalized integer
when building new_entry (refer to the exclusion_status variable and the
LeaderboardEntry constructor in this diff).
- Around line 668-676: In calculate_medals_points_at_time replace the deprecated
filter(excluded=False) on relevant_entries_qs used to build totals with
filter(exclusion_status=ExclusionStatuses.INCLUDE) so totals only counts
included participants; ensure ExclusionStatuses is imported/available in the
module and update the totals queryset definition (the variable totals derived
from relevant_entries_qs) accordingly to use
exclusion_status=ExclusionStatuses.INCLUDE.
---
Duplicate comments:
In `@scoring/utils.py`:
- Around line 435-438: The listed functions (assign_exclusions_, assign_ranks_,
assign_prize_percentages_, assign_medals_, assign_prizes_) are annotated as
returning list[LeaderboardEntry] but perform in-place mutations and do not
return anything; change their return type annotations to -> None (or remove the
return annotation) so the signature accurately reflects they return nothing, and
update any related docstrings or callers that might expect a returned list to
avoid confusion.
---
Nitpick comments:
In `@scoring/admin.py`:
- Around line 203-214: Annotate the mutable admin class attributes with
typing.ClassVar to suppress Ruff RUF012: import ClassVar from typing, then
change the declarations for inlines, readonly_fields, actions (and the
list_display declaration noted later) to be typed as ClassVar[list[...]] (or
ClassVar[tuple[...]]/ClassVar[Sequence[...]] as appropriate) instead of plain
assignment so the attributes are explicit class-level constants (e.g., inlines:
ClassVar[list] = [...], readonly_fields: ClassVar[list[str]] = [...], actions:
ClassVar[list[str]] = [...], and list_display: ClassVar[list[str]] = [...]).
Ensure you update the admin class where these symbols (inlines, readonly_fields,
actions, list_display) are defined and add the import for ClassVar at the top of
the module.
… by score, misc comment improvements
new settings allow for fine tuned display rules for leaderboard entries
add exclusion_status enum `ExclusionStatuses` to `LeaderboardEntry` and `MedalExclusionRecord`; set up to deprecate excluded and show_when_excluded fields
Pics:
Here are some new buttons at the top of Admin:

This is the new way to view leaderboard entries and medal exclusions on Admin:

(usernames redacted, note that exclusion records are given to the top 5 users in order - redaction color matches)
This is the result on the relevant leaderboard: (note the lack of a prize for the still-visible entry 2, which corresponds to the exclusion record set to "Exclude Prize Only")
