Rating and priority

Three ratings drive task ordering. They describe current-state quality, not work effort. Bad → do first; medium → do next; good → keep.
🌳 evergreen · tended 2026-05-08 · convention · tasks · priority
```mermaid
flowchart LR
  bad[bad: broken / urgent] --> first[do first]
  med[medium: works, could improve] --> next[do next]
  good[good: working] --> keep[keep · revisit later]
```
Read next
  • Schema – the field types behind ratings
  • TODO – the rated, ordered task list
  • Epistemic status – the cousin badge for pages
  • Patterns – where this convention lives

Quality-of-state, not effort. A 'bad' task can be quick; a 'good' task can be expensive.

| Rating | Meaning | Priority effect |
|---|---|---|
| bad | broken, missing, or urgent | do first |
| medium | works, but should be better | do next |
| good | works fine | do later (or never) |
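
As a field type, a rating is a closed three-value set. A minimal sketch of how it might be typed (hypothetical; the Schema note holds the real field definition):

```python
from typing import Literal

# The three ratings in priority order. Hypothetical typing; see the Schema
# note for the actual field definition in tasks/schema.py.
Rating = Literal["bad", "medium", "good"]
```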

Why three, not five

Humans reliably compress to about three categories. Five drifts: one category always collapses into its neighbour within a week. Three is large enough that disagreements signal real differences, and small enough that the boundaries hold.

Ordering rule

```mermaid
flowchart LR
  in[All tasks] --> rate{rating?}
  rate -->|bad| bad[BAD bucket]
  rate -->|medium| med[MEDIUM bucket]
  rate -->|good| good[GOOD bucket]
  bad --> sb[sort by due] --> out
  med --> sm[sort by due] --> out
  good --> sg[sort by due] --> out[Ordered queue]
```

Ties are broken by due date (sooner first), then by created date; implemented in order_tasks() in tasks/schema.py.
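
A minimal sketch of that rule, with field names assumed (the authoritative version is order_tasks() in tasks/schema.py):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Bucket order drives the queue: bad first, then medium, then good.
RATING_ORDER = {"bad": 0, "medium": 1, "good": 2}

@dataclass
class Task:
    title: str
    rating: str            # "bad" | "medium" | "good"
    due: Optional[date]    # None = undated; assumed to sort last in its bucket
    created: date

def order_tasks(tasks: list[Task]) -> list[Task]:
    """Order by rating bucket, then due date (sooner first), then created date."""
    return sorted(
        tasks,
        key=lambda t: (
            RATING_ORDER[t.rating],   # bad < medium < good
            t.due is None,            # dated tasks before undated ones
            t.due or date.max,        # sooner due dates first
            t.created,                # oldest created as the final tiebreak
        ),
    )
```

Sorting undated tasks last within their bucket is an assumption here; the real tiebreak may differ.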

Tasting rule (when in doubt)

  • Would this hurt if we never did it? → bad
  • Would shipping this make me proud? → good
  • Otherwise → medium
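
Encoded as a function, the same gut check might read (a sketch; parameter names are assumed, not from the repo):

```python
def tasting_rule(hurts_if_never_done: bool, proud_to_ship: bool) -> str:
    # Hypothetical encoding of the three questions above, checked in order.
    if hurts_if_never_done:
        return "bad"
    if proud_to_ship:
        return "good"
    return "medium"
```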

Mapping to existing repo taxonomy

The repo already uses tiers and severities. They map cleanly:

| Existing label | Rating |
|---|---|
| Critical / Tier-A frontiers / I9–I13 violation / blocker | bad |
| Tier-B / open frontier without urgency / signal P1–P2 | medium |
| Tier-C / nice-to-have / closed-but-track | good |

A migration pass is itself a task: T-005 in tasks/TODO.md.
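
As a sketch, that pass could be a plain lookup. The label strings below are guessed from the table above, not read from the repo; T-005 tracks the real work:

```python
# Hypothetical label-to-rating map; label strings are guessed from the
# table above. The actual migration pass is tracked as T-005.
LABEL_TO_RATING = {
    "critical": "bad",
    "tier-a": "bad",
    "blocker": "bad",
    "tier-b": "medium",
    "p1": "medium",
    "p2": "medium",
    "tier-c": "good",
    "nice-to-have": "good",
    "closed-but-track": "good",
}

def rating_for(labels: list[str]) -> str:
    """Take the most urgent rating implied by a task's existing labels."""
    order = {"bad": 0, "medium": 1, "good": 2}
    matches = [LABEL_TO_RATING[l.lower()] for l in labels if l.lower() in LABEL_TO_RATING]
    # Unlabelled tasks default to medium (an assumption, not repo policy).
    return min(matches, key=order.__getitem__, default="medium")
```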

Why not numeric scores?

Numeric scores invite false precision. The system can't reliably tell 7.3 from 7.4, so the spread becomes noise. Three buckets force the question "which kind of priority?", not "how many decimal places?".

Anti-patterns

  • Inflation drift: every task tagged "bad" → the bucket loses meaning. If the bad bucket exceeds 1/3 of open tasks, audit it (most are probably medium); a check is sketched after this list.
  • Permanent good: a task tagged good with no due date is a candidate for abandoned status – close it.
  • Rerating without evidence: rating changes need a one-line why in the task notes.
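
A minimal check for inflation drift, using the 1/3 threshold above (a hypothetical helper, not repo code; assumes ratings arrive as plain strings):

```python
from collections import Counter

# Hypothetical drift check: flag the bad bucket when it holds more than
# one third of open tasks, per the rule above.
def bad_bucket_inflated(ratings: list[str]) -> bool:
    counts = Counter(ratings)
    total = sum(counts.values())
    return total > 0 and counts["bad"] / total > 1 / 3

# bad_bucket_inflated(["bad", "bad", "medium"])  -> True (2/3 > 1/3)
```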