Skill Leaderboard
Methodology

How we rank skills.

Every score on this site is computed from public GitHub data using the formula below. No editorial weighting, no vendor install counts, no pay-to-rank. The goal is simple: reward the projects people are actually using, forking, contributing to, and returning to.

Why traction, not stars

GitHub stars used to mean something: a repo with 10k+ stars signaled a mature, widely used project. That era is over. Investigations into GitHub's fake-star economy have flagged millions of artificial stars across the ecosystem, including one VC-darling project that topped the well-known ROSS Index with 74,300 stars: nearly half were flagged as artificial, 52% of its stargazers came from zero-follower accounts, and its fork-to-star ratio was just 0.052.

Goodhart's Law in action: once a measure becomes a target, it ceases to be a good measure. Because VCs, ranking pages, and trending lists still lean on raw star counts, an entire gray-market ecosystem of star-farms, bot accounts, and paid promotion has sprung up to game them.

Current Traction is our answer: a multi-signal 0–100 score using recent attention, sustained growth, repo activity, adoption and forks, and anti-gaming penalties. The leaderboard displays that score as a simple band label instead of showing the raw number in the table.

Formula

Forks and unique contributors get the most weight because they're the hardest to manufacture. Stars carry less weight than most public rankings give them — useful, but discounted. Each signal is measured over a rolling 14-day window, normalized against the repo's own recent baseline, and then combined into the Current Traction score.

Signal                         Weight   Rationale
Δ Forks (14d)                  25%      High-intent, low fake rate
Unique contributors (14d)      20%      Hardest signal to bot
Commit velocity vs. baseline   15%      Real build activity
Δ Stars (14d)                  15%      Useful but gameable
Δ Watchers (14d)               10%      Sustained interest
Issue / PR engagement          10%      Community depth
Dependents growth               5%      Real-world adoption
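
In code form, the combination step is just a weighted sum. A minimal sketch in Python; the signal names are illustrative, and each input is assumed to already be normalized by the pipeline described below.

```python
# Signal weights from the table above (they sum to 1.0).
WEIGHTS = {
    "fork_delta_14d":           0.25,  # high-intent, low fake rate
    "unique_contributors_14d":  0.20,  # hardest signal to bot
    "commit_velocity":          0.15,  # real build activity vs. baseline
    "star_delta_14d":           0.15,  # useful but gameable
    "watcher_delta_14d":        0.10,  # sustained interest
    "issue_pr_engagement":      0.10,  # community depth
    "dependents_growth":        0.05,  # real-world adoption
}

def combine(normalized: dict[str, float]) -> float:
    """Weighted sum of already-normalized signals, one value per key in WEIGHTS."""
    return sum(WEIGHTS[name] * normalized.get(name, 0.0) for name in WEIGHTS)
```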

Anti-gaming penalty

Each trigger applies a graduated penalty (5–25 points) so genuine viral moments aren't wrongly suppressed. We persist the penalty breakdown alongside the score — anomalies are explainable, not just numeric.

  • Star burst anomaly
    >40% of lifetime stars arriving inside a 72-hour window
  • Low star-to-fork ratio
    Stars vastly outpace forks beyond a healthy band (e.g., >500:1)
  • Account-age skew
    Stargazers with accounts <30 days old or zero public activity
  • Timing cluster
    Stars arriving in suspiciously tight, uniform intervals
  • Geographic cluster
    Disproportionate origin from known star-farm regions
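
As a rough sketch of how two of these triggers could map onto the graduated 5–25 point range: the thresholds below are the ones stated above, but the severity curves and field names are illustrative assumptions, not our exact production logic.

```python
def graduated(severity: float) -> float:
    """Map a 0-1 severity onto the graduated 5-25 point penalty range."""
    return 5.0 + 20.0 * max(0.0, min(severity, 1.0))

def star_burst_penalty(peak_72h_stars: int, lifetime_stars: int) -> float:
    """Trigger: >40% of lifetime stars arriving inside a single 72-hour window."""
    share = peak_72h_stars / max(lifetime_stars, 1)
    if share <= 0.40:
        return 0.0
    # Severity rises as the burst share climbs from 40% toward 100% (illustrative curve).
    return graduated((share - 0.40) / 0.60)

def star_fork_ratio_penalty(lifetime_stars: int, lifetime_forks: int) -> float:
    """Trigger: stars vastly outpacing forks, e.g. beyond 500:1."""
    ratio = lifetime_stars / max(lifetime_forks, 1)
    if ratio <= 500:
        return 0.0
    # Severity saturates around 2000:1 (illustrative cap).
    return graduated(min((ratio - 500) / 1500, 1.0))

# The total penalty P is the sum over all triggered checks; the per-trigger
# breakdown is persisted alongside the score so every anomaly stays explainable.
```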

Normalization pipeline

  1. Compute raw 14-day deltas per signal.
  2. Convert each to a z-score against the repo's own 90-day rolling baseline — rewards acceleration, not absolute size.
  3. Winsorize at p95 so one viral tweet doesn't dominate the leaderboard.
  4. Apply a 7-day half-life decay to weight “right now” over “two weeks ago”.
  5. Weight and sum per the table above.
  6. Subtract the anti-gaming penalty P.
  7. Map to a 0–100 scale using min-max against the current cohort.
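
Putting those steps together, here is a compact sketch. It assumes one array of per-day deltas per signal, oldest first, and simplifies the cohort definition and window bookkeeping.

```python
import numpy as np

HALF_LIFE_DAYS = 7.0

def normalize_signal(daily: np.ndarray) -> float:
    """daily: per-day deltas for one signal (e.g. forks gained per day),
    oldest first, covering the 90-day baseline plus the 14-day window."""
    window = daily[-14:]                       # 1. raw 14-day delta window
    baseline = daily[-104:-14]                 # repo's own 90-day rolling baseline
    mu, sigma = baseline.mean(), baseline.std() or 1.0
    z = (window - mu) / sigma                  # 2. z-score vs. own baseline
    z = np.minimum(z, np.percentile(z, 95))    # 3. winsorize at p95
    age = np.arange(len(z))[::-1]              # days ago: 13 .. 0
    decay = 0.5 ** (age / HALF_LIFE_DAYS)      # 4. 7-day half-life decay
    return float((z * decay).sum() / decay.sum())

def current_traction(weighted_sum: float, penalty: float, cohort: np.ndarray) -> float:
    """Steps 5-7: the weighted sum comes from the table above, P is subtracted,
    and the result is min-max mapped to 0-100 against the current cohort."""
    raw = weighted_sum - penalty
    lo, hi = cohort.min(), cohort.max()
    return 100.0 * (raw - lo) / (hi - lo) if hi > lo else 0.0
```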

Current Traction bands

The raw score still powers sorting, ranking, and the bar fill. The visible table label is intentionally simplified to one of five score bands.

Band          Score    Description
Exceptional   90–100   Top-band multi-signal traction
Strong        75–89    Clear traction across recent and sustained signals
Active        60–74    Healthy current movement
Moderate      40–59    Some traction, but below the active band
Slow          0–39     Limited recent traction
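
The label itself is a straight lookup against the thresholds above; a minimal sketch:

```python
# Band floors from the table above, checked highest first.
BANDS = [
    (90, "Exceptional"),
    (75, "Strong"),
    (60, "Active"),
    (40, "Moderate"),
    (0,  "Slow"),
]

def band_label(score: float) -> str:
    """Return the display band for a 0-100 Current Traction score."""
    for floor, label in BANDS:
        if score >= floor:
            return label
    return "Slow"
```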

Refresh cadence

  • GitHub stats pulled every 6 hours via the REST + GraphQL APIs.
  • Current Traction scores recomputed once per hour from cached deltas.
  • Raw daily snapshots retained ≥90 days to support baseline z-scores and audit trails.
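
As a sketch of the pull step, a single GraphQL query can grab several of the headline counts in one round trip. The fields below come from GitHub's public GraphQL schema; deltas are produced by diffing successive snapshots, and signals like dependents growth come from other endpoints, so this is not the whole fetch.

```python
import requests

QUERY = """
query($owner: String!, $name: String!) {
  repository(owner: $owner, name: $name) {
    stargazerCount
    forkCount
    watchers { totalCount }
    issues(states: OPEN) { totalCount }
    pullRequests(states: OPEN) { totalCount }
  }
}
"""

def fetch_counts(owner: str, name: str, token: str) -> dict:
    """One snapshot of headline counts; 14-day deltas come from diffing 6-hourly snapshots."""
    resp = requests.post(
        "https://api.github.com/graphql",
        json={"query": QUERY, "variables": {"owner": owner, "name": name}},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["repository"]
```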

What we don't do

  • No vendor install counts. We rank public GitHub signal, not marketplace telemetry.
  • No paid placement. There is no boost, ad slot, or featured tier.
  • No LLM-based ranking. Categories are LLM-classified once; the score itself is deterministic.
  • No bundles in the main board. Single-skill repos and multi-skill bundles are ranked separately.

Spot a methodology issue or a repo that's being gamed? Check the trending page — anomalies surface there first.