Welcome to Stats Drop, an inundation of rugby league numbers.
If you’re receiving this by email, a reminder that the Datawrapper embeds work best on desktop, next best if you click through on mobile, and worst as presented in the email itself.
Churn
What does the average NRL career look like? We often focus on the stars, who seem to play forever, or the very bad and unqualified, whose time in the NRL is brief and somewhat funny, but there is an entire middle class of players whose careers go unremarked upon. There are as many solid, workmanlike journeymen as there are candles in the wind, brief blips of luminescence before being swallowed by a vast and uncaring universe.
Let’s look at all the players who made at least one regular season appearance (excluding replacements) between 2000 and 2024, excluding players who made appearances in 1999 or 2025. This approach captures players whose entire NRL careers have been played in the first quarter of the 21st century.1
It is not perfect, as a handful of players may have just happened to skip 1999 or 2025, but these criteria exclude players whose careers predate the millennium or are still continuing, either of which would skew the numbers somewhat, and, as noted, the data is a little wobbly as is.
In the 16 team era of the NRL (2007 - 2022), an average of around 470 players were used in a given season. Since 2023, with the addition of the Dolphins and other changes to NRL rostering rules, this has increased to between 500 and 510.
At no point in the period from 2000 to 2024 do the 1450 playing careers of interest make up the entirety of the NRL. They represent 90% or more of the players between 2008 and 2016, with the balance represented by older players whose time precedes 2000 and younger players whose career is still going.
There are 1450 playing careers that meet these criteria. Based on this sample, the mean NRL career in the first quarter of the 21st century is 68.3 regular season games long, played over 5.2 seasons for 13.1 appearances per season.
The median NRL career is only 37 regular season games, spaced over just four seasons for an average appearance rate of 9.3 games per season.
The discrepancy between the median and mean speaks to the lopsided distribution of games played. A handful of players compile lengthy resumes that skew the mean upwards relative to the median.
The modal NRL career is a single game, representing 119 players, a similar number to those who have at least 200 appearances in this data (128).
Looking at the whole population, we see the retention rate of players has been consistent. From 2000 to 2025, on average in any given season:
79% of players had played the season before
67% of players had played two years before
57% played three years before
40% played five years earlier
26% played seven years earlier
12% played 10 years earlier
<1% played 15 years earlier
These numbers fluctuate from year to year but are consistent over the long run. While there is some correlation between the retention rate and time, the slope is effectively zero. You may have expected long careers, particularly from the likes of Cam Smith, Ben Hunt and Daly Cherry-Evans, to reduce churn, but that doesn’t seem to be the case for the vast majority of the playing group. Those remain exceptions against the well-established trend of exponential decay (λ ≈ -0.2).
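The decay constant quoted above can be recovered from the listed retention rates with a quick least-squares fit. This is a sketch working from the rounded averages in the list (the newsletter’s own fit presumably uses the full year-by-year data), and the 15-year figure is excluded because "<1%" is only an upper bound:

```python
import math

# Average retention rates from the list above: fraction of a season's
# players who had also played N years earlier.
years = [1, 2, 3, 5, 7, 10]
retention = [0.79, 0.67, 0.57, 0.40, 0.26, 0.12]

# If retention decays exponentially, log(retention) is linear in years
# and the slope of that line is the decay constant lambda.
logs = [math.log(r) for r in retention]
n = len(years)
mean_x = sum(years) / n
mean_y = sum(logs) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs)) / \
        sum((x - mean_x) ** 2 for x in years)

print(f"lambda ~ {slope:.2f}")  # close to the -0.2 quoted above
```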
But what of quality? Rather than use a volume metric, like WARG, we can use a rate metric, like Z score, to avoid directly correlating quantity on quantity.
From here, we learn two things. While Z score is designed to average out to 100, the actual average is very clearly in the range of 65 to 70. I did not realise (or had forgotten) this about my own handcrafted metric but think it has something to do with the arbitrary cap of +250, which stops occasional games measured in the thousands from skewing averages. I could fix this by inflating the score by 30% across the board but will instead leave it as a monument to my own stupidity.
The second thing we learn is that a lot of long run analytics are bailed out by the very people that analytics fans like to criticise. Good players get long careers. Bad players get weeded out. These decisions are not made by statisticians but by coaches and administrators. Long careers allow for the accumulation of stats and make for good justifications of those metrics in the first place.
There is an upward, weakly correlated trend. Other than the known deficiencies in the principles of production, this may be because Z score lacks resolution (it is meant to be simple) and every career looks largely like everyone else’s at this scale. It is also a function of the fact that there are 500 players churned through first grade every year and at some point, you need to use the bodies you have available. Not all of them are going to be stars but they might still get game time.
Switching to seasons played, rather than games played, makes this selection effect clearer. The more seasons a player is given, the more the floor seems to improve, while the ceiling is relatively flat. The first season is full of one-hit wonders and outliers, but by season four or five, we’re dealing with established professionals; the squibs, the wusses and the dumdums have been rejected earlier in the quality control process.
This line of inquiry originally began as a question about the NRLW. There are players who played Origin or played for the Broncos in the early premiership years whose careers did not last very long. One assumes this is because being a semi-professional rugby league player is a difficult ask at the best of times, and without the infrastructure that is still being put in place to support female players, playing NRLW at a mid-level is incompatible with some career or family paths. Other careers were victims of covid or injury. Today, the NRLW seems dominated by fresh-faced 18 to 21 year olds who don’t have established careers or dependents to consider and can live off $60,000 a year.
About 80 players were used each year in the four team era (2018 - 2020), 140 with six teams (2021 - 2022), 240 with ten (2023 - 2024) and last season’s 12 teams used 307 players, which is a phenomenal rate of growth over just five years. We’ve established the typical retention rates in the NRLM, which we will consider “normal”. Here’s what the NRLW’s churn looks like:
The average retention rates for the NRLW work out to:
58% of players had played the season before (78% in the Big M)
41% of players had played two years before (67%)
30% played three years before (57%)
15% played five years earlier (40%)
Retention is much lower in the NRLW than the NRLM, but it is improving. For players in the first three categories, the rate is improving on the order of 2 to 3 percentage points per season, so perhaps in another six years or so, the NRLW will reach parity with the men’s competition in this respect.
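The "six years or so" is simple arithmetic on the one-year retention gap, assuming the improvement stays linear (a strong assumption):

```python
# One-year retention: NRLW average vs the NRLM "normal", as quoted above
nrlw_retention = 0.58
nrlm_retention = 0.78

# Assume the upper end of the quoted 2-3 points/season improvement holds
improvement_per_season = 0.03

years_to_parity = (nrlm_retention - nrlw_retention) / improvement_per_season
print(f"{years_to_parity:.1f} seasons")  # roughly six to seven seasons
```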
For players in the five year category, the rate is flat. Perhaps this is because t = 3 (there are only three seasons of data at that horizon), but I think this speaks to a top echelon of players - your Brigginshaws, Kellys, Sergii, Uptons, et al - who have made something resembling a career out of being star players and have managed to make it work. The class of player beneath them, who do not have this advantage or commitment or opportunity, turns over at a noticeably higher rate for reasons already discussed. The NRL have yet to create an environment in which the journeywoman can exist, but it is coming.
Subscribe, share and upgrade
Unfortunately, I must briefly interrupt the flow of charts to use some words to ask you to subscribe to the newsletter.
If you like this level of in-depth statistical analysis of rugby league, and don’t want to be treated like a moron-nerd (Fox manages this with alarming regularity), then the only way to get all future Stats Drops in your inbox is to subscribe.
Word of mouth from you is the easiest way for Stats Drop to find new subscribers and new subscribers keep Stats Drop in print.
You can forward this email to anyone you might think is interested or use the button below.
Upgrading to a paid subscription would be deeply appreciated if you can spare the cash in this time of consumer spending crisis. If not, the first part of Stats Drop, which is frankly the most interesting, NRL-related part, will always be free.
If you prefer, you can use Ko-Fi and get a shout-out in the next newsletter.
This is where the paywall would normally be (actually, it would probably be above the bit where we started talking about the NRLW) but the first and last issues of Stats Drop of the year are free.
Also check out The Almanac, in which I am building up a repository of stats information.
Upside
Trying to predict what a team will do next season based on the last season is a crapshoot. There’s a weak but positive relationship between wins this year and wins next year.
During my very lightweight season preview, I noted that over the last four years the Raiders had basically been the luckiest team in the history of the NRL. Of the 460-odd NRL team seasons played, three of the top 20 outperformances of Pythagorean expectation are the ‘23 (4th, +3.8), ‘24 (18th, +2.8) and ‘25 (2nd, +4.1) Raiders. While teams can escape their fate for a while, the arc of history is long but it always bends back towards Pythagoras.
A less poetic way of describing this is “mean regression”. Pythagorean expectation began in baseball as a way of estimating a team’s “true” winning percentage, as calculated by their runs for and against. This principle works equally well in rugby league - hence Pythago NRL - and is an easy way to see which teams’ win-loss record is reflective of their play on the field or is subject to the vagaries of assigning a binary win-loss outcome to games. Substantial discrepancies can be chalked up to good luck, misfortune or whatever similar verbiage is preferred by the writer. Put another way, teams that win consistently by a lot are expected to be better than teams whose wins are more marginal.
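The calculation itself is one line. This is a generic sketch: the exponent of 2.0 is the classic baseball value, and Pythago NRL may well tune a different exponent for rugby league scoring:

```python
def pythagorean_expectation(points_for: float, points_against: float,
                            exponent: float = 2.0) -> float:
    """Estimated 'true' winning percentage from points for and against.

    exponent=2.0 is the classic baseball value; rugby league variants
    may use a different exponent.
    """
    pf, pa = points_for ** exponent, points_against ** exponent
    return pf / (pf + pa)

# A team that outscores its opponents 600-500 over a 24-game season:
exp_wpct = pythagorean_expectation(600, 500)
print(f"{exp_wpct:.3f} -> {exp_wpct * 24:.1f} expected wins")
```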
While the relationship between Pythagorean expectation, that is points difference, and actual results on the field is expectedly tight, it is not perfectly aligned. If we assign the discrepancies to luck, what happens to the team’s performance when luck turns?
62% of teams experience mean reversion as we would expect: being lucky in one season leads to a reduced winning percentage in the next (and vice versa). The remainder buck the trend, having good luck and then getting better, or vice versa. Even so, only 18% of the change in winning percentage from season to season is explained by this effect. There are other, much more obvious factors (coaching and player performance and changes, injuries, scheduling, responses to late, surprising rule changes and so on) that have greater impacts on year-to-year changes in performance. Nonetheless, the inverse relationship between performance relative to Pythagorean expectation and change in winning percentage is very real.
Luck is not the only emotional ephemera we can quantitatively consider. There is also Disappointment. Based on the pre-season class Elo rating of the team, we calculate an expected number of wins against the league average as a gauge of what we can reasonably expect out of the team in the season to come. This sets the Disappointment Line and the team needs to win more games than this to avoid a disappointing season.
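The mechanics can be sketched with the standard Elo expected-score formula. To be clear, the details here are my assumptions, not necessarily how the pre-season class Elo rating is actually converted to a Disappointment Line: the 1500 league-average baseline and the simplification of playing an average opponent every round are illustrative only.

```python
def elo_win_prob(team_elo: float, opp_elo: float) -> float:
    # Standard Elo expected score (logistic with a 400-point scale)
    return 1 / (1 + 10 ** ((opp_elo - team_elo) / 400))

def disappointment_line(preseason_elo: float, games: int = 24,
                        league_avg: float = 1500.0) -> float:
    # Expected wins if the team played a league-average opponent
    # every round at its pre-season rating (a simplification).
    return games * elo_win_prob(preseason_elo, league_avg)

# An above-average team rated 1550 before the season:
print(f"{disappointment_line(1550):.1f} wins to avoid disappointment")
```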
We don’t expect the Disappointment Line to deliver a high R-squared. The intent is to identify which teams have better seasons than a reasonable pre-season expectation (and vice versa). As expected, about 50% of teams outperform their Disappointment Line. A third exceed the Line by more than 1.5 wins and about 15% do so by more than four wins. Does performance relative to the Disappointment Line have a mean-regressive effect on the next season’s winning percentage?
65% of teams experience a mean reversion effect relative to the performance against the Disappointment Line. Better seasons tend to follow disappointing seasons, and vice versa. The effect, while still mild, is actually more impactful than Pythagorean expectation, explaining 28% of the season to season change.
The interesting thing is we have two mostly independent ways of calculating Luck and Disappointment, both of which show weak but real mean reversion effects on the next season’s performance.
Let’s not do anything too complex. What happens if we just add them together?
Introducing Upside. Calculating Upside is easy: add the number of wins over/under the Disappointment Line to the number of wins over/under Pythagorean expectation. From there, we compare that total to the delta in winning percentage for the next season.
While the correlation remains weak, Upside has a stronger correlation than either Disappointment or Luck alone. Further, it gives us a trend. All other things being equal, one unit of Upside (that is, one additional win over Pythagorean expectation and/or the Disappointment Line) will cost half a win next season.
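The bookkeeping above amounts to a couple of one-liners (function names are mine):

```python
def upside(wins_vs_pythag: float, wins_vs_disappointment: float) -> float:
    """Wins over/under Pythagorean expectation plus wins over/under
    the Disappointment Line."""
    return wins_vs_pythag + wins_vs_disappointment

def predicted_win_change(upside_value: float) -> float:
    # The trend from the text: one unit of Upside costs about
    # half a win the following season.
    return -0.5 * upside_value

# A team with +5.0 Upside is predicted to win ~2.5 fewer games next year
print(predicted_win_change(5.0))
```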
In 2023, the 14-9-1 Knights had an Upside of 5.0 and the inaugural 9-15 Dolphins had an Upside of -3.8. In 2024, the Knights won 2.5 games fewer (half of 5.0), finishing 12-12, while the Dolphins went 11-13, two wins (roughly half of 3.8) better. These are cherry-picked, recent examples that happen to land on the trend line, and there are good rationales for the delta in wins that are not a straightforward statistical rectification, but they are useful for demonstrating the point.
For 2026:
Recall that this is based on the idea that a team has a “true” level. The team can make changes to improve this “true” level. If the 2025 Raiders were a 15 win team but won 19 games, and the 2026 Raiders are a 16 win team because Ricky Stuart finally got his selections sorted out, a host of young players each get a bit better and they didn’t overthink finals scheduling, then the predicted 13.5 wins becomes 14.5 wins.
It bears repeating that this ladder shows a mean regression effect that explains only 30% of the change in winning percentage from season to season. The other 70% also has a casting vote. Deciding what proportion of that 70% residual is applicable to changes in rosters, development in players, alterations in tactics or strategy or other nuances is an exercise for another time.
Projections
Here’s every player who had a decent run in 2025 and what we expect from them in 2026:
Z score is a rate metric and is not the actual statistical z-score, although that’s where my mind started when trying to overhaul the Stats Drop system of metrics, which is why it has that name.
Z score, in this context, measures volume of production per game against a benchmark of an average player at that position, given the minutes played. An average player at that position is given a score of 100 (as noted above, the actual average is more like 70). Individual game Z scores are capped at +250 and -250. A minimum number of games is required to qualify for a season Z score.
Where a player has a qualifying season Z score, we can use that to project their performance in the next season using linear regression to identify a trend line. The result is invariably regression towards the mean. Where available, we will use a weighted average of the last three seasons of qualifying Z scores, with the most recent season given three times the weight of the season two years prior, to get a more accurate projection. A single season Z score is 50% regressed to the mean but the average of three seasons is only regressed 25% to the mean.
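That scheme can be sketched as follows. The 3:2:1 weights and the 37.5% shrinkage for a two-season history are my assumptions (the text only specifies the endpoints: three times the weight for the newest season, 50% shrinkage for one season, 25% for three), and the cap from above is included for completeness:

```python
def capped_game_z(raw_z: float) -> float:
    # Individual game Z scores are clipped to the [-250, +250] band
    return max(-250.0, min(250.0, raw_z))

def project_z_score(season_scores: list[float], mean: float = 70.0) -> float:
    """Project next season's Z score from up to three qualifying
    seasons (oldest first). Weighted 1:2:3 towards the newest season,
    then regressed towards the observed mean of ~70: 50% shrinkage for
    one season, 25% for three (37.5% for two is interpolated)."""
    recent = season_scores[-3:]
    weights = [1, 2, 3][-len(recent):]
    weighted = sum(w * s for w, s in zip(weights, recent)) / sum(weights)
    shrink = {1: 0.50, 2: 0.375, 3: 0.25}[len(recent)]
    return weighted * (1 - shrink) + mean * shrink

# A one-season wonder at 100 projects halfway back to the mean of 70:
print(project_z_score([100.0]))
```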
Projections have a band of accuracy of 30 units or so, which is fairly wide considering the gap between 100 and 70 is the difference between star and middle of the field, but predicting the performance of individual players in a given year without any information about that year is always going to be something of a dice roll. The gap between projected and actual Z score is measured in Baxes, which over the course of a season and looking across the performance of a whole squad, I ascribe to coaching.
Nonetheless, star players regularly outperform their projection because they are not the ones who regress to the mean. For the vast majority of players, seasons well above or below their typical performance are a rare aberration.
“But the 21st century didn’t start until 2001…” Shut up, Dionysius Exiguus.


