2022 Tiered Rankings and Projections
Links to my projections (which are bad) and rankings (which are good)
I’ve talked a lot recently about how important projections are to my process. It’s not for obvious reasons — some of you might remember a version of this post last year was titled, “My full 2021 projections, and why you shouldn’t care.” In another post from last year that I feel like I could just reprint in full, titled “What types of fantasy analysis is overused?”, I went through a long look at “vacated opportunity” and why that’s such a faulty way of looking at things because just projecting team volume can be so, so difficult. A quote from an even earlier piece from my time at CBS, where I’d reviewed my projections from the season before against the actual outcomes for that season:
For exactly half the teams, my projection was at least 50 plays, passes or runs away from the actual observed outcome.
What this says is for half the teams, I missed somewhere on my team-level volume. Not necessarily all three elements, but one of overall play volume, pass attempts, or rush attempts was pretty far off base from the observed outcome. Some of that is due to things like injuries that we can’t predict, but that’s sort of the whole point with not relying too heavily on projections — chaos is the rule in the NFL, and we need to think in terms of contingencies.
Projections took me longer this year than I can ever remember, and as I’ve gone through the process, I’ve thought about their value and why I still find them so useful. It’s not just that there’s a ton of research involved. It used to be that, and that’s still part of it. The process of going through all that evidence plays a big part in finalizing my player takes.
But a bigger thing is that, over time, I’ve learned to identify specific inaccuracies in the market that are caused by full-season projections. It’s wild how many times while doing projections I thought, “Ah, now I get that guy’s price.” Even when I expected a player would look good in a projection, it became clear how hard it was to get to a reasonable team-level projection without producing an output that made that player look like a strong pick. And what that means is that relative to where they are being drafted, they probably are a strong pick to “beat value.” But incidentally, in the “fantasy analysis that is overused” piece I referenced above, one of the three subheadings was “Relating ADP with some positional rank to define value.” Another quote:
First, you see the analyses that relate end-of-season finish to preseason ADP. “I took him WR30 and he was WR25,” so he helped your team. Second, you see it with projections. “He’s going WR30 and he projects as my WR25,” so he’s a good pick.
I’m just going to keep quoting myself like a complete donkey. I added:
There is just so much more that goes into this, the most obvious of which is opportunity cost. You don’t win fantasy leagues by stacking small wins against ADP; you win fantasy leagues by stacking league-winners. That’s why we call them that. And if you took a guy who was a slight win against ADP — or worse projects as a small win against ADP — you’re ignoring the profiles of all the other players you could draft at that position. My aforementioned podcast partner Mike Leone has called certain running backs who fit this discussion “silent killers” to your roster. If you take a running back in the fourth round at RB25 and he finishes RB20, I can promise you he is a massively negative presence to your team in any kind of win rate or related analysis, both because running backs bust at a high rate so a RB20 finish isn’t that impressive, and also because it’s a near lock that other players who you could have reasonably taken there were much better and were disproportionately on the rosters that did win.
While going through my projections, as I identified these players where I said to myself, “Oh, now I get his ADP,” I was more or less saying, “Now I really see where the error is in the market.” I know that sounds cocky, but for these “boring” players, all it takes is stepping back from the specific projection to see that everything else that could possibly influence their value isn’t positive. Sure, they may look like they are going lower than where they are likely to finish, but in these cases we’re talking about players whose own skill — whether due to age-related decline or having just not shown it yet — suggests strongly they don’t have big upside beyond the “available opportunity.” They aren’t likely to smash in their situation; they are potentially going to be a small win, but that’s about it.
The second part is that with these types of players, the chaos tends to flow negatively against them. A teammate unexpectedly breaks out? That teammate soaks up the available opportunity that looked like the reason to draft the boring player. I’m actually vaguely interested in Allen Lazard, but his ADP is driven by the hope he’s Aaron Rodgers’ new No. 1, so he’s a great example: if Romeo Doubs is just that dude, Lazard is dust. Similarly, Christian Kirk has never shown us enough to justify where he goes, but he goes there because his contract and his target competition suggest he’s the most likely No. 1 in Jacksonville — and I absolutely projected it that way. But what if he’s not clearly better than Marvin Jones and company? Then it’s just a flat target tree in a passing offense that might still be below average even if Trevor Lawrence takes a bit of a step forward, out from underneath Urban Meyer’s thumb.
I don’t really have an issue with Lazard or Kirk as talents, but in thinking through league-winning outcomes, I — for example — don’t understand them going ahead of first-round rookie WRs with strong analytical profiles. I do understand why they go ahead of Treylon Burks when I do a projection.
This gets back to how certain we are about these, when — as I referenced above — just a couple of years ago I was at least 50 plays off on one of the key team volume metrics for half the league’s teams. For reference, the entire range of play volume last year from the team who ran the most (Baltimore) to the team who ran the least (Seattle) was 231 plays. The total range of pass volume was actually a bit wider — 237 attempts. For rushes, it was 171 attempts from top to bottom.
Missing by 50 one way or the other actually creates an error bar of 100 plays (my projected number, plus or minus 50). In basically no case am I projecting extreme outcomes, but when I have a lean toward a fast-paced or slow-paced — or pass-heavy or run-heavy — offense, I’m covering what feels like a huge swath of potential outcomes. And yet, it doesn’t necessarily play out that way.
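To put those two numbers side by side, here’s a quick back-of-the-envelope sketch — just the arithmetic from the figures above, nothing from my actual projection sheet — showing how much of the league-wide spread a plus-or-minus-50 error bar swallows:

```python
# League-wide ranges from last season, as cited above:
# total plays (Baltimore to Seattle), pass attempts, rush attempts.
ranges = {"plays": 231, "passes": 237, "rushes": 171}

# Missing by 50 in either direction spans a 100-play error bar.
error_bar = 2 * 50

for metric, spread in ranges.items():
    coverage = error_bar / spread
    print(f"{metric}: a 100-play error bar covers {coverage:.0%} of the league range")
```

In other words, an error bar that size covers roughly 40 to 60 percent of the distance between the league’s most extreme teams on each metric, which is why missing by 50 on half the teams is such a humbling result.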
Does this mean my projections are just bad? Listeners to my appearances on Establish the Edge for a projection podcast series with Leone will know we’ve been amazed at how close our team volume projections have been throughout. We’re probably similar analytically, but I took the similarities as a sign my process is doing something right, because he’s built so many intelligent automated improvements into his projections over years of work on them, which gives him a great baseline to make manual adjustments from (whereas I’m making calls on regression and layering in offseason moves and expectations a little more by feel). I think the similarities in our projections are far more reflective of how little leeway there is in a good baseline projection. There are knowns and unknowns, and the strength of a track record affects how much one might regress numbers to the mean, but you sort of have to land in a certain range with most teams; projecting anything outside it means projecting the types of outcomes I’d describe as chaotic, the ones that take some player results well out of line with the market. But those outcomes come to pass every season! We just don’t know what they are, where they’ll hit, and in which direction, so projections kind of have to be a little more… sane. Realistic. I don’t have the right word.
But even within these sane projections, there are a lot of things I don’t feel confident about. For example, after copying everything over to the sheet I’ve linked below, I noticed how high I was on James Conner, and while I think he could have a massive workload, I’d also noticed I was running a little hot on Arizona’s offensive touchdowns. I went back to my Arizona projection — one of the first ones I did, so I didn’t yet have a great feel for baselines — and noticed a few things. First, I probably hadn’t regressed Conner’s TD rate enough. Second, I still had Darrel Williams projected for more work than Eno Benjamin, a completely different issue but a spot where I would have felt more comfortable being bullish on Benjamin after some recent positive reports. (To that issue, I did make a final pass on every offense, and I feel pretty good there aren’t a ton of those situations, but I will say there were some where I was like, “Eh, this could go either way, but this is reasonable enough.”) Anyway, I made a few tweaks to the Arizona backfield, including lowering Conner’s TD rate and shifting some work off of him, because if Eno really is the No. 2, I’m a bit more bullish on his ability to siphon work. Conner dropped something like six or eight spots in my RB rankings, down to RB14 from a spot in the top 10. And just like that, I said, “Ok, I get his price there,” because it was a lot more in line with the market, whereas my initial takeaway had been something like, “Man, Conner really pops in my projections.”
The point of that anecdote isn’t about Conner specifically. It’s more that I don’t think my initial projection of Conner was actionable, because I took a firmer line regressing the TD rates of similar backs as I went through all 32 teams. I still have Conner’s number very high, but the reason he popped in my projection is that I’m not a robot. There’s a necessary manual element to projections, and with roughly 100 inputs for every team, I wasn’t consistent enough to project every single team the exact same way; expecting that would be insane. And when I noticed exactly where I wasn’t consistent, the seemingly actionable bit of intel was wiped away.
This is a more granular point, because I’ve already made the case that I wouldn’t feel confident in my projections even if it were possible to bring the same clarity to each one, analyzing every player and situation through the same lens of fairness and objectivity. And by the way, I’ll readily admit I thumbed the scale for some of the players I like more than others. The thing is, sometimes there’s room to thumb the scale, and other times there just isn’t, not without nuking some other player’s projection in an “unfair” way.
But that’s just in reference to the projections whose outcomes might make sense. In my recent post on contingency-based drafting, the idea was that we draft guys not because of their projection but because of what the payoff might be if some catalyst occurs. Backup running backs are the easy example — Benjamin still came out lower in my Arizona projection than where I ranked and would draft him, even after the adjustments — but receiving options in the many crowded pass-catching groups make for clear examples as well. KJ Hamler’s projection before the Tim Patrick injury was very weak, but I was still drafting Hamler, and now that Patrick is unfortunately out, my expectations for Hamler have increased a ton. The size of that potential increase — before the catalyst is known — is the most relevant element of the case for several talented — or even just potentially talented — players being drafted in spots where it’s tough to actually project any volume. And I for one have never found a good way to appropriately account for that in a projection.
I’ve written versions of most all of this before. The biggest thing I think I articulated better this season is that the closeness of my team volume projections, and in many cases my player projections, to Leone’s is an indication of how close projections are across the industry. I already knew projections heavily influence ADP, but the result is we’re not even getting a range of prices that reflects chaotic outcomes. Recognizing that is an edge.
You want a rule of thumb? In my Lazard, Kirk, and Burks example above, the Lazard and Kirk types who look like steals based on their projections are fades precisely because of that: the ADP market is baking in an element where drafters are saying, “Ok, we still have to rank them close to their projection, but they need to go lower because there’s not much upside here.” The market, not the projection, is capturing those additional outcomes in the player’s range. But while the movement is directionally accurate, the player rarely moves down far enough from that baseline projection.
On the flip side, the players — like rookies — who are unknowns, and thus can’t be projected to just be Ja’Marr Chase right out of the gate, often get drafted quite a bit higher than their projections. Fantasy analysts the industry over argue against them on the grounds that their projected volume doesn’t line up with their ADP. The signal is the exact opposite. The market is telling you these players have upside that can’t be quantified in an August projection, but could materialize.
Everything I wrote above is why I will not be updating my projections beyond today — this is where I came out, and I hope I’ve explained why the tedious work of adjusting the back end of projections as August news influences our camp discussions doesn’t necessarily produce actionable intel beyond what was already gleaned from projecting out the team and considering the various ways things could go. That’s the stuff I find valuable in projections, and it’s what I’m highlighting in my Offseason Stealing Signals posts, which will continue to release starting this weekend.
The rankings, however, I will tinker with a ton, and that’s where I try to capture the full upside of a player’s range of outcomes and the ways I would play drafts. I do allow ADP to influence my rankings, which is to say in most cases I’m not ranking a player way, way above or below ADP. That’s because many will draft straight off this list. I still recommend using ADP as a tool and getting players as late as possible; that’s a very important element of crushing your draft.
I’m not at all satisfied with my first run, but I did get to test run these rankings last night in the wonderfully fun format of the DraftSharks Invitational (SuperFlex, TE Premium best ball with 25 roster spots). I hope to find time to write up that draft and talk through my process in approaching such a unique format, but here’s a link to the board. I did think the rankings held up pretty well while going through that long of a draft, and I made a couple tweaks where I thought I was a little off, including how I prioritized Zero RB candidates.
That’s all for now. Below are the links to both my projections and rankings for paid subs. If you’re not a paid sub, it’s $8/month (or $55 for 12 months). Hopefully you don’t see this as some kind of bait and switch; I understand that running into paywalls is annoying, but I do try to keep this thing affordable. I always try to mention you can subscribe for a few months and get the majority of the most actionable stuff for, say, $24, which would take you into November. Links below.