The 2020 NFL season has been rough on defenses, with the average offense generating positive expected points per play through nine weeks for the first time since 2014 and only the second time during the PFF era (since 2006). Passing has been even more of a boon for offenses, with the average pass play now twice as valuable in 2020 as it was in 2019.
There are a number of explanations for this. I think the most plausible one is that defense is a weak-link, fragile system, and in an offseason without minicamps, a full training camp or preseason games, the cohesiveness necessary to play plus defense is too strained. Additionally, there is a great deal of evidence that, without full stadiums, home-field advantage is all but gone in 2020, allowing for better communication by road offenses across the league.
So, what can defenses do to help themselves? Obviously, having good players is important, and we can measure that using our player grades — available to those with a PFF Elite subscription. We can be a little more intelligent with these numbers as well, adjusting them for opponent strength and weighting recent data exponentially more heavily than older data (as in the offensive and defensive ratings used in PFF Greenline). The in-season correlation between expected points added and defensive rating is quite high (r-squared = 0.477 from 2014-2020).
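PFF doesn't publish the exact formula behind these ratings, but the recency-weighting idea can be sketched with a simple exponential decay. The `decay` value of 0.9 here is an illustrative assumption, not the actual parameter, and the opponent-strength adjustment is omitted for brevity:

```python
def weighted_rating(weekly_grades, decay=0.9):
    """Exponentially weighted average of weekly grades.

    `weekly_grades` is ordered most recent week first, so older weeks
    are multiplied by successively smaller powers of `decay`.
    The decay of 0.9 is a hypothetical choice for illustration.
    """
    weights = [decay ** i for i in range(len(weekly_grades))]
    total = sum(w * g for w, g in zip(weights, weekly_grades))
    return total / sum(weights)

# A defense grading 80 last week after three weeks of 60s
# rates above the simple average of 65, reflecting recent form.
print(weighted_rating([80.0, 60.0, 60.0, 60.0]))
```

With `decay=1.0` this reduces to a plain average; smaller values make the rating chase recent performance more aggressively.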
That’s all fine and dandy, but it doesn’t do much good when the hay is in the barn. Sure, there are small things a team can do, such as trade for Desmond King II or get a big-time player back from injury, but for the most part, a team's players are its players. Thus, once all is said and done, coaches need to be able to keep their opponents uncomfortable by being less predictable.
We studied the idea of predictability earlier this season, trying to determine whether a good running game could make defenses less predictable in coverage (not really). Using Shannon entropy, we are able to measure how much information each “decision” by a defensive play-caller gives the opposing offense. The more uniform a play-caller’s distribution of coverages, the higher the entropy. The less uniform (i.e., the more one leans on one or a few coverage types), the lower the entropy.
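Concretely, the entropy of a coverage chart can be computed directly from the observed frequencies of each call. The coverage labels below are hypothetical snap charts for illustration:

```python
import math
from collections import Counter

def coverage_entropy(calls):
    """Shannon entropy (in bits) of a sequence of coverage calls.

    A defense that mixes its coverages uniformly maximizes entropy;
    one that leans on a single coverage drives it toward zero.
    """
    counts = Counter(calls)
    n = len(calls)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical eight-snap charts for two defenses
balanced = ["Cover 1", "Cover 2", "Cover 3", "Cover 4"] * 2
predictable = ["Cover 3"] * 7 + ["Cover 1"]

print(coverage_entropy(balanced))     # 2.0 bits, the maximum for four coverages
print(coverage_entropy(predictable))  # ~0.54 bits, far easier to anticipate
```

The balanced defense forces the quarterback to prepare for every coverage equally, while the Cover 3-heavy defense surrenders most of that uncertainty before the snap.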