January 20, 2013
Numbers alone rarely tell a complete story, and that can be a blessing or a curse, depending on who is using them and how. We’re not going to go too deep into explanations here, but below is a bit of food for thought. If you have specific questions or thoughts, please feel free to contact us.
Kenpom.com offers a wealth of data and we strongly recommend everyone visit that site. Understanding, at least with some degree of clarity, the methodology and assumptions used by Pomeroy and other advanced ranking systems can greatly enhance the usefulness of the data and predictions.
The overall predictive rankings of teams are based largely on their respective offensive and defensive efficiency figures. Although the methodologies used to calculate raw efficiency figures differ slightly from one system to another, the differences are usually immaterial, and most systems should approximate one another.
The focus of most users of Kenpom and other advanced rating systems is (rightfully) squarely on the adjusted figures. Most people are fine accepting the limitations of different systems without understanding the impact and sensitivity of those limitations on the figures they’re using.
As you’ll see below, it might be best that people don’t get too deep into the details: you could spend hours making small adjustments only to come up with answers that aren’t all that different from what you started with. Still, considering the methodologies and assumptions of different ranking systems can be enlightening and useful for some.
BIG TEN DATA (through D-I games of 1/19/2013)
The tables below list Kenpom offensive and defensive efficiency data for Big Ten teams (all D-I games including nonconference through 1/19/2013).
Offense Example: Minnesota’s OffEff of 4.5 means that their Adjusted Offensive Efficiency is 4.5 better (higher) than their Raw Offensive Efficiency (Note: the average D-I Adjusted Efficiency figure is 99.5). Minnesota’s OffEffRank of 3 means that their Adjusted Offensive Efficiency rank among 347 D-I teams is 3 spots higher as compared to their Raw Offensive Efficiency rank.
Defense Example: Minnesota’s DefEff of 5.1 means that their Adjusted Defensive Efficiency is 5.1 better (lower) than their Raw Defensive Efficiency (Note: the average D-I Adjusted Efficiency figure is 99.5). Minnesota’s DefEffRank of 38 means that their Adjusted Defensive Efficiency rank among 347 D-I teams is 38 spots higher as compared to their Raw Defensive Efficiency rank.
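The “Eff” deltas in the two examples above can be sketched in a few lines of Python. Note that the raw efficiency figures below are hypothetical, back-calculated so the deltas match the 4.5 and 5.1 figures cited; only the deltas themselves come from the text.

```python
def off_improvement(adjusted, raw):
    # Offense: higher efficiency is better, so improvement = adjusted - raw.
    return round(adjusted - raw, 1)

def def_improvement(adjusted, raw):
    # Defense: lower efficiency is better, so improvement = raw - adjusted.
    return round(raw - adjusted, 1)

# Hypothetical raw values chosen to reproduce Minnesota's deltas above.
print(off_improvement(adjusted=110.0, raw=105.5))  # 4.5
print(def_improvement(adjusted=88.0, raw=93.1))    # 5.1
```

The sign convention is the only subtlety: both columns report an improvement, so a positive number is good on either end of the floor even though defensive efficiency itself is better when lower.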
WHY ARE THERE DIFFERENCES? DO THEY MAKE SENSE?
A high-level answer is that a team’s performance in each game is adjusted for the level of its competition (other factors also feed into a team’s adjusted efficiency figures, including preseason rankings and a higher weighting of more recent games). Minnesota, relative to other Big Ten teams, has played a strong nonconference schedule, so it’s not surprising they would have some of the larger adjustments.
One thing Kenpom doesn’t take into account is whether a key player was out for a particular game. Thus, when the Gophers hosted South Dakota State and Nate Wolters was out with an injury, Kenpom effectively assumed that Wolters played his usual 35 or so minutes.
Minnesota’s defensive efficiency against SDSU was 93.6 and this was one of the Jackrabbits’ worst offensive performances of the season. However, because SDSU’s adjusted offensive efficiency for the season is a strong 108.2, Kenpom adjusts Minnesota’s defensive efficiency for the game down to a much better 85.8.
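The direction and rough size of that adjustment can be sketched with a simple ratio: scale the game result by how the opponent’s season-long adjusted offense compares to the D-I average. Kenpom’s actual method is iterative and more involved, so this is an illustration of the idea, not his formula; it lands near, but not exactly on, the 85.8 cited above.

```python
D1_AVG_EFF = 99.5      # average D-I adjusted efficiency, per the note above
game_def_eff = 93.6    # Minnesota's raw defensive efficiency vs. SDSU
sdsu_adj_off = 108.2   # SDSU's season-long adjusted offensive efficiency

# A strong offense (108.2 > 99.5) shrinks the defensive number, i.e. credits
# the defense for holding a good offense below its norm.
adjusted_game = game_def_eff * (D1_AVG_EFF / sdsu_adj_off)
print(round(adjusted_game, 1))  # 86.1
```

The residual gap between this 86.1 and Kenpom’s 85.8 reflects the simplifications; the point is that a ~94 raw game becomes a mid-80s adjusted game purely because SDSU’s offense is rated well above average.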
Logically, the size of this adjustment doesn’t make sense because Wolters was injured and didn’t play in the game. We have estimated the impact of Wolters’ injury on Minnesota’s overall adjusted defensive efficiency for the season, but that’s too much detail for this article.
However, something to be cognizant of is that there are many games across college basketball for which adjustments, both positive and negative, may not be logically warranted, even though under the ranking system they make perfect sense.
In Minnesota’s case there are a few examples with more than an insignificant impact, including the SDSU and Memphis games (Joe Jackson sat after playing just 7 minutes; it was Geron Johnson’s first game back; etc.). At the same time, one can point to Trevor Mbakwe having played less than 55% of the team’s minutes so far this season. As it’s reasonable to assume he’ll play more than that over the remainder of the season, an additional adjustment to the predictive adjusted efficiency figures is warranted.
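One crude way to think about the Mbakwe minutes issue is a minutes-weighted blend of the team’s efficiency with and without a given player on the floor. Every number below is hypothetical, and this is only an illustration of the kind of additional adjustment described above, not anything Kenpom or we actually compute this way; the 55% figure is the only value taken from the text.

```python
def blended_eff(eff_with, eff_without, minutes_share):
    # Minutes-weighted blend of team efficiency with/without a player.
    return minutes_share * eff_with + (1 - minutes_share) * eff_without

# Hypothetical: suppose the defense is better (lower) with the player on
# the floor. Season to date he has played ~55% of minutes; if he plays
# more going forward, the forward-looking figure should improve.
season_to_date = blended_eff(eff_with=85.0, eff_without=92.0, minutes_share=0.55)
rest_of_season = blended_eff(eff_with=85.0, eff_without=92.0, minutes_share=0.75)
print(season_to_date, rest_of_season)
```

The gap between the two blends is the sort of manual tweak to the predictive figures the paragraph above has in mind.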
Now, you could spend days running through adjustment exercises for all D-I teams and there probably wouldn’t be many earth-shattering changes in rankings. However, understanding unusual and significant factors in reaching predictive rankings can be worthwhile.