The first College Football Playoff committee rankings are out, and while the top four looked much like what many expected, there were certainly a few surprises along the way.
It appeared the committee held fast to last year's mantra that strength of schedule and resume were of utmost importance. Along with these metrics, some additional factors alluded to in the post-release interview also played a part, as did the ever-important eye test. It is now up to the fans to analyze, interpret, and over-analyze the rankings' meaning, so let's get started.
In his post-ranking interview, committee chairman Jeff Long noted that one of the differentiating factors between teams was wins against teams with a .500 record or better. He also mentioned that the committee was looking for a balance of offensive and defensive ability. These two nuggets can certainly be factored into future analysis, though they are likely second-order effects. It got me wondering: which metrics was the committee relying on primarily to determine these initial rankings?
In an attempt to answer this, I measured the correlation between several popular metrics and the committee's rankings. The chart below shows the rankings and the corresponding correlations. Without turning this into a stats class, the closer the correlation value is to one, the better that metric predicts the committee's rankings, which we can interpret to mean that the metric, or something like it, is weighing more heavily in the committee's deliberations.
Committee Ranking | Team | ESPN SoS | Sagarin SoS | ESPN FPI | ESPN SoR | Sagarin Rating | F/+ |
--- | --- | --- | --- | --- | --- | --- | --- |
1 | Clemson | 18 | 28 | 7 | 1 | 2 | 1 |
2 | LSU | 28 | 37 | 8 | 3 | 11 | 3 |
3 | Ohio State | 72 | 68 | 4 | 12 | 6 | 6 |
4 | Alabama | 3 | 9 | 6 | 2 | 1 | 2 |
5 | Notre Dame | 14 | 16 | 9 | 9 | 8 | 5 |
6 | Baylor | 109 | 104 | 1 | 15 | 3 | 7 |
7 | Michigan State | 57 | 59 | 19 | 7 | 20 | 14 |
8 | TCU | 48 | 53 | 2 | 6 | 4 | 10 |
9 | Iowa | 63 | 47 | 29 | 8 | 15 | 13 |
10 | Florida | 6 | 15 | 12 | 4 | 9 | 11 |
11 | Stanford | 25 | 23 | 13 | 13 | 10 | 12 |
12 | Utah | 24 | 19 | 20 | 11 | 16 | 18 |
13 | Memphis | 88 | 84 | 36 | 10 | 27 | 21 |
14 | Oklahoma State | 66 | 70 | 14 | 5 | 14 | 22 |
15 | Oklahoma | 49 | 52 | 3 | 16 | 5 | 8 |
16 | Florida State | 62 | 64 | 15 | 20 | 17 | 20 |
17 | Michigan | 46 | 39 | 18 | 31 | 13 | 4 |
18 | Ole Miss | 40 | 41 | 10 | 17 | 12 | 15 |
19 | Texas A&M | 17 | 22 | 17 | 19 | 24 | 26 |
20 | Mississippi State | 42 | 57 | 16 | 24 | 21 | 16 |
21 | Northwestern | 31 | 24 | 57 | 23 | 41 | 43 |
22 | Temple | 92 | 97 | 45 | 21 | 40 | 35 |
23 | UCLA | 37 | 36 | 22 | 28 | 26 | 27 |
24 | Toledo | 113 | 116 | 43 | 14 | 34 | 29 |
25 | Houston | 124 | 124 | 33 | 18 | 25 | 25 |
Correlation | | 0.1409 | 0.1430 | 0.4252 | 0.5731 | 0.5886 | 0.6377 |
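For readers who want to play along at home, a correlation like the ones in the table can be computed in a few lines. This is a hypothetical sketch using Pearson's r on the rank columns; the article does not specify its exact method (e.g., whether ranks or raw rating values were used), so the numbers this produces will not necessarily match the table's correlation row.

```python
# Sketch: correlating the committee's ranks with a metric's ranks.
# Assumption: plain Pearson's r over the rank columns; the article's
# exact methodology is unstated, so treat this as illustrative only.
import numpy as np

committee = np.arange(1, 26)  # committee ranks 1-25, in table order

# Metric ranks copied from the table, same team order (Clemson first, Houston last)
f_plus = np.array([1, 3, 6, 2, 5, 7, 14, 10, 13, 11, 12, 18, 21, 22,
                   8, 20, 4, 15, 26, 16, 43, 35, 27, 29, 25])
espn_sos = np.array([18, 28, 72, 3, 14, 109, 57, 48, 63, 6, 25, 24, 88,
                     66, 49, 62, 46, 40, 17, 42, 31, 92, 37, 113, 124])

def rank_correlation(metric_ranks):
    """Pearson's r between the committee ranks and a metric's ranks."""
    return float(np.corrcoef(committee, metric_ranks)[0, 1])

print(f"F/+ vs. committee:      {rank_correlation(f_plus):.4f}")
print(f"ESPN SoS vs. committee: {rank_correlation(espn_sos):.4f}")
```

Whatever the exact recipe, the qualitative takeaway survives: F/+ tracks the committee's order far more closely than raw strength of schedule does.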
From the chart, it is clear that the metrics that correlate best with the current rankings are those that combine strength of schedule, team efficiency, and predictive measures. This isn't much of a surprise given the committee's multi-faceted approach, but it is interesting that the strongest correlation belongs to the F/+ metric, which is based on play-by-play efficiencies. Strength of schedule is implicit in that calculation, and F/+ is considered a predictive metric, but it is also complex enough that the committee is unlikely to use it directly, given the rule that every member must fully understand each statistic being discussed. Nevertheless, F/+ outperforming the Sagarin Ratings echoes what Jeff Long stated: in addition to the eye test and resume scrutiny most fans are familiar with, offensive and defensive prowess are very much in play.
It will be interesting to see whether this correlation holds further into the season, or whether some combination of these metrics provides a better predictor of committee results. But what about the teams that don't fit the mold?
Outliers
One of the most interesting things about the F/+ correlation is the teams that don't fit it. Two significant outliers stand out: Michigan, which falls at 17 despite its 4th-ranked F/+ standing, and Northwestern, which landed at 21 despite its 43rd-ranked F/+. Each team has two losses, but the Wolverines' relatively low ranking and Northwestern's seemingly high one can both be explained by a factor not yet discussed: quality wins.
While Michigan's best win is arguably over Northwestern (there was also the inexplicable, last-gasp loss to Michigan State), the Wildcats hold a week-one victory over 11th-ranked Stanford. On the strength of that quality win, Northwestern made the top 25 and is ranked ahead of undefeated Houston and Toledo, despite two losses and no other victories over teams in the committee's top 25. This seems a clear statement from the committee that big wins, whether recent or from the season opener, will not be forgotten.
This should also be great news for Cardinal fans. That Northwestern's victory is viewed as a major accomplishment for the Wildcats implies the committee holds a high opinion of Stanford. In addition, the Cardinal's only loss staying in the top 25 makes the week-one blemish far less unsightly. More good news for David Shaw's team: future regular-season opponent Notre Dame sits at 5th, and potential Pac-12 Championship Game opponent Utah at 12th.
While this is only the season's first release of the committee rankings, we have learned much about the type of teams the committee is willing to support through the remainder of the season. Big games this weekend will undoubtedly shake up the picture, and it will be interesting to follow whether advanced metrics and consideration of big wins can predict future committee rankings.