Polls affect college football, and I don't like that. I'm not one of those playoff guys. I don't mind that there are often controversies. I like that the entire season means a whole lot. But the polls bother me. First, because they're made up of humans, who are flawed. Second, because they affect bowl selections, which in turn affect the budgets of the programs polled, and hardly anyone notices that many of the people contributing to the polls have a financial interest in the results of those same polls.
Now some say that we need to take the human element out of the game: leave it to objective facts. I can get behind that idea. However, that doesn't mean I'm all gung-ho about the computer rankings. In fact, I don't like the computer rankings, because I think their formulas are too complicated.
I think it'd be a better system if the rankings used were objective, without any human element introducing bias, and also clear and concise so that every team knows what they have to do to succeed before the season begins. And that's why I've come up with this set of rankings. I called it the Gunslinger poll in an earlier post, but I don't like that name, because it isn't a poll, and I don't want my name on it if it ends up sucking, which it might.
Here's how it works:
A simple set of rules, applying to every single team. You rank the Division 1-A teams from 1 to 119 according to the following rules:
1) Start from the bottom, with the teams that have lost the most. Example: all 0-10 teams are ranked ahead of any 0-11 team.
2) Next, moving toward the top, slot teams with more victories higher among teams with the same number of losses. Example: 5-1 teams are ranked ahead of any 4-2 team, and all 5-1 teams are also ranked ahead of any 4-1 team. The rationale is that each game played is another opportunity to lose, a risk the team takes. Therefore, teams that play in conference championship games get an advantage over teams that don't (they're at an unfair disadvantage in the current system, in my opinion).
3) Assume that every remaining game for a particular team may be a win; that's why the first slotting is done by losses. Example: an 0-3 team will be slotted ahead of a 1-4 team, since the 0-3 team can still win 8 games in an 11-game season while the 1-4 team cannot.
4) After the first three steps, you should have all 119 teams slotted, with each team grouped alongside the other teams that share its record. The reason for doing this is simple: at the beginning of the year, the goal for every team is to win every single game on the schedule. Period. Whether that team is in a BCS conference or a mid-major, the goal is the same. If a team, like Utah last year, never gets beaten, who is to say that any other team is better? Nobody beat them, and nobody can say with absolute certainty that anyone else would have. Now, the flaw, you may say, is that some teams play more difficult schedules than others. That's true, and you'll see how I deal with that below. But the truth of the matter is that the guys giving their all on the field don't set the schedules. Schedules are set sometimes a decade in advance, by people who have no idea how good the opponents will be when the games are finally played. Yes, conferences make a difference in schedule strength, but the system accounts for it. I continue...
5) As between teams with the exact same record, rank the team that has played the tougher schedule above the team with the weaker schedule. Example: two teams, both 4-0; one has played the 10th-toughest schedule, the other the 50th. Rank the team with the 10th-toughest schedule higher. Schedule strength should matter, but it should be secondary to actually winning the games. Also, as the season progresses, schedule strength is fluid, and the stronger conferences end up with better schedule strength because of their conference games.
6) One exception to Rule 5: when two or more teams with the exact same record have played each other head to head, and the team with the lower schedule strength beat the team with the higher SOS, the winning team between the two shall be slotted immediately above the losing team, even if it jumps several others to get there. Example: Michigan State, with a worse SOS than Notre Dame, will be slotted above the Irish. In the event that three teams with the same record beat each other in a circle, revert to the SOS.
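For the curious, the six rules above can be sketched in a few lines of code. This is just my illustration, not anything official: the team names, SOS ranks, and `beat` sets below are hypothetical placeholders, and real data would come from actual game results.

```python
# A minimal sketch of the six rules above. Team names, SOS ranks, and
# the `beat` sets are hypothetical placeholders for illustration.
from dataclasses import dataclass, field

@dataclass
class Team:
    name: str
    wins: int
    losses: int
    sos_rank: int                            # 1 = toughest schedule played
    beat: set = field(default_factory=set)   # names of defeated opponents

def rank(teams):
    # Rules 1-3: fewest losses first, then most wins among equal losses.
    # Rule 5: among identical records, tougher schedule (lower SOS rank) first.
    order = sorted(teams, key=lambda t: (t.losses, -t.wins, t.sos_rank))
    # Rule 6: within each same-record group, a winner slots immediately
    # above a team it beat; a circle of wins reverts the group to SOS order.
    result, i = [], 0
    while i < len(order):
        j = i
        while j < len(order) and (order[j].wins, order[j].losses) == \
                (order[i].wins, order[i].losses):
            j += 1
        result.extend(_head_to_head(order[i:j]))
        i = j
    return result

def _head_to_head(group):
    names = {t.name for t in group}
    # Detect a win-circle in the head-to-head graph within this group.
    seen, done = set(), set()
    def dfs(t):
        seen.add(t.name)
        for u in group:
            if u.name in t.beat:
                if u.name in seen and u.name not in done:
                    return True                 # circle found
                if u.name not in seen and dfs(u):
                    return True
        done.add(t.name)
        return False
    if any(dfs(t) for t in group if t.name not in seen):
        return list(group)                      # circle: keep SOS order
    # No circle: repeatedly move each winner immediately above its victim.
    order = list(group)
    moved = True
    while moved:
        moved = False
        for i, loser in enumerate(order):
            for j in range(i + 1, len(order)):
                if loser.name in order[j].beat:
                    order.insert(i, order.pop(j))
                    moved = True
                    break
            if moved:
                break
    return order
```

Using the Michigan State example from Rule 6 (with made-up SOS ranks): if Michigan State is 4-0 with the 30th-toughest schedule and Notre Dame is 4-0 with the 12th, the SOS sort puts the Irish first, but the head-to-head pass slots the Spartans immediately above them.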
And that's it. Simple. No convoluted computer formulas. No humans deciding who's best. It's a simple set of rules, and every team knows what to do to finish first: win all your games.
Is it a perfect system? Well, no, but nothing is. The idea is to reward the teams that do the most to win every game: a focus on actual achievement. Then, when comparing apples and oranges, an objective tiebreaker is used.
So that's how it works. I use the GBE Schedule Strength Ratings for the SOS scale. Here are the top 10 as of last weekend (record, SOS rank):
10) Georgia: 4-0, 86th
9) Nebraska: 4-0, 84th
8) Texas Tech: 4-0, 65th
7) Southern California: 4-0, 58th
6) Florida State: 4-0, 28th
5) California: 5-0, 119th
4) Wisconsin: 5-0, 77th
3) Alabama: 5-0, 60th
2) Virginia Tech: 5-0, 48th
1) Penn State: 5-0, 25th
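As a quick sanity check, the top 10 above falls straight out of a plain sort on (losses, most wins, SOS rank), using nothing but the records and GBE SOS ranks listed. This snippet is my own verification sketch, with the teams entered from No. 1 down:

```python
# Reproducing the top 10 above from the listed records and SOS ranks.
# Tuples are (name, wins, losses, GBE SOS rank), entered in rank order.
import random

top_ten = [
    ("Penn State", 5, 0, 25),
    ("Virginia Tech", 5, 0, 48),
    ("Alabama", 5, 0, 60),
    ("Wisconsin", 5, 0, 77),
    ("California", 5, 0, 119),
    ("Florida State", 4, 0, 28),
    ("Southern California", 4, 0, 58),
    ("Texas Tech", 4, 0, 65),
    ("Nebraska", 4, 0, 84),
    ("Georgia", 4, 0, 86),
]

# Shuffle, then re-rank: fewest losses, then most wins, then SOS rank.
shuffled = top_ten[:]
random.shuffle(shuffled)
ranked = sorted(shuffled, key=lambda t: (t[2], -t[1], t[3]))
assert ranked == top_ten  # recovers the published order
```

No head-to-head adjustment is needed here, since none of these ten teams with matching records have played each other yet.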
The rankings are fluid, especially the SOS portion, and bye weeks shift things around a lot. It does show, to me at least, that Penn State might be slightly underrated by the pollsters. They've played the toughest schedule of any undefeated team so far.
How should teams approach this system? Well, win every game, and schedule good teams in advance. Winning is the key, though. Play a weak schedule and you might still sit toward the top on W/L record alone, but it will pretty much prevent you from winning a title if any other team runs the table.
So that's the system, and those are the current top 10. When I can figure out how to do it, I may link to the entire 119 list on a separate page.
Tuesday, October 04, 2005
The Little Lebowski Urban Achiever College Football Rankings
Posted by LD at 11:25 PM