Emmitt Smith isn’t big or fast and he can’t get around the corner. I know all the folks in Pensacola will be screaming and all the Florida fans will be writing me nasty letters, but Emmitt Smith is not a franchise player. He’s a lugger, not a runner. The sportswriters blew him out of proportion. When he falls flat on his face at Florida, remember where you heard it first.
The above words of wisdom were brought to you by long-time recruiting analyst Max Emfinger back in 1987. I bring them up because it’s that time of year again. Signing day is fast approaching, and more and more college football fans are wired to their favorite recruiting service, obsessing over the ratings that “experts” like Emfinger give their school’s recruits and each team’s recruiting class. It’s modern-day alchemy: a pseudo-science that has turned into a thriving, multimillion-dollar industry. With that kind of money changing hands, you’d think that people would dig a little deeper into how these recruit ratings are developed. But nobody seems to care, or at least nobody cares enough to raise their voice over the hype. So in my isolated internet kingdom/suicide hotline, I’ll try to convince you not to jump off that bridge after some Scout 4-star linebacker commits to another school.
A lot of people put a lot of stock into recruiting rankings. Recruiting aficionados believe that recruiting rankings matter because, for the most part, the teams at the top of them are winning games. But does correlation imply causation? The San Diego Union-Tribune put together an analysis last year of teams and their recruiting rankings, and put their results into a lovely PDF for us all to admire: http://www.signonsandiego.com/uniontrib/20070205/images/bluechip.pdf
Of the top ten recruiting classes in the years leading up to 2006, six finished outside the top 10 in the AP poll. Two teams, Miami and Florida State, missed the top 25 altogether. You can read the paper’s conclusions here.
Even if you disagree with the U-T’s conclusions, think about what you’re really saying here. The recruiting rankings correctly predicted that USC, Michigan, Florida, LSU, etc. would be talented.
Wow. Stop the presses. Is it really much of an accomplishment on Rivals’ part to correctly predict that USC, Michigan, Florida, and LSU would be good? Who couldn’t predict that, even without these rankings? These teams are traditional powerhouses. It’s an anomaly when they aren’t the dominant powers in college football. So we’re left with a chicken-and-egg situation. Are teams good because their recruiting rankings are high, or are their recruiting rankings high because they are traditionally good teams? To answer that, one must look at how each individual player is rated.
The assumption amongst recruiting junkies is that each recruiting service’s team of about 20 experts analyzes each player’s ability and rates them accordingly. That’s pretty hard to believe. Most coaching staffs have a hard enough time evaluating the talent they’re targeting just for their school. The idea that this small team of “experts” can break down the relative ability of the thousands of recruits listed in their database, then rank them accordingly, is just slightly ridiculous. They obviously haven’t seen all of these players in person, and several of these players don’t have any video in their profiles. So how on earth can all these recruits be ranked according to ability? The answer is that they aren’t. Oh, they’re ranked, obviously… but it isn’t according to ability. It’s according to popularity.
It’s a subtle but important distinction. Ratings are not assigned by talent. Ratings are assigned based on which schools are recruiting a particular player. For example, a player with offers from USC, Notre Dame, and Ohio State will be rated higher than players with offers from Akron, North Texas, and Ball State. The clearest proof of this is that it’s fairly common to see a player’s rating change. For Navy fans who follow the recruiting scene, it’s almost an annual joke to see the way a player’s rating changes when he commits to the Naval Academy. A lot of the one-star players magically become two-star, and sometimes three-stars become two-stars, too. The best example of the latter is Bayard Roberts, the 3-star New Mexico LB/DE who became a 2-star after he committed to Navy instead of UTEP. Of course, most recruitniks scoff at Navy football as some insignificant outpost on the I-A recruiting scene. Fortunately, Lisa Horne provided us with a more high-profile example here. But if you follow recruiting, you don’t really need examples. You see ratings change all the time.
Great, but so what? Football coaches are the real experts, right? And if all these guys are recruiting a kid, then shouldn’t that be a good indication of how talented he is? Well, there are a few problems with that assumption.
First, it assumes that the recruiting analysts are actually getting their information from college coaches. But they can’t; NCAA rules prohibit coaches from talking about a recruit until he has either enrolled or signed a Letter of Intent. So where does the information come from? In a lot of cases, it comes from the player himself. Take the (in)famous example of Travis Tolbert. A little bit of self-promotion on a few internet sites, and things start to snowball. Eventually, other sites don’t want to miss the boat, and they start hyping him too. Next thing you know, he’s making Top 100 lists. It all came crashing down once people actually bothered to talk to Tolbert’s coach. But that points to another problem: even high school coaches aren’t a reliable source. Some will be frank about their players’ ability, while others badly want to see their kids get scholarships and will actively promote them. There’s no guarantee you’ll get a straight answer from coaches, either. Of course, Tolbert’s example is clearly an extreme case. But you don’t have to take things all the way to the extreme to work the system.
Another fundamental flaw in recruit ratings is that they don’t take into account the wide variety of schemes you find in college football. Let’s look at Sean Renfree. Renfree, listed as a 4-star quarterback by Scout.com, had committed to Georgia Tech before Chan Gailey was fired. Once Paul Johnson was hired, it was obvious to Renfree that he’d be a fish out of water in the spread option, so he de-committed. PJ then went out and got a commitment from a 3-star quarterback named Jaybo Shaw, whom he had recruited while still at Navy. According to Scout’s rating system, that’s a downgrade. But Shaw was a 1,000-yard rusher in high school. Georgia Tech will clearly be better served with him under center in their new offense. Renfree might be all-world to Scout, but to Paul Johnson he’s useless. It isn’t just quarterbacks, either. Different offensive systems place different priorities on certain skill sets, as do defenses; there is a difference between the ideal 3-4 and 4-3 player. But to recruiting services, one size fits all.
A recent column by Tim Stevens in the Raleigh News & Observer found that the average Scout.com rating of the All-ACC first-team was 2.77 stars. Along those lines, Andrew Carter of the Orlando Sentinel asks how on earth some of Florida State’s players could have been so overrated relative to other players within the ACC. After reading these, I decided to take a look at this year’s AP All-America team and the Rivals ratings of those players. Four first-team All-Americans were rated as 5-stars: Tim Tebow, Darren McFadden, Illinois guard Martin O’Donnell, and Penn State linebacker Dan Connor. There were seven who had a 2-star rating. Seven! It’s one thing to say that some fluctuation is inevitable and that maybe a 2-star guy should have been a 3-star. But to whiff on that many future first-team All-Americans? Come on. And I’m sure that the recruiting rankings of the teams that brought these players in would have received a boost if Rivals knew that the group included All-American-caliber talent.
Here’s another thought: if recruiting rankings were all that some people claimed they were, then there shouldn’t be any surprises in college football. Where was Miami’s slide in the recruiting rankings prior to this season? How about Florida State? Why did Nebraska actually regress under Bill Callahan despite the almost universal applause for his improved recruiting by Nebraska fans? Where were the steady recruiting ranking increases of Missouri and Kansas leading up to this season? Or Boston College and Wake Forest? Or Kentucky? Why haven’t North Carolina and Mississippi State ever lived up to the lofty rankings they’ve received over the years? The reason is that recruiting rankings are reactionary. If the rankings were truly predictive, we should have seen these things coming. Maybe not everything, but it’s disingenuous to ignore all of these misses while hailing recruiting rankings for predicting which teams were going to be good. Every single person reading this blog could probably make correct predictions at the same rate as Rivals and Scout without having any idea of what players each team has recruited.
I’m not saying that all 5-star players are overrated and all 2-star players are underrated. I’m just saying that you need to understand what’s really being evaluated with these star ratings: it isn’t talent. But don’t take my word for it. Jamie Newberg, one of Scout’s top recruiting analysts, said this about last year’s Georgia recruiting class:
Couldn’t have said it better myself.
Recruiting sites serve a purpose. Hell, I read them. It’s fun to get a look at the Mids of the future. These sites are a great source of information. Evaluation? Not so much.