Unlike previous years, after all the conference tournaments ended on Selection Sunday, there was still one major unknown: how the committee would make use of the new NET rankings. The shiny new tool was ballyhooed as a fix for the reviled RPI and replaced it not only as the committee’s primary ranking system, but as the metric by which the all-important “quadrants” were determined. Interestingly, the final conference championship games don’t appear to have had any effect on the NET rankings (although this could be the NCAA saving itself a headache by not releasing the final rankings and instead only showing us what was available on Sunday morning). But in the end, it seems the NET was used much like the RPI: simply as a general way to categorize teams, and not as a way to directly compare teams from dissimilar conferences and schedules, as many had hoped.
Indeed, if you take a “macro” perspective, the NET rankings correlate quite well with the final seedings. Of the Top 16 NCAA Tournament seeds, all but Kansas and Kansas State found themselves in the Top 16 of the NET. Mid-major teams like Wofford and Utah State, whose NET rankings were higher than where they would have fallen in the RPI, seemed to get slight seeding bumps over similar teams of the past. And teams in the American Athletic Conference (which for my money may have been the hardest to seed, given that conference’s odd position as not quite top-tier but better than the mid-majors) were seeded nearly in lockstep with their NET rankings.
However, there were two key places where the committee diverged significantly from the NET, and they come at the more “micro” level: the final at-large teams and the ordering of the top seeds. The last four teams into the tournament, Belmont, Temple, Arizona State, and St. John’s, had lower NET rankings than major conference teams like Texas, Clemson, and NC State that were not invited to the big dance this year. Whether that’s a positive or a negative development likely depends on your perspective, but it’s understandable how teams with high NET rankings might be miffed that this new, hyped system (one they potentially aimed for in their non-conference scheduling) ended up not influencing the committee as much as expected. It seems the committee took overall resume, including non-conference strength of schedule, into account in making these decisions.
There were also some differences between the NET rankings and the relative ordering of the 1 and 2 seeds. Duke still ended up third in the final NET rankings (behind Virginia and Gonzaga), but earned the No. 1 overall seed. Meanwhile, UNC found itself ranked behind both of the top SEC teams, Tennessee and Kentucky, in the NET but leapfrogged them for a No. 1 seed. Once again, it seems the committee weighed a more holistic view of these teams’ resumes over their NET rankings, with the possible exception of the decision to place Gonzaga on the top line.
Those hoping that the NET rankings would provide more clarity into the seeding and selection process will likely find themselves disappointed by these developments, because it’s clear that the nebulous concepts of the “eye-test” and one’s “resume” still played a major role in determining the bracket. However, few can argue that the NET wasn’t an improvement over the RPI (in the old system, Kansas would still be ranked No. 2, and Washington would be No. 22, for example), so its use as a more generic tool to dictate the “tiers” that various teams fell into (much like the RPI was used) is a step forward. The size of that step, though, likely depends on whether your favorite team found itself in the Big Dance or stuck in the NIT.
A non-NET note on the 2-seeds: the biggest complaint and cause for confusion in the bracket (not only on DBR, but amongst the talking heads on ESPN as well) seemed to be the placement of Michigan and Michigan State. Despite completing a 3-0 sweep against the Wolverines on Sunday and being given a higher slot on the s-curve, Michigan State found itself with Duke in the East, while Michigan was shipped out West to face Gonzaga. However, those following the (admittedly sparse) information that the committee provides to the public on how it builds the bracket should not have been surprised at all.
When it comes to the 2-seeds, the committee appears to have only two hard-and-fast rules: a 2-seed cannot come from the same conference as the 1-seed in a given bracket, and the best 2-seed cannot be placed with the top overall seed. Beyond those restrictions, the committee has two competing dictates: to “balance the brackets” competitively and to give teams higher on a given seed line “location preference,” with the second seemingly weighted more strongly than the first. That means that, as the second No. 2 seed, Michigan State was sent to the bracket whose regional final was located closest to East Lansing. With the South taken by Tennessee (the top 2-seed), the next closest regional was the East, which is where the Spartans ended up. Michigan, as the lowest-rated 2-seed, got the “disadvantage” of being sent out West to the Anaheim regional, even though that region had the lowest-rated 1-seed. As the committee likes to emphasize ad nauseam, it does not place teams solely based on an S-curve.
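The rules described above amount to a simple greedy assignment: work down the 2-seed line in s-curve order, filter out regions that violate the two hard constraints, then pick the closest remaining regional site. Here is a minimal sketch of that logic; the distances are rough illustrative guesses, not official NCAA inputs, and the NCAA has never published its process as code.

```python
# Illustrative sketch of the committee's stated 2-seed placement rules.
# Regions keyed by site, each with its 1-seed's conference; distances
# (miles from campus to regional site) are rough assumptions.
regions = {
    "South (Louisville)":    {"one_seed": "Virginia",       "conf": "ACC"},
    "East (Washington)":     {"one_seed": "Duke",           "conf": "ACC"},
    "Midwest (Kansas City)": {"one_seed": "North Carolina", "conf": "ACC"},
    "West (Anaheim)":        {"one_seed": "Gonzaga",        "conf": "WCC"},
}
top_overall_region = "East (Washington)"  # Duke was the No. 1 overall seed

# 2-seeds in s-curve order (best first): (team, conference, distances).
two_seeds = [
    ("Tennessee", "SEC", {"South (Louisville)": 250, "East (Washington)": 480,
                          "Midwest (Kansas City)": 700, "West (Anaheim)": 2100}),
    ("Michigan State", "Big Ten", {"South (Louisville)": 350, "East (Washington)": 600,
                                   "Midwest (Kansas City)": 750, "West (Anaheim)": 2200}),
    ("Kentucky", "SEC", {"South (Louisville)": 80, "East (Washington)": 530,
                         "Midwest (Kansas City)": 660, "West (Anaheim)": 2050}),
    ("Michigan", "Big Ten", {"South (Louisville)": 360, "East (Washington)": 520,
                             "Midwest (Kansas City)": 780, "West (Anaheim)": 2250}),
]

placement = {}
taken = set()
for i, (team, conf, dist) in enumerate(two_seeds):
    # Hard rules: region not taken, different conference than its 1-seed,
    # and the top 2-seed may not join the top overall seed's region.
    options = [r for r in regions
               if r not in taken
               and regions[r]["conf"] != conf
               and not (i == 0 and r == top_overall_region)]
    # Location preference: choose the closest remaining regional site.
    choice = min(options, key=lambda r: dist[r])
    placement[team] = choice
    taken.add(choice)

for team, region in placement.items():
    print(team, "->", region)
```

Run on these assumed distances, the greedy pass reproduces the actual 2019 bracket: Tennessee in the South, Michigan State in the East, Kentucky in the Midwest, and Michigan out West, with Michigan simply taking the last region left.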
Is this a fair and equitable way to place the teams? I think most reasonable people would argue not: despite losing Sunday, most agree that Michigan has a much more favorable road to the Final Four than Michigan State (it’s also worth mentioning that Michigan found itself in the West last season, with a sizeable home-court advantage on its road to the Final Four thanks to a large alumni community spread all around the country). It wouldn’t surprise me if some changes were made to this process next year based on the near-universal outcry this decision has caused. But the NCAA most certainly followed its own stated protocol in making this decision.