2015-11-09

It seems that nearly every major media publication in the United States these days wants to rank colleges. The latest outlet to get on board? The Economist, which scores higher-education institutions based in part on how much graduates earn. But lots of publications’ rankings look at future earnings and, more generally, ROI—return on investment. The Daily Beast’s “Down & Dirty Guide to the Best Colleges,” for example, uses data on graduates’ salaries to inform its “Best ROI” list (which appears to be far less popular than the guide’s “25 Sexiest Colleges” list). And then there’s Money, whose notably nuanced rankings system recently got a shout-out from the (rankings-less) Washington Post for “[coming] the closest” to “[cracking] the code on answering the ROI question.”

Calculating the ROI is indeed a common objective within the ever-expanding college-rankings world. Evaluating the institutions’ research capacity is a popular goal, too, as is gauging students’ social mobility. Then there are the standard academic factors—graduation rate, average GPA, faculty quality—and, of course, the quality-of-life ones: happiness, for example.

U.S. News & World Report has largely remained the most influential ranker since it launched its guide in the 1980s. But it lost its monopoly on the market once the recession hit, which created a “temporary circumstance” that, as the Brookings Institution’s Jonathan Rothwell put it, “was hitting up against long-term increases in college tuition.” That’s when prospective students started to really question the worth of higher ed—“to be,” Rothwell said, “more circumspect in whether they’re willing to invest their money and time in pursuing an education.” The rankings that were developed during this era—from Forbes’s “Top Colleges” to The Upshot’s “College Access Index” to Rothwell’s own “value-added” list—are in many ways a response to these shifting priorities. They’re also a reflection of new IT capacity that enables access to and comprehensive analysis of troves of institutional data.

Now, each new attempt at grading colleges appears to raise even more doubts about the still-influential U.S. News rankings, a system The Atlantic’s Gillian White recently questioned for its failure to tell prospective students “what they most need to know.” The federal government’s new College Scorecard—which doesn’t rank schools but allows users to filter and sort institutions based on factors including academic program, location, attendance cost, graduation rate, and salary expectations—reinforces that trend. U.S. News “had too much influence that was starting to negatively affect the behavior of colleges and students,” Rothwell said, in part because it grades colleges based on things like alumni giving, faculty pay, selectivity, and reputation. Eventually, colleges started “gaming the system, trying to get more people to apply to their schools even if [those people] had no chance.”

The question is: Will that competition actually help refine the country’s higher-education landscape and improve students’ college outcomes—or will it only add to the hodgepodge of misinformation and political conflict, making the college-exploration process even more confusing for the students who need the most support?

In 2013, The Atlantic’s John Tierney gave readers their “annual reminder” to ignore the U.S. News college rankings. “The list’s real purpose,” he argued, is “to ‘exacerbate the status anxiety’ of prospective students and parents.” But he concluded by acknowledging that few readers would likely heed his warning. And as the former Atlantic editor Eleanor Barkhorn reported a few months later, Tierney was right: She cited a report out of the American Educational Research Association finding that both the U.S. News and Princeton Review lists actually have a huge impact on where students apply to college. Inclusion in U.S. News’s top-25 list (regardless of whether a school sits at No. 1 or No. 25) boosted the number of applications a college received by 6 to 10 percent. The study’s authors attributed the rankings’ influence to their ability to simplify the college-application process at a time when prospective students are overwhelmed with information.

And as much as people love to hate the U.S. News “Best Colleges” list, they probably hated (or would’ve hated) the era that preceded it even more, and for similar reasons. In an article last year about Northeastern University’s notorious gaming of the rankings, Boston magazine explained that in creating a formula to grade colleges, the U.S. News editors “quantified something previously thought to be intangible”:

For generations, colleges and universities had generally relied on a mysterious brew of prestige and reputation. Suddenly, legacies and tradition—qualities that had taken decades, and sometimes centuries, for schools to cultivate—were less important than cold, hard data. Schools that once relied on children of alumni and word of mouth were exposed by their own stats, including graduation and retention rates, admissions data (acceptance rate, average SAT score), academics (class size, number of full-time faculty), and reputation (peer reviews). Needless to say, U.S. News’s college rankings landed on the world of higher education with a thud.

Many of today’s myriad college rankings share certain priorities, but each has its own algorithm for weighting the criteria and crunching the data. In a 2011 New Yorker critique of such lists, Malcolm Gladwell highlighted a challenge faced by U.S. News and all the other organizations that have since sought to grade schools: “There’s no direct way to measure the quality of an institution—how well a college manages to inform, inspire, and challenge its students. So the U.S. News algorithm relies instead on proxies for quality—and the proxies for educational quality turn out to be flimsy at best.”
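To make the proxy problem concrete, here is a minimal, hypothetical sketch of how a composite ranking score is typically assembled: each proxy metric is rescaled to a common range and then combined with weights the publication chooses. The metric names, weights, and figures below are invented for illustration and don’t correspond to any publication’s actual formula.

```python
# Hypothetical sketch of a weighted composite ranking score.
# Metrics, weights, and figures are invented for illustration only.

PROXY_WEIGHTS = {
    "graduation_rate": 0.30,    # share of students graduating within six years
    "peer_reputation": 0.25,    # survey score on a 1-5 scale
    "selectivity": 0.20,        # 1 minus the acceptance rate
    "faculty_resources": 0.15,  # e.g., share of small classes
    "alumni_giving": 0.10,      # share of alumni who donate
}

BOUNDS = {  # plausible min/max used to rescale each raw metric to 0-1
    "graduation_rate": (0.0, 1.0),
    "peer_reputation": (1.0, 5.0),
    "selectivity": (0.0, 1.0),
    "faculty_resources": (0.0, 1.0),
    "alumni_giving": (0.0, 1.0),
}

def normalize(value, lo, hi):
    """Rescale a raw metric to 0-1 so metrics with different units are comparable."""
    return (value - lo) / (hi - lo)

def composite_score(school):
    """Weighted sum of normalized proxies; higher means 'better' by this formula."""
    return sum(
        weight * normalize(school[name], *BOUNDS[name])
        for name, weight in PROXY_WEIGHTS.items()
    )

# Two made-up schools show that the ordering is driven by the chosen weights,
# not by any direct measure of how well each school teaches its students.
schools = {
    "College A": {"graduation_rate": 0.92, "peer_reputation": 4.1,
                  "selectivity": 0.85, "faculty_resources": 0.60, "alumni_giving": 0.35},
    "College B": {"graduation_rate": 0.78, "peer_reputation": 3.4,
                  "selectivity": 0.40, "faculty_resources": 0.75, "alumni_giving": 0.10},
}
ranked = sorted(schools, key=lambda name: composite_score(schools[name]), reverse=True)
print(ranked)
```

Rerun the same sketch with a different set of weights and the order can flip; the ranking reflects the weighting choices at least as much as anything directly observed about teaching and learning.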

What’s more, each system is, as Gladwell suggested, subject to a publication’s biases. Both the Washington Monthly’s and The Upshot’s lists, for example, focus on social impact and on colleges’ efforts to expand access for low-income students, assessing institutions in part by how many Pell Grant recipients they enroll. But the latter limits its list to colleges where at least 75 percent of students graduate within five years, which effectively favors private colleges. And while the former does fold institutions’ graduation rates into its algorithm, it doesn’t require a minimum graduation rate for inclusion, so it’s hard to say how the Pell Grant recipients at its top colleges are actually faring academically. “We’d like to know how many of these Pell Grant recipients graduate, but schools aren’t required to report those figures,” the magazine explained in an article accompanying its 2015 guide.
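The methodological split between the two lists, a hard eligibility cutoff versus a weighted factor, can be sketched in a few lines. The threshold, weights, and field names below are hypothetical and only loosely mirror the criteria described above.

```python
# Hypothetical contrast between two ways a ranking can treat graduation rates:
# an eligibility cutoff (schools below a threshold are dropped) versus a
# weighted factor (every school is ranked). Field names and weights are invented.

THRESHOLD = 0.75  # e.g., a required five-year graduation rate

def rank_with_cutoff(schools):
    """Only schools above the threshold are eligible; rank them by Pell share."""
    eligible = [s for s in schools if s["grad_rate"] >= THRESHOLD]
    return sorted(eligible, key=lambda s: s["pell_share"], reverse=True)

def rank_with_weight(schools, w_grad=0.5, w_pell=0.5):
    """Every school is ranked; the graduation rate is just one weighted ingredient."""
    return sorted(
        schools,
        key=lambda s: w_grad * s["grad_rate"] + w_pell * s["pell_share"],
        reverse=True,
    )
```

A school with open admissions and a large Pell population can sit near the top of the second list while never appearing on the first, which is roughly the trade-off the two publications have resolved in opposite directions.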

In fact, the only thing they all seem to have in common is that they’re imperfect. Every ranking has been criticized for failing to demonstrate exactly how well schools are serving students and fulfilling their missions. And the more recently developed rankings, too, have been called out for undermining the quality of higher education by incentivizing colleges to overemphasize certain priorities. That can mean encouraging underqualified candidates to apply so that institutions look more selective, or spending extra money on fancy facilities and other amenities, often at the expense of tuition-paying students. “The college rankings, whether they’re U.S. News, The Princeton Review, or others, cause institutions to behave badly in my estimation,” the Center for American Progress’s David Bergeron told CQ Researcher earlier this year.

Shortcomings in the available data mean that even rankings that strive to counter that trend, such as Rothwell’s value-added list and the federal government’s Scorecard, are flawed. That’s because U.S. Department of Education data, the same data that informs the media organizations’ rankings, is based on “a very traditional model” of what it means to be a student, said Doug Falk, the chief information officer at the National Student Clearinghouse, which compiles and manages vast amounts of granular, confidential college data. Those “traditional” students are the ones who start at and graduate from the same institution, Falk said, even though more than half of today’s students either attend a school other than the one where they started or are in the process of moving from one college to another.
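To see why that matters, consider a minimal, hypothetical illustration of a graduation rate computed only over a “first-time, full-time” cohort: students who transfer in, or who leave and finish elsewhere, simply fall out of the calculation. The records below are invented.

```python
# Hypothetical illustration: a graduation rate computed over the
# "first-time, full-time" cohort ignores transfer students entirely.
# All records are invented.

students = [
    {"id": 1, "first_time_full_time": True,  "graduated_here": True},
    {"id": 2, "first_time_full_time": True,  "graduated_here": False},  # dropped out
    {"id": 3, "first_time_full_time": False, "graduated_here": True},   # transferred in, finished
    {"id": 4, "first_time_full_time": True,  "graduated_here": False},  # left and finished elsewhere
]

# Only students 1, 2, and 4 are in the measured cohort; student 3 is never counted,
# and student 4's eventual degree at another school is never credited.
cohort = [s for s in students if s["first_time_full_time"]]
rate = sum(s["graduated_here"] for s in cohort) / len(cohort)
print(f"Reported graduation rate: {rate:.0%}")  # 33%
```

In this toy example, three of the four students ultimately earn degrees, yet the reported rate is 33 percent, which is roughly the distortion Falk describes.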

“Institutions are increasingly serving those students, but they’re getting left out of the rankings,” he said, adding that rankings can be deceiving in that they “create the illusion that all a student has to do is go to a certain institution and they’ll produce the same outcomes” as the ones highlighted in the guides. “The data are of such poor quality, that [ranking colleges] is completely misleading.”

Rothwell echoed Falk’s concerns, noting that one problem “with even the best rankings is that in the end you’re limited to what data is available and the data is somewhat sparse.” Earnings “are certainly of relevance to students; they’re definitely of relevance to taxpayers and policymakers. They’re important to health and happiness and everything else. But there’s more to life than earnings. The primary purpose of schools is to find opportunities of learning.” And there’s no measure for that—at least not yet.

This might suggest that any new guide is only contributing to what Gladwell might describe as the “flimsiness” of the college-rankings world, and that the proliferation could make a prospective college student all the more frazzled, all the more inclined to fall back on the comfortable simplicity of the U.S. News “Best Colleges” guide. But others still argue that each new version can offer fresh insight into what ingredients an ideal college-ranking system needs. “I do think one could go too far here—you can imagine if there are another 100 rankings, it could be truly overwhelming,” Rothwell said, offering his prediction of where things are headed. “You’d think there’d be some movement or convergence toward the ones that are providing the best information.”
