2015-02-13

Both US News & World Report and Washington Monthly rank schools, but their ranking methods have very different effects on who goes to which college.

The Department of Education is creating its own college ranking system based on access, affordability and performance. How will this college ranking newcomer affect two vintage ranking systems - US News & World Report and Washington Monthly?

How the College Ranking Game Is Played

U.S. News and World Report (USNWR) ranks many things - hospitals, health, money, careers, travel, and cars - but is best known for its college rankings. Indeed, since USNWR created its college rankings in 1983, they have heavily influenced both who applies to which schools and how colleges behave.

For example, USNWR's rankings have created financial incentives for colleges to offer "merit" scholarships to students who can actually afford to go to college. The goal is to persuade students whose grade point averages and test scores will raise a college's rank to accept the college's offer.

These merit scholarships have many pernicious effects. Giving scholarships to students who can already afford a college leaves less money for need-based scholarships, and that gap is met by raising tuition. As this cycle continues, the cost of college increases.

The Washington Monthly's Alternative Ranking System

In 2001, the Washington Monthly (WM) began responding to USNWR's college rankings by exposing USNWR's pernicious effects on college education. WM has repeatedly pointed out that USNWR's rankings fail to assess the quality of the education offered by the colleges it ranks.

The public should respond to these failures by refusing to play a game that benefits only USNWR. Indeed, given that both private and public colleges depend on public money for funding, the public deserves to know how that money is being spent.

As WM observed:

there are no numbers, no studies, no objective measurements. In a Washington Monthly survey of 50 randomly selected research-university Web sites, only 12 percent clearly posted the six-year graduation rate, the most basic statistical measure of effectiveness. Even fewer offered information about student satisfaction with teaching.

Without solid information on what they will learn, students must make choices based on geography, particular programs, or reputation. . . .

In 2001, in addition to WM's initial essay criticizing USNWR's rankings, WM created lists of colleges that illustrated the article's criticisms. The lists included the "Top 20" and "Worst 20," "How U.S. News's Top Universities Fared," "How U.S. News's Top Liberal Arts Schools Fared," colleges' "Peace Corps Enrollment," and "ROTC Enrollment."

By 2009, Washington Monthly's alternative ranking system was in place, though it has continued to evolve. The basic components of WM's college ranking system are evidence of:

(1) community service, based on alumni currently serving in the Peace Corps or Reserve Officer Training Corps, relative to the school's size, and on the amount of federal work-study money spent on community service projects;

(2) research, based on its quality and quantity; and

(3) promotion of social mobility, as shown by the school's commitment to educating lower-income students, based on the percentage receiving Pell Grants.

How Does USNWR Differ from WM?

USNWR's college rankings are elitist and backward-looking. Its focus is on what applicants bring with them from high school - for example, grade point averages and scores on tests such as the ACT or SAT. That sort of ranking method takes students as they are and does little to promote a more just and equitable society.

In contrast, WM ranks colleges based on evidence of students' future promise and on colleges' support for students who show promise. Put another way, the WM rankings essentially say, "If this sort of student is accepted and is given appropriate support, they will be an asset to our society."

Or, more concretely,

Imagine, then, what would happen if thousands of schools were suddenly motivated to try to boost their scores on The Washington Monthly College Rankings. They'd start enrolling greater numbers of low-income students and putting great effort into ensuring that these students graduate. They'd encourage more of their students to join the Peace Corps or the military. They'd intensify their focus on producing more Ph.D. graduates in science and engineering. And as a result, we all would benefit from a wealthier, freer, more vibrant, and democratic country.

Going to College?

A year ago, the US Department of Education announced that it would create its own college rating system. The question is whether that rating system will follow the USNWR model or Washington Monthly's.

Meanwhile, the Department of Education has initiated a public discussion on revamping the college admissions process. The Department of Education has set as its goals for higher education: access, affordability, and performance.

Washington Monthly readers will find much that is familiar in the Education Department's higher education goals, which appear consistent with the standards long used in WM's annual college guides and rankings.

Unlike US News and World Report and similar guides, this one asks not what colleges can do for you, but what colleges are doing for the country. Are they educating low-income students, or just catering to the affluent? Are they improving the quality of their teaching, or ducking accountability for it? Are they trying to become more productive - and if so, why is average tuition rising faster than health care costs? Every year we lavish billions of tax dollars and other public benefits on institutions of higher learning. This guide asks: Are we getting the most for our money?

For more than a decade now, the Washington Monthly has published its own college rankings while pointing out obvious problems with USNWR's methodology:

Unfortunately, the highly influential US News & World Report annual guide to "America's Best Colleges" pays scant attention to measures of learning or good educational practices, even as it neatly ranks colleges in long lists of the sort that Americans love. It could be a major part of the solution; instead, it's a problem.

US News' rankings primarily register a school's wealth, reputation, and the achievement of the high-school students it admits.

USNWR has long been aware that it uses a deeply flawed system for assessing colleges' educational quality. In 1997, USNWR commissioned a study of its methodology by an outside organization, the National Opinion Research Center. The NORC study found, "The principal weakness of the current approach is that the weights used to combine the various measures into an overall rating lack any defensible empirical or theoretical basis."
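To see why arbitrary weights are a real problem and not a technicality, here is a minimal sketch in Python - the schools, measures, and weights are all invented for illustration - of how two equally plausible weightings of the same data produce different orderings:

```python
# Minimal sketch: the choice of weights, not the underlying data,
# can determine a "ranking." All schools and scores are hypothetical.

schools = {
    # (reputation, selectivity, graduation_rate), each scaled 0-100
    "Alpha College":   (90, 95, 80),
    "Beta University": (85, 80, 95),
    "Gamma Institute": (95, 85, 85),
}

def rank(weights):
    """Rank schools by a weighted sum of their measures."""
    scores = {
        name: sum(w * m for w, m in zip(weights, measures))
        for name, measures in schools.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Two weightings, each as "defensible" as the other.
print(rank((0.5, 0.3, 0.2)))  # reputation-heavy: Gamma, Alpha, Beta
print(rank((0.2, 0.3, 0.5)))  # outcome-heavy:    Beta, Gamma, Alpha
```

Under the reputation-heavy weights Gamma comes out on top; under the outcome-heavy weights Beta does. Nothing about the schools changed - only the weights, which is precisely NORC's point.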

In contrast, the Washington Monthly based its college ranking system on goals that are likely to provide benefits beyond the students who receive that education:

Access, such as the percentage of students receiving Pell grants;

Affordability, such as average tuition, scholarships, and loan debt; and

Outcomes, such as graduation and transfer rates, graduate earnings, and advanced degrees of college graduates.

Comparing Washington Monthly's college metrics with the U.S. Education Department's shows considerable overlap. Both advocate using the design of the college ratings system to strengthen the performance of colleges and universities in promoting access, ensuring affordability, and improving student outcomes.

Both aim to:

(1) help colleges and universities measure, benchmark, and continue to improve across the shared principles of access, affordability, and outcomes;

(2) help students and families make informed choices about searching for and selecting a college; and

(3) enable the incentives and accountability structure in the federal student aid program to be properly aligned to these key principles.

(More information on the Education Department's development of its rating system can be found on its Homeroom blog.)

Who Owns a College Degree?

Who benefits from a college education and training? Obviously, the people who get the training and their families do, but it doesn't stop there.

We need to think about who should pay for a college education and why. After all, a college education is among the largest expenses any of us will ever pay for, yet its benefits are not confined to the student. We all benefit from having well-educated doctors, construction workers, teachers, and more. And the number of professions for which a college education is helpful is increasing: these days, for example, apprentice union construction workers receive a large part of their training in college classes.

As I wrote in the Taunton Daily Gazette:

In fact, the training required for the construction trades can equal or exceed the years of education required for a bachelor's degree. The lowest amount of training is two years for laborers, while the training for electricians and some other trades is 5 years or more. Electricians' education includes 900 hours of classroom work and 8000 hours of hands-on training. Just to enter the program requires completion of high school work including Algebra and passing an entry test. Even more math is required along the way. A carpenter's apprenticeship requires 4 years of training with 640 hours of classroom instruction and 8000 hours of on-the-job training. Many construction workers earn at least an associate's degree during their training.
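To put those figures in perspective, here is a back-of-the-envelope comparison - the electrician hours come from the passage above, while the bachelor's-degree workload is a rough assumption of mine, not a figure from the article:

```python
# Back-of-the-envelope comparison of training hours.
# Electrician figures are from the quoted passage; the bachelor's
# workload below is a rough assumption, not a figure from the article.

electrician_hours = 900 + 8000       # classroom + hands-on training

# Assumed full-time bachelor's degree: 4 years x 30 weeks x 40 hours/week
bachelors_hours = 4 * 30 * 40        # = 4,800 hours

print(electrician_hours, bachelors_hours)  # 8900 vs. 4800
```

Even under generous assumptions about the degree, the apprenticeship demands more hours.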

The USNWR rankings tell a story of elitism and individualism, but it is a flawed story that ignores the many ways we collectively benefit from having an educated citizenry. The USNWR version overlooks the benefits of education that flow through our communities. Since so much of the cost of post-secondary education is borne by the public, the value of the public's investment deserves to be acknowledged.

The Beneficiaries of USNWR Rankings

Who benefits from the USNWR rankings? We don't have all the data to make that judgment, but it is fair to say that USNWR itself - and its ever-growing family of rankings - is the major beneficiary, not the public.

US News and World Report is a multi-platform publisher of . . . annual guidebooks on Best Colleges, Best Graduate Schools, and Best Hospitals. Focusing on Health, Money, Education, Travel, Cars, and Public Service/Opinion, US News has earned a reputation as the leading provider of service news and information that improves the quality of life of its readers. US News and World Report's signature franchises include its News You Can Use brand of journalism and its Best series of consumer guides that include rankings of colleges, graduate schools, hospitals, mutual funds, health plans, and more.

For all its experience with rankings, USNWR's history with college rankings has included troubling behavior. One example: the publication has asked school officials to rate competitor schools. First, those ratings serve no useful purpose, given that schools are regularly inspected in order to retain their accreditation. Second, the ratings are not only highly subjective; they create a temptation for schools to lower a competitor's rank. That is, "survey respondents may rate down some schools in order to make their own school look better and schools may try to raise their score on the 'rejection rate' factor by encouraging applications from students who have virtually no chance of being admitted."




What Education Is Needed for a Democracy?

Students from families who have not gone to college are likely to need special guidance to understand how to get the most from a college education. On August 22, 2013, the White House presented a summary of education issues to be addressed. Among them was creating a college rating system that would help families compare schools and help taxpayers judge whether federal investments in financial aid and educational grants are worthwhile.

While certainly helpful to college applicants, changing to such a system would, as Senator Elizabeth Warren might put it, leave blood and teeth on the floor. The question is whose blood and teeth.

Publishing school rankings has been a big moneymaker for USNWR. On the other hand, USNWR has been willing to change its criteria over the years, and some of those changes have been motivated by Washington Monthly's criticism that USNWR's college rankings focus on the wrong things.

WM's ranking system advocates assessing the quality of education based on outputs - that is, its rankings rest on evidence of what current students achieve as a result of attending a specific school. Among the outputs WM's system assesses are the number of students receiving need-based Pell grants, actual graduation rates, research expenditures, and Peace Corps and ROTC participation. Each of these factors promotes outcomes that benefit students and their communities.

In contrast, USNWR's ratings use inputs to rank colleges, such as high school grade point averages and admissions test scores. Those factors predict students' grades, but measuring them does nothing to show how a college expands students' knowledge and skills.

After more than a decade of Washington Monthly's criticizing USNWR for ranking by inputs, USNWR announced on December 24, 2014, that it was including Pell grants as a factor in its school rankings. The federal Pell Grant Program provides need-based grants to low-income students to promote access to post-secondary education.

USNWR's announcement stated:

Currently, the US News Best Colleges rankings methodology incorporates outcome measures such as graduation and retention rates. We also use graduation rate performance, which measures the difference between each school's predicted graduation rate - based on characteristics of the incoming class closely linked to college completion, such as test scores and Pell Grants - and its actual graduation rate. These three outcome factors, in total, count for 30 percent of the rankings and are the most heavily weighted indicators in our methodology.
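In that passage, "graduation rate performance" is, at bottom, the gap between a school's predicted and actual graduation rates. Here is a rough sketch of the idea in Python; the linear model and every number in it are invented, and USNWR's actual model uses more inputs and different coefficients:

```python
# Rough sketch of "graduation rate performance": actual minus predicted
# graduation rate. The linear model and all numbers here are invented;
# USNWR's actual model uses more inputs and different coefficients.

def predicted_grad_rate(test_percentile, pell_share):
    """Toy model: stronger incoming classes predict higher graduation
    rates; a larger share of Pell Grant recipients predicts lower ones."""
    return 30 + 0.6 * test_percentile - 25 * pell_share

# Hypothetical school: incoming class at the 75th test percentile,
# 40% Pell Grant recipients, 72% actual graduation rate.
predicted = predicted_grad_rate(75, 0.40)  # 30 + 45 - 10 = 65 (percent)
actual = 72
print(f"performance: {actual - predicted:+.0f} points")  # +7: outperforms
```

A positive gap rewards schools that graduate more students than their incoming class would predict - which is why including Pell shares in the prediction matters.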

Waiting for the Education Department's College Rating System

If imitation is the sincerest form of flattery, the Washington Monthly must be feeling flattered. The Education Department's A New System of College Ratings - Invitation to Comment outlines the characteristics it wants in a college ranking system. In particular, it wants a system that promotes students' ability to take advantage of educational opportunities on a more equal footing:

The college ratings system has multiple related purposes. A critical purpose of the ratings system is to recognize institutions that are succeeding at expanding access, maintaining affordability, and ensuring strong student outcomes and setting them apart from institutions that need to improve. By shedding light on key measures, the ratings system will support greater accountability and incentivize schools to make greater progress in these areas of shared priorities, especially at serving and graduating low income and first generation students and holding down the cost of college.

USNWR has dominated the college application process for decades. It will be interesting to see how the government's new initiative affects students' decision-making process and how colleges do or do not revise their application processes.

Is a New Information System for College Applicants Enough?

Whatever ranking system emerges from the Education Department's efforts, can it overcome the effects of deep inequality in our schools? It will certainly take time to repair the damage that the deeply corrupt US News & World Report rankings have done to our education system.

Meanwhile, there are other educational paths that lead to good jobs, without debt. Those opportunities will be examined in a follow-up story.

[Ellen Dannin writes in the areas of labor and employment, privatization, law, and education.]

======


The following interview was made available to Portside by Ellen Dannin and Richard Lempert.

Interview with Richard Lempert, Eric Stein Distinguished University Professor of Law & Sociology emeritus, University of Michigan. Lempert was an early student and critic of the US News and World Report (USNWR) rankings and discussed them in historical perspective.

The US News and World Report college rankings are fundamentally flawed. The biggest flaw is the assumption that schools can be ranked on a single dimension. Not only does USNWR make too much of small differences among the scores of different institutions, but the biases built into each score's components and the gaming that is possible mean that most of the measures used to rate schools are themselves suspect.

If USNWR were serious in its proffered desire to help people, rather than being concerned with selling a product, it would follow the lead of organizations like Consumer Reports and make it clear that schools with scores within defined ranges are, from an applicant's perspective, indistinguishable in terms of quality.

One year, in ranking law schools, they made a small error in one not terribly important element. If I recall correctly, when they published the "correct" ranking, 30 or more schools found their ranks had changed.

In other years, rankings were shuffled when nothing was different about where the schools stood on the various ranking measures. What had changed was how USNWR decided to weight the different elements.

The most striking example is that, when law schools were originally ranked, Harvard was rated as the nation's fifth best law school. Since Harvard was generally reputed to be the nation's top or second best law school, the validity of the ranking was called into question. The next year changes were made in the weights given the different ranking elements. Harvard was then number 2 behind Yale, and the rankings appeared credible.

This is, perhaps, the best illustration of how subjective and prone to manipulation these so-called objective rankings are.

One reason the USNWR rankings have gained such wide acceptance is that they seem reasonable. Thus, when one sees Yale, Harvard, Stanford, Columbia, NYU, Chicago, Michigan, Virginia, and Pennsylvania ranked in that order, the picture seems right.

But what people don't realize is that, had the rankings been Harvard, Yale, Chicago, Stanford, Columbia, Pennsylvania, Michigan, NYU, Virginia, that ordering would also have seemed right.

Indeed, when the rankings began, a ranking of Harvard, Columbia, Chicago, Yale, Michigan, Stanford, etc. would have seemed right.

If, today, it seems that Chicago is ranked too high and Stanford and Yale too low, it is because the rankings have become self-fulfilling prophecies - both in the sense that people are used to a certain order, so marked changes wouldn't seem right, and in the sense that law school applicants followed the rankings in choosing among schools, meaning that, once Yale was ranked number 1, the chance that a student with a very high LSAT score would choose it over, say, Columbia increased.

Thus, biases built into the original rankings have persisted, even though in the early rankings a number of measures were normalized on a per capita basis with no scientific justification.

My favorite USNWR measurement was library resources. It was measured by books per law student - as if the school would divvy up its library collection among students. This gave Yale, Chicago and Stanford - all small schools - an edge over Harvard, Columbia and Michigan - all large schools - even though the latter had richer library collections and were more likely to have in their own collections books that students and faculty needed.
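The distortion is easy to reproduce with invented numbers: divide collection size by enrollment, and the smaller school "wins" despite holding far fewer books:

```python
# Invented numbers illustrating the books-per-student distortion:
# the school with the smaller collection scores higher per capita.

small_school_books, small_school_students = 400_000, 600
large_school_books, large_school_students = 1_000_000, 1_700

print(small_school_books / small_school_students)  # ~667 books per student
print(large_school_books / large_school_students)  # ~588 books per student
```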

None of this would have caused serious problems, except that the rankings appeared in a widely read news magazine and were presented as if they were newsworthy facts. This, in turn, led many law school applicants to sort their law school choices by the rankings even when differences among schools were basically non-existent and completely inconsequential in terms of the education received, career options they allowed, and the like.

The combination has hurt law schools, as many of them distorted their own priorities to cater to the U.S. News rankings.

For example, consider the plethora of "merit" scholarships we now have. They were created to attract people with higher LSAT scores *so the school would get a higher USNWR ranking.* That money could have been better used to support students who couldn't afford to attend without more financial aid, or who, if they did attend, had to go far deeper into debt than would have been the case had scholarship money been reserved for those with the most need.

Indeed, it appears that almost all law students, even those at the top, may have suffered from the rankings.

I had several meetings and lots of back and forth with the man who ran the college ranking project and still does - I think his name is Bob Morse - and with Ted Gest, the reporter who did the law school stories, whom I later got to know well because we are Oberlin grads of the same era.

They were always sincerely interested in improving their product and adopted a number of ideas I and others gave them, most notably using standard scores, rather than their early method of percentile rankings, to place schools relative to each other.
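The switch matters because percentile ranks discard how far apart schools really are, while standard scores (z-scores) preserve those distances. A small illustration, with hypothetical scores on a single ranking measure:

```python
# Hypothetical scores on a single ranking measure for four schools.
# Rank order alone hides that A, B, and C are nearly tied;
# standard scores (z-scores) preserve the actual distances.
from statistics import mean, stdev

scores = {"A": 91.0, "B": 90.5, "C": 90.0, "D": 70.0}

# Rank order: the A-B gap looks the same as the C-D gap.
print(sorted(scores, key=scores.get, reverse=True))  # ['A', 'B', 'C', 'D']

# Standard scores: A, B, and C cluster together; D is the real outlier.
mu, sigma = mean(scores.values()), stdev(scores.values())
print({name: round((v - mu) / sigma, 2) for name, v in scores.items()})
# {'A': 0.55, 'B': 0.5, 'C': 0.45, 'D': -1.5}
```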

If you talk to law school deans who met with them at the time, I think you will also hear that they were receptive to suggestions for improvement. They were also aware that the law school deans they talked to were all lobbying for changes that would favor their own schools.

The heart of the story is that, during the first decade of the rankings, every measure was for many reasons fundamentally flawed, and I believe the entire enterprise still is.

While improvements could be and were made in the early years to make the project as scientifically sound as possible within the limitations of the budget, the rankings could not be made scientifically sound, and to present them as news, or even as opinion, in a news magazine was a journalistic scandal driven only by a desire to make money.

You might analogize the project to a good-faith effort to make mining uranium safe for miners while adhering to a budget limitation that made this impossible. The operation might get safer over the years, but even a safer system would still cause thousands of miners to develop various cancers and suffer early deaths so that the owners could make a profit.

There was, however, one early action that justly fuels cynicism about the effort. In the first survey or two a number of law schools refused to provide data on several of the U.S. News variables.

Rather than omit these schools from the survey, USNWR, with nothing to go on so far as I know, "estimated" the schools' scores on these variables.

Somehow, most of these estimates were low, sometimes far lower than either reasonable or true values. That meant those schools were ranked far lower than they would have been had they cooperated. Needless to say, almost all schools soon began to provide the information requested, although it later turned out that some lied about what they provided. Indeed, once mean LSAT scores became available from the ABA, USNWR itself called out 20 or more schools for inflating their reported LSAT means. They also told me years ago that they had no good way to verify reported figures about the proportion of students holding jobs after graduation.

So, you see, one does not have to be cynical about the desire of Morse et al. to offer the best product they can develop to realize that, regardless of desires and some effort, the quality of the rankings is unacceptably low, and the whole project is profit driven.

Portside would like to thank both Ellen Dannin and Richard Lempert for sharing the article and interview with Portside.
