Grading the Rankings of Colleges and Universities

Summit at George Washington University explores rankings industry and how institutions react to the listings.

November 7, 2014

In education, rankings are a perennial favorite of magazines like U.S. News & World Report, whose yearly listings have become synonymous with popular wisdom, as well as something of an obsession for both analysts and institutions of higher learning.

At the George Washington University on Wednesday, rankings themselves were under the microscope. The summit, “An Inquiry into Rankings in Education: Current Landscape and Prospects for the Future,” was a collaboration between GW’s Graduate School of Education and Human Development and the Center for the Study of Testing, Evaluation and Education at the Lynch School of Education, Boston College.

In anticipation of the U.S. Department of Education’s upcoming rating system for colleges, participants from a variety of disciplines and institutions gathered to discuss the impact that the rankings industry has had, and will continue to have, on education policy and institutional behavior.

“One thing we hope to do over the next couple of days is to move toward a research agenda for the field [of rankings],” said GSEHD Dean Michael Feuer in opening remarks.

Henry Braun, Boisi Professor of Education and Public Policy in the Lynch School, who also gave remarks, agreed that higher education researchers and administrators should get involved in the process of college and university rankings.

“The rankings industry is with us to stay, and it’s best to constructively engage with that as researchers and educators rather than oppose it,” he said. “We hope that through this meeting we’ll be able to…lead towards a rankings landscape that is more appropriate and relevant to both educators and consumers.”

During a day of panels, presenters spoke on subjects from “Ranking in Games/Sports: Viewing Educational Institutions as Tournament Participants” to “Finding Best Fit Colleges Using Rankings, Ratings and Student Opinion.”

Michael McPherson, president of the Spencer Foundation and a former president of Macalester College in St. Paul, Minn., discussed when, how and to whom rankings are useful in a talk called “Apples, Oranges and Sour Grapes: When and How Do Rankings Make Sense?”

The raw data provided by ranking entities like U.S. News, Dr. McPherson said, can be useful both to institutions and to potential students. The overall weighted ranking assigned by such lists, however, is less so. He reminded listeners of the California Institute of Technology, which one year jumped from ninth in the U.S. News ranking to first—not because of any change in institutional behavior, but because the publication had changed its weighting system.

The next year, Dr. McPherson said, “They changed [the system] back, because the weights didn’t give what everybody knew was the right answer.” He added to laughter, “This is not science at its best.”
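As a rough illustration of his point, consider a toy composite score over a few rating categories. The schools, scores and weights below are invented for this sketch, not drawn from U.S. News data, but they show how changing only the weights can flip which institution comes out on top.

```python
# Hypothetical example: how changing category weights alone can reorder a ranking.
# Schools, scores and weights are invented for illustration; they are not U.S. News data.

schools = {
    "School A": {"selectivity": 98, "spending_per_student": 70, "class_size": 90},
    "School B": {"selectivity": 90, "spending_per_student": 99, "class_size": 85},
}

def composite(scores, weights):
    """Weighted sum of category scores (weights are assumed to sum to 1)."""
    return sum(weights[category] * scores[category] for category in weights)

# Same institutions, two different weighting schemes.
old_weights = {"selectivity": 0.60, "spending_per_student": 0.15, "class_size": 0.25}
new_weights = {"selectivity": 0.35, "spending_per_student": 0.40, "class_size": 0.25}

for label, weights in [("old weights", old_weights), ("new weights", new_weights)]:
    ranked = sorted(schools, key=lambda s: composite(schools[s], weights), reverse=True)
    print(label, "->", ranked)
# old weights -> ['School A', 'School B']
# new weights -> ['School B', 'School A']
```

Nothing about either school changes between the two runs; only the publisher’s weights do, which is the heart of the Caltech anecdote.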

To make rankings more useful, he suggested, they should have a logical link to outcomes, which would mean adopting one of two weighting systems.

“The first would be based on scientific evidence about how much these different categories matter,” Dr. McPherson explained. “So if you knew how much class size mattered to academic quality, you’d have a reason to give class size a specific weight.”

The second system would be based on value to consumers, he said. “You would ask, how much do people care about small classes versus research spending per student?”
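In code, the two approaches differ only in where the raw weights come from. The sketch below uses placeholder numbers (neither the effect sizes nor the survey shares come from the talk) to show the idea: derive weights either from evidence about how much each factor matters to an outcome, or from how much applicants say they care about it, then normalize.

```python
# Sketch of the two weighting approaches described above, using invented numbers.

def normalize(raw):
    """Scale a dict of non-negative raw weights so they sum to 1."""
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

# Option 1: evidence-based weights, e.g. proportional to each factor's estimated
# effect on an outcome such as graduation rate (hypothetical effect sizes).
evidence_weights = normalize(
    {"class_size": 0.8, "spending_per_student": 0.5, "selectivity": 1.2}
)

# Option 2: consumer-value weights, e.g. proportional to how strongly surveyed
# applicants say they care about each factor (hypothetical survey shares).
consumer_weights = normalize(
    {"class_size": 0.5, "spending_per_student": 0.1, "selectivity": 0.4}
)

print("evidence-based:", evidence_weights)
print("consumer-value:", consumer_weights)
```

Either scheme gives the composite score a defensible rationale, which is exactly what Dr. McPherson argued current weightings lack.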

Until such systems are in place, however, Dr. McPherson said that rankings would be prone to fall victim to Campbell’s Law—an adage by social scientist Donald T. Campbell that says, “The more any quantitative social indicator…is used for social decision-making, the more subject it will be to corruption pressures, and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

In this case, the phenomenon can lead to reactive and ultimately unproductive behavior geared toward climbing the rankings rather than increasing academic quality: limiting seminars to 19 students, for instance, or admitting students with lower test scores in the spring rather than the fall.

“This is what happens when you focus on the indicator instead of what it’s intended to measure,” he explained.

And as the Department of Education’s rating system is developed, he asked, “Will that move [institutions] to produce better outcomes—or just to produce better indicator results?”