A little late to the game, President Obama has suggested, as part of his sweeping proposal to reform higher education and financial aid, that colleges and universities be ranked. In a world in which nearly everything gets sorted, rated and spit back out into some kind of list, colleges have long been among the most-ranked subjects of all. Does it really matter if the U.S. government steps in to rank them as well? Probably not. But should it be spending taxpayer dollars to do something a lot of other organizations already do?
Students and their families are increasingly demanding to know what they’re getting for their mounting investments in higher education. Meanwhile, according to a piece in the Hechinger Report, several foundations and research centers are already working on new ways to show them. Even some universities and colleges themselves — reasoning that it’s better to come up with their own ratings than have them imposed by someone else — are quietly working on new ways to gauge what graduates learn and earn, though so far many remain reluctant to make the results public.
Obama has proposed that the government publicly rate colleges and universities by 2015 based on such things as average student debt, graduation rates, and graduates’ earnings. That’s information consumers increasingly want. In a survey released in January by Hart Public Opinion Research, 84 percent of respondents supported requiring colleges to disclose information about graduation, job-placement, and loan-repayment rates.
For their part, universities have responded with official skepticism to the idea that the government should add yet another ranking to the pile. But some are privately working on their own ratings systems.
With money from the Bill & Melinda Gates Foundation, 18 higher-education institutions have been at work on something called the Voluntary Institutional Metrics Project, coordinated by HCM, which proposes to provide college-by-college comparisons of cost, dropout and graduation rates, post-graduate employment, student debt and loan defaults, and how much people learn.
It’s that last category that has proven trickiest. After two years, the group still hasn’t figured out how to measure what is, after all, the principal purpose of universities and colleges: whether the people who go to them actually learn anything, and, if so, how much.
The many existing privately produced rankings, including the dominant U.S. News & World Report annual “Best Colleges” guide, have historically rewarded universities based on the quality of the students who select them, and what those students know when they arrive on campus — their SAT scores, class rank, and grade-point averages — rather than what they learn once they get there.
U.S. News says it has been gradually shifting its rankings toward incorporating such “outputs” as graduation rates.
Still, the most-popular rankings “have been almost completely silent on teaching and learning,” according to Alexander McCormick, who heads up the National Survey of Student Engagement, or NSSE.
Each spring, NSSE surveys freshmen and seniors at as many as 770 participating universities and colleges about their classroom experiences: how much they interact with faculty and classmates, whether their courses were challenging, and how much they think they’ve learned.
But the project also spotlights a big problem with potentially valuable ratings collected by the institutions themselves: The schools are often unwilling to make them public. While it was conceived in 2000 with great fanfare as a rival to the U.S. News rankings, NSSE remains obscure and largely inaccessible. The results are given back to the participating institutions, and while a few schools make some of them public, others don’t, thwarting side-by-side comparisons.
There are other drawbacks to letting universities rate themselves. One is that the information is self-reported, and not independently verified, potentially inviting manipulation of the figures. In the last two years, seven universities and colleges have admitted falsifying information sent to the Department of Education, their own accrediting agencies, and U.S. News: Bucknell, Claremont McKenna, Emory, George Washington, Tulane’s business school, and the law schools at the University of Illinois and Villanova.
Also, surveys like the one used by NSSE depend on students to participate, and to answer questions honestly. Last year, fewer than one-third of students responded to the NSSE survey. Student surveys are nonetheless a major part of another planned ranking of universities called U-Multirank, a project of the European Union.
Recognizing that it’s not always possible to compare very different institutions — as universities themselves often argue — U-Multirank will measure specific departments, ranking, for example, various engineering and physics programs.
Of the more than 650 universities that have signed on, 13 are American; the first rankings are due out at the beginning of next year.
The League of European Research Universities, which includes Oxford and Cambridge, has already refused to take part, as have some other institutions. Many of the holdouts already do well in existing global rankings, including those produced by Times Higher Education magazine, the publishing company QS Quacquarelli Symonds, and the Shanghai World University Rankings.
For all of this activity, there’s evidence that students and their families don’t rely on rankings as much as university administrators seem to fear. In an annual survey by the UCLA Higher Education Research Institute, rankings came in a middling 12th on a list of 23 reasons students gave for choosing a college.