President Barack Obama wants Washington to grade colleges and universities based on their costs and success rates, but some people are already way ahead of him.
At a time when students and their families are demanding to know what they’re getting for their mounting investments in higher education, several foundations and research centers already are working on new ways to show them.
Some schools – reasoning that it’s better to come up with their own ratings than to have them imposed by someone else – are even quietly developing new ways to gauge what graduates learn and earn, though so far many remain reluctant to make the results public.
“One thing everyone seems to agree on is that we should have a good way for people to choose where to go to college,” said Zakiya Smith, strategy director at the Lumina Foundation, a private group that promotes higher education.
The foundation is offering a $10,000 prize in a crowdsourced competition for the best way to make a White House website called the College Scorecard more user-friendly.
Obama has proposed that the government publicly rate colleges and universities by 2015, based on average student debt, graduation rates, graduates’ earnings and other barometers.
“The answers will help parents and students figure out how much value a college truly offers,” the president said in a speech last month at the University at Buffalo.
That’s information consumers increasingly want. In a survey released in January by Hart Research Associates, 84 percent of respondents supported requiring colleges to disclose their graduation, job-placement and loan-repayment rates.
“People are looking at, ‘Where do we get the biggest bang for our buck?’” said Terrell Halaska, a partner at HCM Strategists, a higher education consulting firm. “They’re desperately looking for high-quality consumer information. They don’t know where to turn. There are 1,000 different ranking systems out there.”
Universities are skeptical that the government should add yet another one. But some are privately working on their own ratings systems.
With money from the Bill & Melinda Gates Foundation, 18 higher education institutions have been at work on something called the Voluntary Institutional Metrics Project. Coordinated by HCM, it would provide college-by-college comparisons of cost, dropout and graduation rates, post-graduate employment, student debt and loan defaults, and how much people learn. (Gates and Lumina are among the funders of The Hechinger Report, which produced this story.)
But after two years, the group still hasn’t figured out how to measure what is, after all, the principal purpose of institutions of higher education: whether the people who go to them actually learn anything and, if so, how much.
The many existing privately produced rankings, including the dominant U.S. News & World Report annual “Best Colleges” guide, have historically rewarded universities based on the quality of the students who select them, and what those students know when they arrive on campus – based on their SAT scores, class rank and grade-point averages – rather than what they learn once they get there.
U.S. News has been gradually shifting toward incorporating in its rankings such “outputs” as graduation rates, the publisher says.
Still, the most popular rankings “have been almost completely silent on teaching and learning,” said Alexander McCormick, director of the National Survey of Student Engagement, another attempt by universities to measure their own effectiveness.
And that, he said, is “like rating the success of hospitals by looking only at the health of their patients when they arrive.”
The National Survey of Student Engagement, which is based at the Indiana University School of Education, seeks to change that calculation. Each spring, it surveys freshmen and seniors at as many as 770 participating universities and colleges about their classroom experiences, how much they interact with faculty and classmates, whether their courses were challenging, and how much they think they’ve learned.
But the project also spotlights a big problem with potentially valuable ratings collected by the institutions themselves: The schools are often unwilling to make them public.
“This tells you something about the sensitivity that exists right now about comparisons of institutions,” McCormick said. “A lot of institutional leaders essentially said, ‘If this is going to be public, we’re not going to do it.’”
So while it was conceived in 2000 with great fanfare as a rival to the U.S. News rankings, the National Survey of Student Engagement remains obscure and largely inaccessible. The results are given back to the participating institutions, and while a few schools make some of them public, others don’t, thwarting side-by-side comparisons.
There are other drawbacks to self-rating.
One is that the information is self-reported, and not independently verified, potentially inviting manipulation of the figures.
In the last two years, seven universities and colleges have admitted falsifying information sent to the Department of Education, their own accrediting agencies, and U.S. News: Bucknell University, Claremont McKenna College, Emory University, George Washington University, Tulane University’s business school, and the law schools at the University of Illinois and Villanova University.
Also, surveys like the one used by the National Survey of Student Engagement depend on students to participate, and to answer questions honestly. Last year, less than one-third of students responded to the national survey. Nonetheless, the surveys are a major part of another planned ranking of universities called U-Multirank, a project of the European Union.
Recognizing that it’s not always possible to compare very different institutions – as universities themselves often argue – U-Multirank will measure specific departments, ranking, for example, various engineering and physics programs.
Of the more than 650 universities that have signed on, 13 are American. The first rankings are due out at the beginning of next year.
“It doesn’t make sense to rank universities only on the level of the university as a whole,” said Frank Ziegele, managing director of Germany’s Centre for Higher Education and one of the coordinators of the project. “The existing rankings focus on a very narrow range of indicators, such as reputation and research, but they’re perceived as being comprehensive.”
The League of European Research Universities, which includes the Universities of Oxford and Cambridge in England, has already refused to take part, as have some other institutions. Many of those already do well in existing global rankings, including ones produced by the Times Higher Education magazine, a British publication; the publisher QS Quacquarelli Symonds; and the Academic Ranking of World Universities by Shanghai Jiao Tong University.
Still, there’s evidence that students and their families don’t rely as much on rankings as university administrators seem to fear. Rankings come in a middling 12th on a list of 23 reasons for selecting a college that students gave in an annual survey by the University of California, Los Angeles, Higher Education Research Institute.
“People appreciate information,” said Smith of the Lumina Foundation. “When you buy a car, a lot of things may come into consideration, but you still want to know what the gas mileage is. And you have the right to know.”