Higher Education throughout the world

The State of the Rankings

November 11, 2010

Philip G. Altbach

With the arrival of the new academic year in much of the world, the rankings season is under way. The major international rankings have appeared in recent months — the Academic Ranking of World Universities (ARWU, the “Shanghai Rankings”), the QS World University Rankings, and the Times Higher Education World University Rankings (THE). Two important U.S. rankings have also been published — the U.S. News & World Report America’s Best Colleges rankings and the much-delayed National Research Council’s Assessment of Research Doctorate Programs. These are but a few of the many rankings of national or regional postsecondary institutions. For example, the European Union is currently sponsoring a major rankings project, and in Germany the Center for Higher Education Development has formulated an innovative approach to ranking German universities. The list can be extended.

The Inevitability of Rankings

If rankings did not exist, someone would invent them. They are an inevitable result of mass higher education and of competition and commercialization in postsecondary education worldwide. Potential customers (students and their families) want to know which of many higher education options is most relevant and most advantageous for them, and rankings provide some answers. It is not surprising that rankings first became prominent in the United States, the country that experienced massification earliest, as a way of choosing among a growing number of institutions. Colleges and universities themselves wanted a way to benchmark against peer institutions, and rankings provided an easy, if highly imperfect, way of doing so. The most influential and most widely criticized general ranking is the U.S. News & World Report America’s Best Colleges ranking, now in its 17th year. Numerous other rankings exist as well, focusing on variables ranging from the “best buys” to the best party schools and the most “wired” campuses. Most of these rankings have little validity but are nonetheless taken with some seriousness by the public.

As postsecondary education has become more internationalized, the rankings have, not surprisingly, become global as well. Almost three million students study outside their own countries; many seek the best universities available abroad and find rankings quite useful. Academe itself has become globalized, and institutions seek to benchmark themselves against their peers worldwide — and often to compete for students and staff. For all their problems, the rankings have become a high-stakes enterprise with implications for academe worldwide. For this reason alone, they must be taken seriously and understood.

Rankings Presume a Nonexistent Zero-Sum Game

By definition, there can be only 100 universities in the top 100. Yet the fact that the National University of Singapore improves does not mean, for example, that the University of Wisconsin at Madison is in decline — even if NUS rises in the rankings, perhaps forcing some other institution down. In fact, there is room at the top for as many world-class universities as meet the accepted criteria for such institutions. Indeed, as countries accept the need to build and sustain research universities and to invest in higher education generally, it is inevitable that the number of distinguished research universities will grow.

The investments made in higher education by China, South Korea, Taiwan, Hong Kong, and Singapore in the past several decades have resulted in the dramatic improvement of those countries’ top universities; Japan showed similar improvements a decade or two earlier. The rise of Asian universities is only partly reflected in the rankings, since it is not easy to knock the traditional leaders off their perches, and so the rankings undervalue the advances in Asia and perhaps other regions. As fewer American and British universities inevitably appear in the top 100 in the future, it will not mean that those universities are in decline; rather, improvement is taking place elsewhere. This is a cause for celebration, not hand-wringing.

Perhaps a better idea than rankings is an international categorization similar to the Carnegie Classification of Institutions of Higher Education in the United States. Between 1970 and 2005, the Carnegie Foundation provided a carefully defined set of categories of colleges and universities and then assigned institutions to these categories according to clear criteria. The schools were not ranked but rather delineated according to their missions. An international categorization of this kind would avoid the zero-sum problem. Many argue that the specific ranking number of a university makes little difference; what may have validity is the range in which a university finds itself. It is more useful to know whether an institution falls in the range of 15 to 25 or 150 to 170 than whether it is ranked 17 or 154. Delineating by category might capture reality better.

Where Is Teaching in International Rankings?

In a word — nowhere. One of the main functions of any university is largely ignored in all of the rankings. Why? Because the quality and impact of teaching are virtually impossible to measure and quantify, and comparing teaching across countries and academic systems is even more difficult. Thus, the rankings have largely ignored teaching. The new Times Higher Education rankings recognize the importance of teaching and have assigned several proxies to measure it, including reputational questions about teaching, teacher-student ratios, the number of Ph.D.s awarded per staff member, and several others. The problem is that these criteria do not actually measure teaching, and none comes close to assessing its quality or impact. Further, it seems unlikely that asking a cross-section of academics and administrators about teaching quality will yield much useful information. At least THE has recognized the importance of the issue.

What, Then, Do the Rankings Measure?

Simply stated, the rankings largely measure research productivity in various ways. This is the easiest thing to assess — indeed, perhaps the only thing that can be reliably measured. Some, especially QS, emphasize reputational surveys — what do academics around the world think of a particular university? As a result, QS mainly assesses what a somewhat self-selected group of academics thinks of various universities, along with some other non-reputational factors. Times Higher Education looks at a number of variables, including the opinions of academics, but, along with its data partner Thomson Reuters, it has selected a variety of other measures — the impact of published articles as measured by citation analysis, funding for research, income from research, and several others. The Shanghai-based Academic Ranking of World Universities measures only research and is probably the most precise in measuring its particular set of variables.

Research, in its various permutations, earns the most emphasis not only because it is relatively easy to measure but also because it carries the highest prestige — universities worldwide want to be research intensive, and the most respected and top-ranked universities are research focused. These two factors have been a powerful force reinforcing the supremacy of research both in the rankings and in the global hierarchy.

Centers and Peripheries

The universities and academic systems located in the world’s knowledge centers, and the scholars and scientists in these institutions, not surprisingly have major advantages in the rankings. The academic systems of the major English-speaking countries such as the United States, the United Kingdom, Canada, and Australia have significant head starts. Historical tradition, language, wealth, the ability to attract top scholars and students worldwide, strong traditions of academic freedom, an academic culture based on competition and meritocracy, and other factors contribute to the dominant positions of these universities.

All of the rankings privilege certain kinds of research and thus skew the league tables. There is a bias toward the hard sciences — the STEM fields (science, technology, engineering, and mathematics) — which tend to produce the most articles, citations, and research funding. The rankings are also biased toward universities that use English and the academics who work in them: most of the journals included in the relevant databases are in English, and it is easiest for native English speakers and professors at these universities to gain access to the top journals and publishers and to join the informal networks that tend to dominate most scientific disciplines.

Universities in Western Europe and Japan have relatively easy access to the key knowledge networks and generally adequate support. Academic institutions in Hong Kong and Singapore have the advantages of financial resources, English as the language of teaching and research, and a policy of employing research-active international staff, all of which have permitted their universities to do well in the rankings. The emerging economies, most notably China, are increasingly active as well and are moving from periphery to center. Even well-supported universities in peripheral regions, such as the Middle East, face disadvantages in becoming academic centers. There are strong links between the central or peripheral status of a country or academic culture and the placement of its universities in the rankings.

In the age of globalization, it is easier for academic institutions to leapfrog the disadvantages of peripherality with thoughtful planning and adequate resources. Individual academics as well as institutes and departments can also make a global mark more easily than ever before. While the barriers between centers and peripheries are more permeable, they nonetheless remain formidable.

Changing the Goalposts

Many of the rankings have been criticized for frequently changing their criteria or methodology, making it difficult to measure performance over time or to make useful comparisons among institutions. U.S. News & World Report has been particularly prone to changing criteria in unpredictable ways, making it extremely difficult for the colleges and universities providing data to do so consistently. The Times Higher Education rankings, now in their first year, are likely to change to some extent as efforts are made to improve the methodology. The Shanghai rankings have been the most consistent over time, no doubt contributing to the relative stability of institutions and countries in their results.

A 2010 Critique

It may be useful to analyze briefly the main rankings as a way of understanding their strengths and, more important, their weaknesses. While this discussion is neither complete nor based on a full analysis of the rankings, it will provide some reasons for thinking critically about them.

The QS World University Rankings are the most problematic. From 2004 to 2009, these rankings were published with Times Higher Education; after that link was dropped, Times Higher Education began publishing its own rankings. From the beginning, QS has relied on reputational indicators for a large part of its analysis. Most experts are highly critical of the reliability of simply asking a rather nonrandom group of educators and others involved with the academic enterprise for their opinions. In addition, QS queries the views of employers, introducing even more variability and unreliability into the mix. Some argue that reputation should play no role at all in rankings, while others say it has a role, but a minor one. Forty percent of the QS ranking is based on a reputational survey, which probably accounts for the significant variability in the QS rankings over the years. Whether the QS rankings should be taken seriously by the higher education community is questionable.

The Academic Ranking of World Universities (ARWU), often referred to as the Shanghai Jiao Tong rankings, is now administered by the Shanghai Rankings Consultancy. One of the oldest of the international rankings, having been started in 2003, ARWU is both consistent and transparent. It measures only research productivity, and its methodology is clearly stated and applied consistently over time. It uses six criteria, including the number of articles published in Science and Nature, the number of highly cited researchers as identified by Thomson Scientific, alumni and staff winning Nobel Prizes and Fields Medals, and citations recorded in the Science Citation Index and Social Science Citation Index, among others. ARWU chooses 1,000 universities worldwide to analyze and does not depend on any information submitted by the institutions themselves. Some of ARWU’s criteria clearly privilege older, prestigious Western universities — particularly those that have produced or can attract Nobel prizewinners. These universities tend to pay high salaries and have excellent laboratories and libraries. The indexes used also rely heavily on top peer-reviewed journals in English, again giving an advantage to the universities that house editorial offices and key reviewers. Nonetheless, ARWU’s consistency, clarity of purpose, and transparency are significant advantages.

The Times Higher Education World University Rankings, which appeared in September, are the newest and in many ways the most ambitious effort to learn lessons from earlier rankings and provide a comprehensive and multifaceted perspective. Times Higher Education gets an A for effort, having tried to include the main university functions — research, teaching, links with industry, and internationalization. The publication has included reputation among the research variables and has combined it with analyses of citations, numbers of publications, degrees produced, and other measures. Disappointingly but not surprisingly, there are problems. Some commentators have raised questions about the methodologies used to count publications and citations.

There are a number of inconsistencies — some of the American entries are not single campuses but rather multi-campus systems counted as one institution (examples include the University of Massachusetts, Indiana University, the University of Delaware, Kent State University, and others). This unfairly inflates the rankings of these “systems.” If, for example, the University of California were included as a system rather than as individual campuses, it would clearly rank number one in the world. Some of the rankings are clearly inaccurate. Why do Bilkent University in Turkey and Hong Kong Baptist University rank ahead of Michigan State University, Stockholm University, or Leiden University in the Netherlands? Why is Alexandria University ranked in the top 200 at all? These anomalies, and others, simply do not pass the “smell test.” One hopes that these, and no doubt other, problems can be worked out.

A word should be said about the long-awaited National Research Council evaluation of American doctoral programs. This study, years late, has been widely criticized for methodological flaws and for being more of a historical artifact than a useful analysis of current reality. Nonetheless, the National Research Council attempted a much more sophisticated approach to assessment, considering some 20 key variables relating to doctoral programs, whereas the other rankings tend to rely on more arbitrary measures and weightings. Even if total success was not achieved, there are no doubt lessons to be learned.

The U.S. News & World Report annual ranking juggernaut continues. Widely criticized in the United States for its constant changes in methodology, its overreliance on reputational indicators, and its oversimplification of a complex reality, it is nonetheless widely used and highly influential. Colleges and universities that score well publicize their ranks, even if they grumble about methodological shortcomings. And at least U.S. News & World Report differentiates institutions by category — national universities, liberal arts colleges, regional institutions, and so on. This recognizes variations in mission and purpose, and acknowledges that not all universities are competing with Harvard and Berkeley.

Where Are We?

In the world of rankings, as in much else, it is caveat emptor — users must be fully aware of the uses and the problems of the rankings. Too often this is not the case. Solid numbers — which university is ranked number one and which is 199 — are persuasive to many users. This, of course, is a mistake, not only because of the limitations of the rankings themselves but because the rankings measure only a small slice of higher education. A government should be just as concerned about how a university fits into the higher education system as about its research-based rank. Students should be concerned about the fit between their own interests and abilities and a particular institution, as well as its prestige. And few take into account the shortcomings of the rankings themselves.

Railing against the rankings will not make them go away; competition, the need to benchmark, and indeed the inevitable logic of globalization make them a lasting part of the academic landscape of the 21st century. The challenge is to understand the nuances and the uses — and misuses — of the rankings.

Read more: http://www.insidehighered.com/views/2010/11/11/altbach

Inside Higher Ed