University ranking

From Wikipedia, the free encyclopedia

University rankings evaluate the quality of research and teaching at universities, as well as faculty development, on the basis of various criteria. Some rankings assign each university an individual rank; others divide universities into rank groups. University rankings serve as a source of information for high school graduates, students, universities, employers, ministries and other interested parties.


Originally popular in the USA in particular, university rankings have also become established in Germany since the early 1990s. In Germany, rankings have been published by Spiegel, Focus, Handelsblatt, Junge Karriere, Wirtschaftswoche, Capital and the FAZ Hochschulanzeiger, among others. There are also global rankings such as World's Best Universities, the Times Higher Education Supplement World Ranking and the Academic Ranking of World Universities, compiled at Jiao Tong University in Shanghai.

University rankings usually aim to show qualitative differences between individual subjects or subject groups at different universities. This data is intended to provide decision-making aids for various interest groups (e.g. first-year students, companies, political actors).

Critics consider university rankings unsuitable as a basis for choosing a university, citing methodological deficiencies, inadequate underlying data and overly general statements. All of these rankings have strengths and weaknesses, both in methodology and in data collection.


External assessment procedures in general are repeatedly criticized, and in the USA there is organized resistance to university rankings. At its core, the debate concerns the extent to which the freedom of research and teaching is endangered by external assessment. Doubts about credibility relate both to the methodology of the survey and ranking and to the political interests of the rating agencies.

Supporters of rankings welcome the competition that arises from a public presentation of quality comparisons accessible to laypeople. Rankings of German universities often have a strong effect on the “demand behavior” of students, teachers and researchers. Supporters assume that universities in top positions not only attract a strong influx of the “best” people but often also the highest inflows of third-party funding. For poorer-performing universities the reverse applies, which is expected to create pressure to improve quality.

Rankings in which students at the relevant university locations are asked about their satisfaction (usually just one criterion among several) do not, according to critics, compare the absolute quality of the courses on offer so much as the relationship between students' expectations on the one hand and what is actually offered on the other. Departments whose students have high expectations (which are in turn shaped by the offerings) can fare worse than departments whose students have lower expectations that are nevertheless met.

Fundamental problems of statistical surveys, such as low response rates or poor comparability of the subjects examined (due to different structures, orientations or study concepts), have also repeatedly drawn objections. Students could pursue a strategy of giving systematically good ratings in surveys in order to upgrade their degree and help their department or university do well in rankings. While proponents of university rankings emphasize that a poor ranking result must inevitably lead to an improvement in quality, a systematic overvaluation of one's own university can produce a certain operational blindness: lecturers receive no incentive to reconsider or improve their teaching, which can in turn lead to a gradual deterioration in its quality. As a result, a university's ranking results can stand in stark contrast to its actual teaching quality. Rankings based on online surveys, such as the Spiegel or Karriere rankings, are extremely susceptible to this, as they can also be manipulated automatically without difficulty. In the case of the CHE ranking, it is further criticized that the methods for collecting figures and data are neither communicated nor verifiable; the universities themselves do not know how the CHE arrived at these figures. In particular, the comparability of the same key figure across different universities is strongly doubted.

The meaningfulness of so-called personnel rankings, in which HR managers are asked to rate universities, is also questioned. These often evaluate not the actual performance of the subjects but personal sympathies for study locations. This has been demonstrated, among other things, by studies in which subjects at “renowned” universities scored very well in surveys of HR staff even though those subjects were not taught at the universities in question at all.

Criticism of the CHE ranking

The CHE ranking is criticized for publishing neither the data set itself nor a detailed scientific description of the methodology.

Finally, critics point out several weaknesses specific to the CHE ranking. The criterion “recommendation by professors for a course” has little informative value, since it is doubtful whether external professors can actually make a qualified judgment about studying at another university. Furthermore, research award winners and some small subjects are not considered at all in the research ranking, which distorts the results of the investigation. In addition, for data protection reasons, the CHE has the survey documents distributed by the universities themselves, which makes manipulation by the universities possible.

The CHE ranking sets ranking parameters and samples arbitrarily. Critics object that the samples are often too small to deliver meaningful results.

The CHE ranking does not standardize multiple answers, e.g. in the area of professors' research strength. As a result, the results are not comparable for the reader and cause confusion, since a relative comparison is no longer possible even though results are given in percentages. This underlines the insufficient informative value of the key figures offered.

In some cases there are significant data gaps in the CHE ranking (e.g. the number of graduates or the average grade), so that universities cannot be compared. The origin of the data is also not clarified, and in particular it is not clear how it was produced (the average grade may be from the last year or given as the average over the last x years). This casts doubt on the other data, e.g. average publications or third-party funding, which cannot be verified, or only with considerable and expensive effort.

It also remains unclear in the CHE ranking why there were unexplained percentage adjustments and category changes, including in business administration, even though that subject was not ranked at all that year. It is also questionable how a value of −1% for the “proportion of teaching by practitioners”, or the number of computer workstations reported for the Fernuni Hagen, should be assessed.

The CHE ranking only differentiates between three ranking groups (top, middle, bottom), within which the differences are viewed as insignificant. This gives RWTH Aachen an advantage, because results within a group are presented alphabetically by city and most users are not aware of this.

Another point of criticism, which affects many rankings, is the varied assignment of subjects to departments. A subject can be located in many departments at a given university; conversely, a department can contain many subjects. The CHE ranking, for example, assumes only one department and subordinates each subject to it, an assignment that is in reality arbitrary for the reasons described. For example, a subject that is not primarily a technical or natural-science subject but is placed in such a department can be credited with significantly more research grants, patents and doctorates than corresponds to reality, and is thereby placed significantly better than the same subject at other universities.

In 2007, Switzerland decided to withdraw from the CHE ranking because of considerable deficiencies in data and methods. The AQA also ended its participation in the CHE ranking in August 2007 due to methodological weaknesses. The Association of Historians rejects the ranking as misleading to students.

In contrast, the business-oriented Educational Policy Institute rated the CHE ranking as “no less than brilliant” in a comparison of 19 international university rankings. Unlike ranking lists with often questionable indicators, the German ranking actively involves the universities in its creation and thus achieves “high data quality at the institutional level”. The European University Association likewise stated in 2005: “The system used by the CHE to evaluate universities is probably the best model available in the world of higher education”.

The selection of examination criteria, the weightings and the grading of the courses of study examined are increasingly being questioned critically in view of the commercial interests perceived at the CHE, the Bertelsmann Foundation and the German Rectors' Conference. In mid-2012, the DGS called on its members to boycott the CHE university ranking due to “serious methodological weaknesses and empirical gaps”.

Criticism of the Spiegel ranking

Der Spiegel publishes a ranking together with McKinsey and AOL. Students at the University of Marburg criticize that neither the data set nor a detailed scientific description of the methodology is published, so the figures cannot be verified. The ranking does not pursue a clearly measurable goal, parameters are set arbitrarily, and figures have been calculated incorrectly. The ranking therefore has no informative value.

References

  1. Dominik Rohn, Karsten Weihe: Are rankings inherently arbitrary? Research & Teaching, No. 9/2013, pp. 740–741; online version in Wissenschaftsmanagement Online.
  2. Austria no longer takes part in the CHE university ranking – Uni-Studien › Education. In: Accessed on June 19, 2012: “Kohler: The CHE ranking is based on a very small sample size and is therefore only of limited statistical significance. That was the reason not to continue the project.”
  3. [1]
  4. Austria no longer takes part in the CHE university ranking – Uni-Studien › Education. In: Accessed on June 19, 2012: “Kohler: […] In the next round, the universities will no longer participate via the AQA. We have identified some key methodological criticisms of the CHE ranking. That was the reason not to continue the project.”
  5. “They lead students astray” – university rankings. (No longer available online.) In: Die Zeit, April 15, 2010; archived from the original on April 26, 2010; retrieved on June 19, 2012: “PLUMPE: A guideline for students makes sense in itself, but you are not creating a guideline, but a kind of Bundesliga table that misleads the students.”
  6. Jan-Martin Wiarda: Orientation: In search of the dream university – ZEIT ONLINE. In: ZEIT ONLINE GmbH, November 7, 2007; accessed on June 19, 2012: “The Association of European Universities had already found in a study the previous year: ‘The system used by the CHE to evaluate universities is probably the best model available in the world of higher education.’”
  7. “The CHE ranking should be abolished”. Interview with Clemens Knobloch. In: Background – University Policy – Studis Online, March 8, 2010; accessed on June 19, 2012: “Clemens Knobloch: The CHE is about as non-profit as the pharmaceutical lobby, […] In addition, Bertelsmann also has an ideological program: the penetration of all public areas with the spirit of the market, competition, competition.”
  8. Michalke, M., Naß, O., & Nitsche, A .: More humor and no cats - Bertelsmann brand ranking products. (pdf; 273 kB) In: Network of Power - Bertelsmann. Wernicke, J. & Bultmann, T., 2007, p. 30 , accessed on June 19, 2012 .
  9., "Sociologists no longer want: CHE ranking under fire" from July 5, 2012, leading article with many links on the topic
  10. Maia Höding, Meik Michalke, Oliver Nass: “The Magic of Numbers” – the university rankings of AOL, McKinsey and Spiegel. (PDF; 149 kB) (No longer available online.) In: Student Council Psychology, Philipps University Marburg; archived from the original on August 25, 2014; retrieved June 19, 2012.