Peer review

From Wikipedia, the free encyclopedia

A peer review (from English peer, "equal", and review, "appraisal"; more rarely in German: Kreuzgutachten) is a procedure for assuring the quality of a scientific work or a project by independent experts from the same field.

In today's scientific community, peer review is of paramount importance for assessing the suitability of a scientific text for publication and thereby guaranteeing the quality of scientific publications. The purpose of the peer review process is to subject scientific articles to critical examination by experts before they are published. The authors of the reviewed work must take any criticism seriously and either correct the errors found or explain why the reviewers' comments are mistaken before the study can be published. Moreover, at least in the natural sciences, a scientific claim only becomes a potentially valid thesis once it has successfully passed a peer review process.

Many scientific journals use peer review. Likewise, the quality of applications for funding research projects is usually assessed by means of a peer review (see also third-party funding ).

The peer review method is also used for quality assurance in other areas of society outside of the academic world.

Academic-scientific area

Procedure and purpose

In the academic and scientific field, peer reviews of journal articles (and increasingly also of monographs) are common: one or more experts in the relevant field evaluate the study proposed for publication. Usually the author sends the article as a manuscript to a responsible person (e.g. the editor) of a journal or publication series. If the editor considers the text fundamentally suitable, he selects reviewers who, after examining the content, vote on whether the article should be published in the form submitted, returned to the author for revision, or rejected outright. These experts, also called reviewers or referees, must not come from the author's own circle, in order to avoid bias. The independence of the reviewer from the object under review is the essential criterion of a peer review; it must be ensured by the editors.

Anonymity of the reviewer is not strictly necessary, but it is common. Anonymity allows the reviewer to voice criticism and point out deficiencies in the work without fearing retaliation from an author who may outrank him in hierarchy, reputation, or influence. This is intended to ensure a thorough and unbiased review regardless of who the author is, and ultimately to raise the scientific standard. The principle of reviewer anonymity is, however, not undisputed.

Peer review is not designed as a method for detecting plagiarism, falsification, or fraudulent experiments, nor does it guarantee that the scientific work is free of errors. Within the limits of his means, the reviewer can only assess the significance and timeliness of the research question, the originality and validity of the approach, and the plausibility of the results in context, and point out methodological errors and problems.

The purpose of the review lies primarily in assessing the quality of a submitted manuscript, giving the editor of the journal an indication of whether it can be published there as an article. Given the large number of scientific journals and fields, evaluation standards differ widely and depend on the readership and reputation of the journal. As a rule, the reviewer evaluates the manuscript for obvious deficits or possible improvements and only occasionally points out spelling mistakes or linguistic shortcomings. Very detailed reviews, including scrutiny of the methods used, are required above all for articles that deal with controversial or prestigious fields (e.g. stem cell research) or that are of extremely high interest to a large audience (e.g. in Nature or Science).

In addition to quality assurance, peer review also helps the authors of a reviewed work to present their arguments more conclusively.

Double-blind review

If both the reviewers and the authors under review remain anonymous, the procedure is called a double-blind review. The aim is to prevent the submitter's renown, or a possible relationship between reviewer and submitter, from influencing the evaluation of the work, and to keep the submitter from influencing the reviewer. Young scientists in particular can benefit from this procedure, because their contribution (and not their reputation) is decisive. Authors are then required to avoid passages in the text that could undermine anonymity (e.g. self-citations in the first person, references to their own research institution). In many cases, however, the authors can still be guessed from the references, the experimental facilities, and the like, especially if the specialty in question is researched by a manageable number of people. For this reason, among others, the authors' names are in many cases not concealed.

History

According to legend, Henry Oldenburg, first secretary of the Royal Society of London and founding editor of the Philosophical Transactions, published in London since 1665, was, as a theologian, unable to adequately assess the quality of submitted essays on scientific topics. He therefore delegated this task to other scientists regarded as competent on the respective topic, a procedure later adopted by other scientific journals. According to Melinda Baldwin, this legend originated in 1971. It goes back to the sociologists of science Harriet Zuckerman and Robert K. Merton, but has almost nothing to do with actual scientific practice in the Royal Society of the 17th century.

Prevalence

Worldwide there are approximately 21,000 journals that use some type of peer review; together they publish about 1 million articles annually. However, many scientific journals work only with editorial reviews.

Owing to the quality check associated with the review, peer-reviewed publications enjoy a better reputation than other forms of publication, such as conference papers or journals without peer review. The number of such publications is taken as a measure of an author's productivity and influence on a field of knowledge.

Feigned peer review

Alongside journals with genuine peer review, there are also journals that merely simulate quality-assuring peer review, so-called predatory journals. Faced with a growing number of such electronic open-access journals, which often only claimed to conduct some kind of peer review, the journalist John Bohannon tested them in 2013 with a fake clinical study of a cancer drug that contained very obvious, serious errors (among other things, the authors promised to treat patients with the drug without waiting for further results). Versions of the study were submitted to 304 online journals, of which 255 responded and 106 carried out a review. Around 70% (156 in total) accepted the article (journals that no longer appear were not counted; including them, the figure was around 60%). Only one journal (PLOS ONE) carried out a detailed review and then rejected the article because of its serious violation of ethical rules. Bohannon published his results in Science, which interpreted them as a clear plea for established journals with serious peer review. However, some of the online journals affected came from major international publishing houses. Jeffrey Beall coined the term predatory journals for online journals with such dubious practices.

Criticism

The peer review process has been criticized for several reasons:

  1. It usually takes several months, in some cases even years, for a specialist article to appear.
  2. The neutrality of the reviewers is not guaranteed. There is no guarantee that the reviewers will not use their own point of view on disputed issues as a basis for decision-making.

Occasionally it is criticized that the process favors excessive, destructive criticism. Established experts in a branch of science can use unfounded, derogatory reviews to keep competitors out of their "niche" and, under anonymity, never have to justify themselves by name. The anonymity of reviewers thus promotes "territorial behavior" and hinders efficient competition over quality.

Reviewer anonymity can also lead to assessments that are not drawn up conscientiously enough, owing to lack of time, insufficient interest, or lack of expertise. In this way a bad article can pass the review process as good without the reviewer having to fear for his reputation in the scientific community.

The statistician and methods critic John Ioannidis, himself an advocate of peer review (he has published around 400 peer-reviewed publications as of 2008 and serves on the editorial boards of 18 peer-reviewed journals), criticizes the process as suboptimal: renowned reviewers can use it to suppress the appearance and dissemination of findings that run counter to their own, thereby maintaining false dogmas within their research field. Empirical evidence has shown expert opinions to be extremely unreliable.

Peer reviews have repeatedly become the subject of science-related conspiracy theories, which have appeared more frequently in recent decades, for example in connection with the denial of man-made global warming: these allege that reviewers secretly follow a political agenda or withhold important points. The American sociologist Ted Goertzel therefore advocates making reviews more transparent: the composition of review panels should no longer be anonymous, all of the researchers' data should be made accessible to them, and specialists should be given the opportunity to present alternative perspectives, provided these rest on an adequate body of data. Conspiracy-theoretical suspicions against peer reviews can, however, never be completely ruled out.

In a 2012 study published in Science, Vincent Calcagno et al. found that articles initially rejected by one journal, then submitted to another journal and finally published, tend to be cited more often than other articles in that journal. One possible reason is that such an article deals with a controversial topic or uses a new method that one reviewer views critically but that is nonetheless of interest to the professional community.

In 2015, researchers presented in Nature a method for assessing the reproducibility of psychological studies: in a stock-market-like model, reviewers could place bets on particular studies. This achieved significantly better results than assessments by individual reviewers.

In 2018, the historian Caspar Hirschi criticized the introduction of peer review after 1960 as part of an "unprecedented instrumentalization of science for political-military purposes", which had made possible an "equally unprecedented commercialization of scientific publishing". The anonymized review process, he argues, draws a cloak of silence over failed submissions: "The efficiency of peer review lies in the non-confrontational exercise of power, as the reviewers have no face and the reviewed have no voice. The system silently creates a fait accompli. For commercial journal publishers, peer review has the double advantage that they outsource the selection work for free and cannot be held liable for the quality of the published content. In the case of fraudulent or erroneous publications, responsibility falls first on the reviewers, then on the editors, and only last on the publisher." Hirschi favors abolishing peer review. Quality control of manuscripts should be carried out by the journals' own editors, as is done by book publishers, some of which maintain high-quality academic series. In state funding agencies, expert committees with decision-making authority would have to be composed from such a broad spectrum that external assessment by peer review could be dispensed with when examining applications.

Alternatives to traditional peer review

In connection with the serials crisis and electronic publishing, new quality-assurance procedures are developing. A pioneer in this area is Stevan Harnad. His suggestions, somewhat reminiscent of a wiki, have not yet caught on, however, and little empirical evidence about them is available.

In 2006 a group of scientists from Great Britain started the online journal Philica, which tried to solve the problems of traditional peer review. Unlike the usual procedure, all submitted articles are published first, and the open peer review process only begins afterwards. The reviewers are not selected by the editors; instead, any researcher who wishes to can criticize the article. The reviewers remain anonymous. The reviews are attached at the end of each article and give the reader an assessment of the quality of the work. The advantage of this system is that even unorthodox research approaches get published and cannot be suppressed by established experts, as can happen in classic peer review.

A similar project is the Dynamic Peer Review of the Naboj website. The difference from Philica is that Naboj is not a complete online journal but a forum for reviews of preprint articles. The system offers users the opportunity to rate both the articles and the individual reviews. With a sufficiently large number of users and reviewers, the system thus has the advantage that quality is assessed democratically.

In June 2006, Nature began a trial called parallel open peer review: some articles submitted for the traditional review process were also made publicly available for comment in parallel. The trial was judged unsuccessful in December 2006 and discontinued.

An increasing number of journals has adopted the registered report format to counter scientific misconduct such as HARKing and p-hacking. For a registered report, the authors of a study first write a proposal containing the theoretical and empirical background, research questions and hypotheses, and possibly pilot data. After submission to the journal, the proposal is reviewed before the actual data are collected. If the review is positive, the manuscript written after data collection is published regardless of the study's results.
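The problem that registered reports address can be made concrete with a small simulation (a hypothetical sketch, not part of the source): if a researcher runs many uncorrected significance tests on pure noise and reports only the "hits", the chance of at least one false positive per study far exceeds the nominal 5% level.

```python
# Hypothetical illustration of p-hacking: run many significance tests on
# pure noise and count how often at least one comes out "significant".
import math
import random

random.seed(42)

def t_statistic(a, b):
    """Two-sample t statistic for equal group sizes (pooled variance)."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    return (mean_a - mean_b) / math.sqrt((var_a + var_b) / n)

N_STUDIES, N_TESTS, N_SUBJECTS = 2000, 20, 30
CRITICAL_T = 2.0  # roughly the two-sided 5% threshold for df = 58

studies_with_false_positive = 0
for _ in range(N_STUDIES):
    # One "study": 20 independent outcome measures, all pure noise,
    # so every "significant" result is a false positive.
    significant = 0
    for _ in range(N_TESTS):
        group_a = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
        group_b = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
        if abs(t_statistic(group_a, group_b)) > CRITICAL_T:
            significant += 1
    if significant > 0:
        studies_with_false_positive += 1

rate = studies_with_false_positive / N_STUDIES
# Expected value is about 1 - 0.95**20 ≈ 0.64, not 0.05.
print(f"Studies with at least one 'significant' result: {rate:.0%}")
```

Because a registered report fixes hypotheses and analysis plan before data collection, and publication no longer depends on the results, this incentive for selective reporting disappears.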

Assessment of applications

In the scientific community, peer reviews take place not only for journal publications but also for the allocation of measurement time at large research facilities and for project funding. Funding bodies (national organizations such as the German Research Foundation or the Swiss National Science Foundation, NGOs, and private donors such as the Bill & Melinda Gates Foundation) often use reviews as a criterion for awarding funds.

Quality assurance in companies

Companies use peer reviews for quality assurance. Companies active in auditing or consulting carry out so-called peer reviews in which a project (an auditing or consulting engagement) of one company is examined by an expert or a team of experts from another company in the same industry on the basis of project documents and working papers. The experts then give an assessment of the project's quality in a report. Choosing an external company as the examiner ensures a high degree of independence between examiner and examinee. Companies therefore give the peer review more weight in quality assurance than, for example, an inter-office review (reviewer from another branch) or a local office review (reviewer from the same branch).

Regular external quality control (peer review) is now legally required for auditors and auditing firms; currently the assessment must be carried out every three years. An external quality control had to be performed for the first time by December 31, 2005. With the seventh amendment to the WPO (Professional Supervision Reform Act), the interval for the certificate of participation in a quality control for WP/vBP practices that do not audit listed companies is being extended from three to six years.

Quality assurance in the health sector

The peer review process is also carried out as part of the quality assurance program of the German statutory pension insurance. The objective is to ensure process quality in the rehabilitation facilities covered by the statutory pension insurance. The approach rests on a connection, demonstrated by scientific studies, between process quality during rehabilitation and the quality of the medical discharge reports. Concretely, experienced rehabilitation physicians in the respective specialty ("peers") assess randomly selected, anonymized medical discharge reports from other rehabilitation facilities (usually 20-25 per session) according to specific, previously defined criteria. Six sub-areas important for the rehabilitation process (anamnesis, diagnostics, therapy goals and therapy, clinical epicrisis, socio-medical epicrisis, as well as further measures and aftercare) are rated for the presence of deficiencies (none, slight, clear, serious) and assigned a score (10 points = very good, 0 points = very poor). The overall evaluation of the rehabilitation process results from the summary evaluations of the sub-areas. The peer review process is used both in the somatic indication areas (gastroenterology, cardiology, neurology, oncology, orthopedics/rheumatology, pulmonology, dermatology) and for psychosomatic diseases and addiction disorders, and is to be carried out every one to two years at the instigation of the German Pension Insurance Association.

Literature

  • Ann C. Weller: Editorial Peer Review: Its Strengths and Weaknesses. ASIS&T, 2001, ISBN 1-57387-100-1 (overview of studies on the peer review system from different subject areas from 1945 to 1997).
  • Thomas Gold : New Ideas in Science. In: Journal of Scientific Exploration. Volume 3, 1989, No. 2, pp. 103-112.
  • Gerhard Fröhlich: “Informed Peer Review” - compensation of errors and distortions? In: From quality assurance in teaching to quality development as a principle of university management. University Rectors' Conference, Bonn 2006, pp. 193–204 (PDF) .
  • Gerhard Fröhlich: Peer Review put to the test in science research. In: Medizin – Bibliothek – Information. Volume 3, 2003, No. 2, pp. 33–39 (PDF) (Memento from January 11, 2005 in the Internet Archive).
  • Stefan Hornbostel, Meike Olbrecht: Peer Review in the DFG: Die Fachkollegiaten. iFQ Working Paper No 2, Bonn 2007, ISSN  1864-2799 (PDF) .
  • Stefan Hornbostel, Dagmar Simon (Eds.): How much (in) transparency is necessary? - Peer Review Revisited. iFQ Working Paper No 1. Bonn 2006, ISSN  1864-2799 (PDF) .
  • Heinrich Zankl: Forgers, swindlers, charlatans: fraud in research and science. Wiley-VCH, Weinheim 2003, ISBN 3-527-30710-9 .
  • Science between Evaluation and Innovation: A Conference on Peer Review (= Max Planck Forum. Volume 6). Munich 2003 (documentation of a conference of the Max Planck Society and the German Research Foundation ).
  • Hans-Hermann Dubben , Hans-Peter Beck-Bornholdt : Unbalanced reporting in medical science. Institute for General Practice at the University Medical Center Hamburg-Eppendorf, Hamburg 2004 (PDF) ( Memento from January 31, 2012 in the Internet Archive ).
  • Wissenschaftsrat: assessments in the science system . Position paper, Berlin 2017.

Individual evidence

  1. Maria Gutknecht-Gmeiner: External evaluation through peer review: Quality assurance and development in initial vocational training. Springer-Verlag, 2008.
  2. Naomi Oreskes, Erik M. Conway: Die Machiavellis der Wissenschaft (original: Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming). Weinheim 2014, p. XVIII.
  3. Ronald N. Kostoff: Research Program Peer Review: Purposes, Principles, Practices, Protocols (PDF; 852 kB) . Office of Naval Research, Arlington, VA, (Report) 2004, p. 23.
  4. Caspar Hirschi: How peer review disciplines science.
  5. Irving E. Rockwood: Peer review: more interesting than you think . In: Choice 44.2007,9, p. 1436.
  6. Caspar Hirschi : Scandal experts, expert scandals. On the history of a contemporary problem . Matthes & Seitz, Berlin 2018, ISBN 978-3-95757-525-8 , The power of an invented tradition, p. 304 .
  7. Predatory publishers harm science. Website of the Leibniz Association. Retrieved December 12, 2019.
  8. John Bohannon: Who's Afraid of Peer Review? In: Science. Volume 342, 2013, pp. 60–65 (online).
  9. Dan Vergano: Fake Cancer Study Spotlights Bogus Science Journals. National Geographic, October 4, 2013.
  10. Alfred Kieser: The barrel ideology of research. Academic rankings. In: Frankfurter Allgemeine Zeitung. June 11, 2010, accessed January 9, 2012.
  11. John P. A. Ioannidis: Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials? In: Philosophy, Ethics, and Humanities in Medicine. Volume 3, 2008, No. 1, p. 14.
  12. John P. A. Ioannidis: Why Most Published Research Findings Are False. In: PLoS Medicine. Volume 2, No. 8, 2005, p. e124, doi:10.1371/journal.pmed.0020124, PMID 16060722, PMC 1182327 (free full text).
  13. T. Goertzel: Conspiracy theories in science. In: EMBO Reports. Volume 11, Number 7, July 2010, pp. 493–499, doi:10.1038/embor.2010.84, PMID 20539311, PMC 2897118 (free full text).
  14. Ruth Williams: The Benefits of Rejection. In: The Scientist. October 11, 2012.
  15. Reproducibility of studies: The psychologist market recognizes good research. Retrieved February 27, 2016.
  16. Caspar Hirschi: Scandal experts, expert scandals. On the history of a contemporary problem . Matthes & Seitz, Berlin 2018, ISBN 978-3-95757-525-8 , How well does peer review work ?, p. 318 .
  17. Caspar Hirschi: Scandal experts, expert scandals. On the history of a contemporary problem . Matthes & Seitz, Berlin 2018, ISBN 978-3-95757-525-8 , How well does peer review work ?, p. 319-320 .
  18. Caspar Hirschi: Scandal experts, expert scandals. On the history of a contemporary problem . Matthes & Seitz, Berlin 2018, ISBN 978-3-95757-525-8 , Science as a representative public, p. 324-325 .
  19. Official website of Philica.
  20. Official website of nabój.
  21. Overview: Nature's trial of open peer review. Retrieved June 11, 2009.
  22. Promoting reproducibility with registered reports. In: Nature Human Behaviour. Volume 1, 2017, p. 0034, doi:10.1038/s41562-016-0034.
  24. Deutsche Rentenversicherung - Peer Review Process. Retrieved May 26, 2020.
  25. Peer Review. In: German Medical Association. Retrieved February 27, 2016.
  26. (online) (Memento from October 9, 2010 in the Internet Archive). (Instead of anonymous assessment by specialists, which promotes conformity, Gold calls for a science court with scientists from different fields of one faculty.)