Impact Factor

from Wikipedia, the free encyclopedia

The Impact Factor (IF), or more precisely the Journal Impact Factor (JIF; German: Impact-Faktor), is a calculated number whose magnitude reflects the influence of a scientific journal. It is used for the bibliometric comparison of different journals. The impact factor is not a measure of the quality of the articles in a journal; rather, it indicates how often the articles in a particular journal are cited in other publications.

The impact factor indicates how often, on average, an article published in a given journal is cited by other scientific articles per year. In practice, impact factors are often used to assess scientific publication performance.

Definition of terms

Because of several renamings and similar products from different providers, a clarification of terms is necessary. The term impact factor generally describes a measure of the influence of scientific journals. The best-known product that follows this idea is the Clarivate Analytics Impact Factor (formerly ISI Impact Factor, later Thomson Reuters Impact Factor).

The Institute for Scientific Information (ISI), now part of Clarivate Analytics, first calculated the impact factor of journals in the 1960s and used it internally for the Science Citation Index. The impact factor is now determined from two article databases, the Social Sciences Citation Index (for social-science subjects) and the Science Citation Index (for medicine, technology and the natural sciences). Both databases are provided by Clarivate Analytics and are together known as the Web of Science. The associated factors are published annually in the Journal Citation Reports (JCR) in two editions (Science Edition and Social Sciences Edition). Academic institutions must pay a license fee to use the Journal Citation Reports.

Strengths

One of the strengths of the impact factor is that it is easy to understand and quickly available. It is recorded centrally and published online in the Journal Citation Reports. In addition, publishers use it on their websites to advertise their journals.

But it is not only a "quantitative assessment parameter, but also a veritable economic and influencing factor: libraries base their acquisitions on the IF, governments use the IF to determine the performance of their research institutions, scientists publish in journals with the highest possible IF values, and committees in turn judge the quality of publications according to IF criteria." Neuroscientists at the University of Lübeck examined the subjective value of the IF for scientists and came to the conclusion that the brain's reward center is activated in anticipation of a high impact factor.

Evaluation of journals and scientists

The Impact Factor (IF) is mainly used in the natural sciences and medicine , but increasingly also in other specialist areas. It is not suitable for comparing large specialist disciplines with many researchers and publication organs and thus higher citation frequencies with smaller disciplines. Only citations within one discipline, i.e. from thematically similar journals, should be compared. In addition to the frequency of citation, the average duration for which an article was cited also indicates the long-term significance of individual publications. It results from the half-life of an article ( cited half-life ) also from the ISI. In modern and fast-moving disciplines such as molecular biology, the value for most journals is less than five years. In disciplines such as biological systematics, the journals of which have a longer-term claim, more likely over five, often over ten years. When comparing scientific publication performance, it is therefore legitimate to multiply the impact factor by the value for cited half-life : This compensates for the lower citation frequency in some areas of science due to the longer half-life of the articles.

The magnitude of the impact factor can be seen in two examples from journals in the field of ecology: the Wiley journal Diversity and Distributions had an impact factor of 4.83 in 2011. The journal Ecology Letters, published by the same publisher, had an IF of 17.56 in 2011.

Scientists around the world use the impact factor, particularly in medical and scientific research, to evaluate research achievements quantitatively, especially because the number thus determined promises objectivity. An additional bibliometric indicator for the quality of individual research, which avoids some specific problems of the impact factor, is the Science Impact Index (SII). It is also one of the citation-based metrics.

The Google search engine uses a similar approach: for the evaluation of web pages, Google uses an algorithm based on the frequency of links ("citations"); see PageRank. Eigenfactor uses this pattern to determine the most influential journals based on the frequency of citations. However, this evaluation can also be manipulated.

The impact factor does provide information about the frequency of citations, but not about the methodological quality of a journal. A journal rating is better suited for that purpose.

There are now several variants of the impact factor: in addition to the classic two-year impact factor, Thomson Scientific introduced a five-year impact factor. Variants based on Google's PageRank are the Eigenfactor Score and the SCImago Journal Rank; another alternative is the Source Normalized Impact per Paper (SNIP).

Calculation

The Journal Impact Factor (JIF) is calculated for a two-year period using the following formula:
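Written out, with y denoting the reporting year, A the number of citable items published in the two preceding years, and B the citations those items receive in year y:

$$\mathrm{JIF}_y \;=\; \frac{\text{citations in year } y \text{ to items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2} \;=\; \frac{B}{A}$$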

It follows that there cannot be an impact factor for a year that has not yet ended. Example: a journal published a total of 116 articles in the years 2006–2007 (A); in 2008, all publications of this journal from the two preceding years were cited a total of 224 times (B), giving the journal an impact factor of 1.931 for 2008 (B / A).
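The same arithmetic as a short, self-contained sketch (the function name is chosen here for illustration and is not part of any official toolkit):

```python
def journal_impact_factor(citations_in_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year JIF: citations received in the reporting year (B) divided by
    the number of citable items published in the two preceding years (A)."""
    return citations_in_year / citable_items_prev_two_years

articles_2006_07 = 116  # A: citable items published in 2006-2007
citations_2008 = 224    # B: citations in 2008 to those items

print(round(journal_impact_factor(citations_2008, articles_2006_07), 3))  # 1.931
```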

It should be noted that the reference sets in the numerator and denominator differ. For example, editorial contributions and letters to the editor are not counted among the publications in the denominator, even though they are cited and those citations are not excluded from the numerator.

JIFs are only calculated for journals that are included in the Science Citation Index or the Social Sciences Citation Index. Many humanities journals do not have a JIF for this reason.

The journals in a subject area are ranked according to their impact factor and can only be compared within one category, not between categories.

Criticism

The Journal Impact Factor is controversial. The criticism relates primarily to its use as a measure of quality, but also to the basic method of calculation, its poor independent reproducibility and the lack of comparability between different subject areas (see above).

Type of calculation

The way in which the JIF is calculated is often viewed critically: a few articles receive most of the citations, while many other articles are cited rarely or not at all. A citation analysis of eleven science journals (including Nature and Science) revealed that around 75% of the articles they contain are cited less often than their IF suggests. Almost 20% of all publications in Nature and Science are never cited.

Since the distribution of citations is usually highly skewed, the mean value used in the calculation is not a suitable summary measure, as the sketch below illustrates.
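A small example with invented citation counts shows how the mean overstates what a typical article achieves:

```python
# Hypothetical citation counts for ten articles in one journal: a few highly
# cited papers dominate, most are cited rarely or never.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 15, 92]

mean = sum(citations) / len(citations)           # 12.0 - what an IF-style average reports
median = sorted(citations)[len(citations) // 2]  # 2    - what a typical article receives

print(mean, median)
```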

Some publishers have since announced that they will no longer mention the IF in their journals and advertising material.

Manipulability

Another point of criticism is the manipulability of the impact factor. For the frequency with which a journal is cited, ISI counts all citations, regardless of whether they refer to articles, editorials, meeting reports, letters or conference proceedings. Which publications are counted as "articles" in the denominator, however, can be negotiated between the journal and the ISI.

Journals can manipulate their own impact factor, for example by encouraging authors to cite the journal's own publications preferentially in their reference lists. Advance publications also artificially increase the number of citations and thus the numerator.

Whether self-citations, i.e. citations of one's own work, should be taken into account when calculating the impact factor, as is currently the case, is controversial. There is also the risk that citation cartels will form.

As a means of evaluating research performance

The German Research Foundation is very critical of the increasing use of the impact factor to assess scientific quality. For example, "the frequency of citations obviously depends not only on the reputation of a journal or a working group, but above all on the size of the group of scientists who are interested in the topic. Specialized journals have lower impact factors than those with a broad readership; in a small subject, different quantitative standards apply than in a large one."

The German Council of Science and Humanities (Wissenschaftsrat) shares this criticism and demands that "more quality-related rather than quantity-related criteria should be taken into account in performance evaluation".

The Association of the Scientific Medical Societies in Germany (AWMF) calls for alternatives to the IF and criticizes that the IF is not a suitable instrument for evaluating research performance and must be replaced by suitable indicators as quickly as possible.

In the San Francisco Declaration on Research Assessment (DORA), around 13,000 scientists and organizations protest against the impact factor as a central means of assessing scientific performance and demand: "Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions." DORA was initiated in 2012 by the American Society for Cell Biology. Its signatories also include universities (including the League of European Research Universities) and funding organizations such as the FWF.

Nobel laureate Randy Schekman renewed his criticism in 2016 in a lecture at the University of Regensburg: the Journal Impact Factor is an "artificial number" and not suitable for assessing scientific quality: "We have to leave this nonsense behind and develop other methods to measure creative performance."

Literature

  • N. M. Meenen: The impact factor - a reliable scientometric parameter? In: Trauma Surgery. 23, No. 4, 1997, pp. 128-134, PMID 9381604.
  • A. Hakansson: The Impact Factor - a dubious measure of scientific quality. In: Scandinavian Journal of Primary Health Care. 23, No. 4, 2005, pp. 193-194.
  • Ulrich Herb, Daniel Beucke: The future of impact measurement. Social media, usage and quotes on the world wide web. In: Science Management. Magazine for innovation. 19, No. 4, 2013, pp. 22-25, doi:10.5281/zenodo.7696.
  • S. Lehrl: The impact factor as an evaluation criterion for scientific achievements - the right to equal opportunities. In: Radiation Therapy and Oncology. 175, 1999, pp. 141-153.
  • Arnd Krüger: Where does German sports science research stand? Impact factor, half-life, actuality and immediacy index. In: Leistungssport. 28, No. 2, 1998, pp. 30-34.
  • S. N. Groesser: Dynamics of Journal Impact Factors. In: Systems Research and Behavioral Science. 29, No. 6, 2012, pp. 624-644 (abstract).
  • W. Golder: The Impact Factor: A Critical Analysis. In: RöFo - Advances in the field of X-rays and imaging processes. 169, 1998, pp. 220-226.
  • T. Opthof: Sense and Nonsense About the Impact Factor. In: Cardiovasc Res. 33, No. 1, 1997, pp. 1-7, doi:10.1016/S0008-6363(96)00215-5.
  • Per O. Seglen: Why the impact factor of journals should not be used for evaluating research. In: British Medical Journal. 314, 1997, p. 497.
  • M. West: Impactopoly. In: Laborjournal. No. 11, 2006, pp. 40-45 (PDF).
  • Vladimir Pislyakov: Comparing two "thermometers": Impact factors of 20 leading economic journals according to Journal Citation Reports and Scopus. In: Scientometrics. 79, No. 3, 2009, pp. 541-550, doi:10.1007/s11192-007-2016-1 (PDF).
  • J. Stegmann: How to evaluate Journal impact factors. In: Nature. 390, No. 6660, 1997, p. 550, doi:10.1038/37463.
  • Dirk Schoonbaert, Gilbert Roelants: Impact takes precedence over interest. In: Nature. 391, No. 6664, 1998, p. 222, doi:10.1038/34519.
  • Darren Greenwood: Reliability of Journal Impact Factor Rankings. In: BMC Medical Research Methodology. 7, No. 1, 2007, p. 48, doi:10.1186/1471-2288-7-48.
  • Borja Gonzalez-Pereira, Vicente Guerrero-Bote, Felix Moya-Anegon: The SJR indicator: A new indicator of journals' scientific prestige. Conference paper, December 2009, arXiv.
  • Petra Heidenkummer: When the opaque becomes a measure: Problems and fluctuations in the impact factor. In: bit online. 16, No. 3, 2013, pp. 201-210 (PDF).
  • U. Böhme, S. Tesch: The measurement of the specialist literature. In: Nachr. Chem. 61, No. 9, 2013, pp. 905-908, doi:10.1002/nadc.201390279.
  • U. Böhme, S. Tesch: The dark side of bibliometrics. In: Nachr. Chem. 65, No. 10, 2017, pp. 1024-1027, doi:10.1002/nadc.20174065326.

References

  1. Robert Czepel: Can scientific quality be measured? (science.ORF.at), accessed on September 5, 2016.
  2. F. M. Paulus, L. Rademacher, T. A. J. Schäfer, L. Müller-Pinzler, S. Krach: Journal Impact Factor Shapes Scientists' Reward Signal in the Prospect of Publication. In: PLOS ONE, November 2015.
  3. N. M. Meenen: The Impact Factor - a reliable scientometric parameter? In: Trauma Surgery. 23, No. 4, 1997, pp. 128-134, PMID 9381604.
  4. Diversity and Distributions. In: Wiley Online Library. Retrieved November 23, 2012.
  5. Ecology Letters. In: Wiley Online Library. Retrieved November 23, 2012.
  6. Wolfgang G. Stock: The inflation of impact factors of scientific journals. In: ChemPhysChem. 10, No. 13, 2009, pp. 2193-2196, doi:10.1002/cphc.200900495.
  7. http://www.elsevier.com/wps/find/editorshome.editors/biblio (Memento from August 18, 2012 in the Internet Archive)
  8. Brunel University: SJR and SNIP journal metrics (Memento from January 9, 2017 in the Internet Archive)
  9. Journal Citation Ranking and Quartile Scores. In: Research Assessment, accessed February 3, 2014.
  10. B. Brembs, K. Button, M. Munafò: Deep impact: unintended consequences of journal rank. In: Front Hum Neurosci. 7, No. 291, 2013, doi:10.3389/fnhum.2013.00291.
  11. Joint Committee on Quantitative Assessment of Research (2008): Citation Statistics. A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS).
  12. M. Rossner, H. Van Epps, E. Hill: Show me the data. In: J Cell Biol. 179, No. 6, 2007, doi:10.1083/jcb.200711140.
  13. Ewen Callaway: Beat it, impact factor! Publishing elite turns against controversial metric. In: Nature. 535, pp. 210-211 (July 14, 2016).
  14. The PLoS Medicine Editors: The Impact Factor Game. In: PLoS Med. 3, No. e291, 2006, doi:10.1371/journal.pmed.0030291.
  15. A. B. L. Tort, Z. H. Targino, O. B. Amaral: Rising Publication Delays Inflate Journal Impact Factors. In: PLoS ONE. 7, No. e53374, 2012, doi:10.1371/journal.pone.0053374.
  16. Safeguarding Good Scientific Practice (PDF; German Research Foundation, memorandum adopted on July 3, 2013), accessed on January 9, 2015.
  17. "Recommendations on Scientific Integrity" (PDF; Wissenschaftsrat, Position Paper 2015), accessed on July 7, 2016.
  18. AWMF position paper on the evaluation of medical research performance (GMS Ger Med Sci 2014; 12: Doc11).
  19. Louisa Knobloch: Small molecules, great science. (Mittelbayerische Zeitung, March 10, 2016), accessed on July 7, 2016.